00:00:00.001 Started by upstream project "autotest-nightly" build number 3920
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3295
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.103 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/crypto-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.104 The recommended git tool is: git
00:00:00.104 using credential 00000000-0000-0000-0000-000000000002
00:00:00.105 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/crypto-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.144 Fetching changes from the remote Git repository
00:00:00.145 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.186 Using shallow fetch with depth 1
00:00:00.186 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.186 > git --version # timeout=10
00:00:00.218 > git --version # 'git version 2.39.2'
00:00:00.218 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.244 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.244 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.168 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.180 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.192 Checking out Revision c396a3cd44e4090a57fb151c18fefbf4a9bd324b (FETCH_HEAD)
00:00:05.192 > git config core.sparsecheckout # timeout=10
00:00:05.202 > git read-tree -mu HEAD # timeout=10
00:00:05.219 > git checkout -f c396a3cd44e4090a57fb151c18fefbf4a9bd324b # timeout=5
00:00:05.243 Commit message: "jenkins/jjb-config: Use freebsd14 for the pkgdep-freebsd job"
00:00:05.243 > git rev-list --no-walk c396a3cd44e4090a57fb151c18fefbf4a9bd324b # timeout=10
00:00:05.368 [Pipeline] Start of Pipeline
00:00:05.385 [Pipeline] library
00:00:05.387 Loading library shm_lib@master
00:00:05.387 Library shm_lib@master is cached. Copying from home.
00:00:05.408 [Pipeline] node
00:00:05.422 Running on WFP19 in /var/jenkins/workspace/crypto-phy-autotest
00:00:05.424 [Pipeline] {
00:00:05.437 [Pipeline] catchError
00:00:05.438 [Pipeline] {
00:00:05.457 [Pipeline] wrap
00:00:05.472 [Pipeline] {
00:00:05.479 [Pipeline] stage
00:00:05.481 [Pipeline] { (Prologue)
00:00:05.691 [Pipeline] sh
00:00:05.972 + logger -p user.info -t JENKINS-CI
00:00:05.991 [Pipeline] echo
00:00:05.993 Node: WFP19
00:00:06.001 [Pipeline] sh
00:00:06.303 [Pipeline] setCustomBuildProperty
00:00:06.319 [Pipeline] echo
00:00:06.321 Cleanup processes
00:00:06.327 [Pipeline] sh
00:00:06.613 + sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:06.613 3324864 sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:06.628 [Pipeline] sh
00:00:06.912 ++ sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:06.912 ++ grep -v 'sudo pgrep'
00:00:06.912 ++ awk '{print $1}'
00:00:06.912 + sudo kill -9
00:00:06.912 + true
00:00:06.926 [Pipeline] cleanWs
00:00:06.936 [WS-CLEANUP] Deleting project workspace...
00:00:06.936 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.945 [WS-CLEANUP] done
00:00:06.948 [Pipeline] setCustomBuildProperty
00:00:06.960 [Pipeline] sh
00:00:07.238 + sudo git config --global --replace-all safe.directory '*'
00:00:07.317 [Pipeline] httpRequest
00:00:07.352 [Pipeline] echo
00:00:07.354 Sorcerer 10.211.164.101 is alive
00:00:07.360 [Pipeline] httpRequest
00:00:07.365 HttpMethod: GET
00:00:07.365 URL: http://10.211.164.101/packages/jbp_c396a3cd44e4090a57fb151c18fefbf4a9bd324b.tar.gz
00:00:07.366 Sending request to url: http://10.211.164.101/packages/jbp_c396a3cd44e4090a57fb151c18fefbf4a9bd324b.tar.gz
00:00:07.368 Response Code: HTTP/1.1 200 OK
00:00:07.369 Success: Status code 200 is in the accepted range: 200,404
00:00:07.369 Saving response body to /var/jenkins/workspace/crypto-phy-autotest/jbp_c396a3cd44e4090a57fb151c18fefbf4a9bd324b.tar.gz
00:00:08.286 [Pipeline] sh
00:00:08.571 + tar --no-same-owner -xf jbp_c396a3cd44e4090a57fb151c18fefbf4a9bd324b.tar.gz
00:00:08.583 [Pipeline] httpRequest
00:00:08.613 [Pipeline] echo
00:00:08.614 Sorcerer 10.211.164.101 is alive
00:00:08.619 [Pipeline] httpRequest
00:00:08.622 HttpMethod: GET
00:00:08.623 URL: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:00:08.623 Sending request to url: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:00:08.625 Response Code: HTTP/1.1 200 OK
00:00:08.626 Success: Status code 200 is in the accepted range: 200,404
00:00:08.626 Saving response body to /var/jenkins/workspace/crypto-phy-autotest/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:00:29.062 [Pipeline] sh
00:00:29.346 + tar --no-same-owner -xf spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:00:32.647 [Pipeline] sh
00:00:32.927 + git -C spdk log --oneline -n5
00:00:32.927 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata.
00:00:32.927 fc2398dfa raid: clear base bdev configure_cb after executing
00:00:32.927 5558f3f50 raid: complete bdev_raid_create after sb is written
00:00:32.927 d005e023b raid: fix empty slot not updated in sb after resize
00:00:32.927 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set
00:00:32.942 [Pipeline] }
00:00:32.959 [Pipeline] // stage
00:00:32.967 [Pipeline] stage
00:00:32.969 [Pipeline] { (Prepare)
00:00:32.985 [Pipeline] writeFile
00:00:33.005 [Pipeline] sh
00:00:33.287 + logger -p user.info -t JENKINS-CI
00:00:33.303 [Pipeline] sh
00:00:33.589 + logger -p user.info -t JENKINS-CI
00:00:33.605 [Pipeline] sh
00:00:33.893 + cat autorun-spdk.conf
00:00:33.893 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:33.893 SPDK_TEST_BLOCKDEV=1
00:00:33.893 SPDK_TEST_ISAL=1
00:00:33.893 SPDK_TEST_CRYPTO=1
00:00:33.893 SPDK_TEST_REDUCE=1
00:00:33.893 SPDK_TEST_VBDEV_COMPRESS=1
00:00:33.893 SPDK_RUN_ASAN=1
00:00:33.893 SPDK_RUN_UBSAN=1
00:00:33.893 SPDK_TEST_ACCEL=1
00:00:33.901 RUN_NIGHTLY=1
00:00:33.907 [Pipeline] readFile
00:00:33.938 [Pipeline] withEnv
00:00:33.941 [Pipeline] {
00:00:33.961 [Pipeline] sh
00:00:34.247 + set -ex
00:00:34.247 + [[ -f /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf ]]
00:00:34.247 + source /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf
00:00:34.247 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:34.247 ++ SPDK_TEST_BLOCKDEV=1
00:00:34.247 ++ SPDK_TEST_ISAL=1
00:00:34.247 ++ SPDK_TEST_CRYPTO=1
00:00:34.247 ++ SPDK_TEST_REDUCE=1
00:00:34.247 ++ SPDK_TEST_VBDEV_COMPRESS=1
00:00:34.247 ++ SPDK_RUN_ASAN=1
00:00:34.247 ++ SPDK_RUN_UBSAN=1
00:00:34.247 ++ SPDK_TEST_ACCEL=1
00:00:34.247 ++ RUN_NIGHTLY=1
00:00:34.247 + case $SPDK_TEST_NVMF_NICS in
00:00:34.247 + DRIVERS=
00:00:34.247 + [[ -n '' ]]
00:00:34.247 + exit 0
00:00:34.258 [Pipeline] }
00:00:34.281 [Pipeline] // withEnv
00:00:34.288 [Pipeline] }
00:00:34.309 [Pipeline] // stage
00:00:34.321 [Pipeline] catchError
00:00:34.323 [Pipeline] {
00:00:34.341 [Pipeline] timeout
00:00:34.342 Timeout set to expire in 1 hr 0 min
00:00:34.344 [Pipeline] {
00:00:34.362 [Pipeline] stage
00:00:34.365 [Pipeline] { (Tests)
00:00:34.382 [Pipeline] sh
00:00:34.666 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/crypto-phy-autotest
00:00:34.666 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest
00:00:34.666 + DIR_ROOT=/var/jenkins/workspace/crypto-phy-autotest
00:00:34.666 + [[ -n /var/jenkins/workspace/crypto-phy-autotest ]]
00:00:34.666 + DIR_SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:34.666 + DIR_OUTPUT=/var/jenkins/workspace/crypto-phy-autotest/output
00:00:34.666 + [[ -d /var/jenkins/workspace/crypto-phy-autotest/spdk ]]
00:00:34.666 + [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/output ]]
00:00:34.666 + mkdir -p /var/jenkins/workspace/crypto-phy-autotest/output
00:00:34.666 + [[ -d /var/jenkins/workspace/crypto-phy-autotest/output ]]
00:00:34.666 + [[ crypto-phy-autotest == pkgdep-* ]]
00:00:34.666 + cd /var/jenkins/workspace/crypto-phy-autotest
00:00:34.666 + source /etc/os-release
00:00:34.666 ++ NAME='Fedora Linux'
00:00:34.666 ++ VERSION='38 (Cloud Edition)'
00:00:34.666 ++ ID=fedora
00:00:34.666 ++ VERSION_ID=38
00:00:34.666 ++ VERSION_CODENAME=
00:00:34.666 ++ PLATFORM_ID=platform:f38
00:00:34.666 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:34.666 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:34.666 ++ LOGO=fedora-logo-icon
00:00:34.666 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:34.666 ++ HOME_URL=https://fedoraproject.org/
00:00:34.666 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:34.666 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:34.666 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:34.666 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:34.666 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:34.666 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:34.666 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:34.666 ++ SUPPORT_END=2024-05-14
00:00:34.666 ++ VARIANT='Cloud Edition'
00:00:34.666 ++ VARIANT_ID=cloud
00:00:34.666 + uname -a
00:00:34.666 Linux spdk-wfp-19 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:34.666 + sudo /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status
00:00:38.879 Hugepages
00:00:38.879 node hugesize free / total
00:00:38.879 node0 1048576kB 0 / 0
00:00:38.879 node0 2048kB 0 / 0
00:00:38.879 node1 1048576kB 0 / 0
00:00:38.879 node1 2048kB 0 / 0
00:00:38.879
00:00:38.879 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:38.879 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:00:38.879 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:00:38.879 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:00:38.879 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:00:38.879 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:00:38.879 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:00:38.879 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:00:38.879 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:00:38.879 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:00:38.879 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:00:38.879 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:00:38.879 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:00:38.879 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:00:38.879 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:00:38.879 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:00:38.879 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:00:38.879 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:00:38.879 + rm -f /tmp/spdk-ld-path
00:00:38.879 + source autorun-spdk.conf
00:00:38.879 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:38.879 ++ SPDK_TEST_BLOCKDEV=1
00:00:38.879 ++ SPDK_TEST_ISAL=1
00:00:38.879 ++ SPDK_TEST_CRYPTO=1
00:00:38.879 ++ SPDK_TEST_REDUCE=1
00:00:38.879 ++ SPDK_TEST_VBDEV_COMPRESS=1
00:00:38.879 ++ SPDK_RUN_ASAN=1
00:00:38.879 ++ SPDK_RUN_UBSAN=1
00:00:38.879 ++ SPDK_TEST_ACCEL=1
00:00:38.879 ++ RUN_NIGHTLY=1
00:00:38.879 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:38.879 + [[ -n '' ]]
00:00:38.879 + sudo git config --global --add safe.directory /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:38.879 + for M in /var/spdk/build-*-manifest.txt
00:00:38.879 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:38.879 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/crypto-phy-autotest/output/
00:00:38.879 + for M in /var/spdk/build-*-manifest.txt
00:00:38.879 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:38.879 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/crypto-phy-autotest/output/
00:00:38.879 ++ uname
00:00:38.879 + [[ Linux == \L\i\n\u\x ]]
00:00:38.879 + sudo dmesg -T
00:00:38.879 + sudo dmesg --clear
00:00:38.879 + dmesg_pid=3325930
00:00:38.879 + [[ Fedora Linux == FreeBSD ]]
00:00:38.879 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:38.879 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:38.879 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:38.879 + [[ -x /usr/src/fio-static/fio ]]
00:00:38.879 + export FIO_BIN=/usr/src/fio-static/fio
00:00:38.879 + FIO_BIN=/usr/src/fio-static/fio
00:00:38.879 + sudo dmesg -Tw
00:00:38.879 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\c\r\y\p\t\o\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:38.879 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:38.879 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:38.879 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:38.879 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:38.879 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:38.879 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:38.879 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:38.879 + spdk/autorun.sh /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf
00:00:38.879 Test configuration:
00:00:38.880 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:38.880 SPDK_TEST_BLOCKDEV=1
00:00:38.880 SPDK_TEST_ISAL=1
00:00:38.880 SPDK_TEST_CRYPTO=1
00:00:38.880 SPDK_TEST_REDUCE=1
00:00:38.880 SPDK_TEST_VBDEV_COMPRESS=1
00:00:38.880 SPDK_RUN_ASAN=1
00:00:38.880 SPDK_RUN_UBSAN=1
00:00:38.880 SPDK_TEST_ACCEL=1
00:00:38.880 RUN_NIGHTLY=1
10:42:45 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh
00:00:38.880 10:42:45 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:00:38.880 10:42:45 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:00:38.880 10:42:45 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:00:38.880 10:42:45 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:38.880 10:42:45 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:38.880 10:42:45 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:38.880 10:42:45 -- paths/export.sh@5 -- $ export PATH
00:00:38.880 10:42:45 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:38.880 10:42:45 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:00:38.880 10:42:45 -- common/autobuild_common.sh@447 -- $ date +%s
00:00:38.880 10:42:45 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721896965.XXXXXX
00:00:38.880 10:42:45 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721896965.COPhYX
00:00:38.880 10:42:45 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:00:38.880 10:42:45 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:00:38.880 10:42:45 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/'
00:00:38.880 10:42:45 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp'
00:00:38.880 10:42:45 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:00:38.880 10:42:45 -- common/autobuild_common.sh@463 -- $ get_config_params
00:00:38.880 10:42:45 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:00:38.880 10:42:45 -- common/autotest_common.sh@10 -- $ set +x
00:00:38.880 10:42:45 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-asan --enable-coverage --with-ublk'
00:00:38.880 10:42:45 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:00:38.880 10:42:45 -- pm/common@17 -- $ local monitor
00:00:38.880 10:42:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:38.880 10:42:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:38.880 10:42:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:38.880 10:42:45 -- pm/common@21 -- $ date +%s
00:00:38.880 10:42:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:38.880 10:42:45 -- pm/common@21 -- $ date +%s
00:00:38.880 10:42:45 -- pm/common@25 -- $ sleep 1
00:00:38.880 10:42:45 -- pm/common@21 -- $ date +%s
00:00:38.880 10:42:45 -- pm/common@21 -- $ date +%s
00:00:38.880 10:42:45 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721896965
00:00:38.880 10:42:45 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721896965
00:00:38.880 10:42:45 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721896965
00:00:38.880 10:42:45 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721896965
00:00:38.880 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721896965_collect-vmstat.pm.log
00:00:38.880 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721896965_collect-cpu-load.pm.log
00:00:38.880 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721896965_collect-cpu-temp.pm.log
00:00:38.880 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721896965_collect-bmc-pm.bmc.pm.log
00:00:39.815 10:42:46 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:00:39.815 10:42:46 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:39.815 10:42:46 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:39.815 10:42:46 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:39.815 10:42:46 -- spdk/autobuild.sh@16 -- $ date -u
00:00:39.815 Thu Jul 25 08:42:46 AM UTC 2024
00:00:39.815 10:42:46 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:39.815 v24.09-pre-321-g704257090
00:00:39.815 10:42:46 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:00:39.815 10:42:46 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:00:39.815 10:42:46 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:00:39.815 10:42:46 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:00:39.815 10:42:46 -- common/autotest_common.sh@10 -- $ set +x
00:00:39.815 ************************************
00:00:39.815 START TEST asan
00:00:39.815 ************************************
00:00:39.815 10:42:46 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan'
00:00:39.815 using asan
00:00:39.815
00:00:39.815 real 0m0.001s
00:00:39.815 user 0m0.000s
00:00:39.815 sys 0m0.000s
00:00:39.815 10:42:46 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:00:39.815 10:42:46 asan -- common/autotest_common.sh@10 -- $ set +x
00:00:39.815 ************************************
00:00:39.815 END TEST asan
00:00:39.815 ************************************
00:00:40.073 10:42:46 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:40.073 10:42:46 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:40.073 10:42:46 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:00:40.073 10:42:46 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:00:40.073 10:42:46 -- common/autotest_common.sh@10 -- $ set +x
00:00:40.073 ************************************
00:00:40.073 START TEST ubsan
00:00:40.073 ************************************
00:00:40.073 10:42:46 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:00:40.073 using ubsan
00:00:40.073
00:00:40.073 real 0m0.000s
00:00:40.073 user 0m0.000s
00:00:40.073 sys 0m0.000s
00:00:40.073 10:42:46 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:00:40.073 10:42:46 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:00:40.073 ************************************
00:00:40.073 END TEST ubsan
00:00:40.073 ************************************
00:00:40.073 10:42:47 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:40.073 10:42:47 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:40.073 10:42:47 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:40.073 10:42:47 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:00:40.073 10:42:47 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:00:40.073 10:42:47 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:00:40.073 10:42:47 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:00:40.073 10:42:47 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:00:40.073 10:42:47 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared
00:00:40.073 Using default SPDK env in /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk
00:00:40.073 Using default DPDK in /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build
00:00:40.640 Using 'verbs' RDMA provider
00:00:56.884 Configuring ISA-L (logfile: /var/jenkins/workspace/crypto-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:11.767 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/crypto-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:11.767 Creating mk/config.mk...done.
00:01:11.767 Creating mk/cc.flags.mk...done.
00:01:11.767 Type 'make' to build.
00:01:11.767 10:43:17 -- spdk/autobuild.sh@69 -- $ run_test make make -j112
00:01:11.767 10:43:17 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:11.767 10:43:17 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:11.767 10:43:17 -- common/autotest_common.sh@10 -- $ set +x
00:01:11.767 ************************************
00:01:11.767 START TEST make
00:01:11.767 ************************************
00:01:11.767 10:43:17 make -- common/autotest_common.sh@1125 -- $ make -j112
00:01:11.767 make[1]: Nothing to be done for 'all'.
00:01:50.517 The Meson build system
00:01:50.517 Version: 1.3.1
00:01:50.517 Source dir: /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk
00:01:50.517 Build dir: /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp
00:01:50.517 Build type: native build
00:01:50.517 Program cat found: YES (/usr/bin/cat)
00:01:50.517 Project name: DPDK
00:01:50.517 Project version: 24.03.0
00:01:50.517 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:50.517 C linker for the host machine: cc ld.bfd 2.39-16
00:01:50.517 Host machine cpu family: x86_64
00:01:50.518 Host machine cpu: x86_64
00:01:50.518 Message: ## Building in Developer Mode ##
00:01:50.518 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:50.518 Program check-symbols.sh found: YES (/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:50.518 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:50.518 Program python3 found: YES (/usr/bin/python3)
00:01:50.518 Program cat found: YES (/usr/bin/cat)
00:01:50.518 Compiler for C supports arguments -march=native: YES
00:01:50.518 Checking for size of "void *" : 8
00:01:50.518 Checking for size of "void *" : 8 (cached)
00:01:50.518 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:01:50.518 Library m found: YES
00:01:50.518 Library numa found: YES
00:01:50.518 Has header "numaif.h" : YES
00:01:50.518 Library fdt found: NO
00:01:50.518 Library execinfo found: NO
00:01:50.518 Has header "execinfo.h" : YES
00:01:50.518 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:50.518 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:50.518 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:50.518 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:50.518 Run-time dependency openssl found: YES 3.0.9
00:01:50.518 Run-time dependency libpcap found: YES 1.10.4
00:01:50.518 Has header "pcap.h" with dependency libpcap: YES
00:01:50.518 Compiler for C supports arguments -Wcast-qual: YES
00:01:50.518 Compiler for C supports arguments -Wdeprecated: YES
00:01:50.518 Compiler for C supports arguments -Wformat: YES
00:01:50.518 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:50.518 Compiler for C supports arguments -Wformat-security: NO
00:01:50.518 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:50.518 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:50.518 Compiler for C supports arguments -Wnested-externs: YES
00:01:50.518 Compiler for C supports arguments -Wold-style-definition: YES
00:01:50.518 Compiler for C supports arguments -Wpointer-arith: YES
00:01:50.518 Compiler for C supports arguments -Wsign-compare: YES
00:01:50.518 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:50.518 Compiler for C supports arguments -Wundef: YES
00:01:50.518 Compiler for C supports arguments -Wwrite-strings: YES
00:01:50.518 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:50.518 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:50.518 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:50.518 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:50.518 Program objdump found: YES (/usr/bin/objdump)
00:01:50.518 Compiler for C supports arguments -mavx512f: YES
00:01:50.518 Checking if "AVX512 checking" compiles: YES
00:01:50.518 Fetching value of define "__SSE4_2__" : 1
00:01:50.518 Fetching value of define "__AES__" : 1
00:01:50.518 Fetching value of define "__AVX__" : 1
00:01:50.518 Fetching value of define "__AVX2__" : 1
00:01:50.518 Fetching value of define "__AVX512BW__" : 1
00:01:50.518 Fetching value of define "__AVX512CD__" : 1
00:01:50.518 Fetching value of define "__AVX512DQ__" : 1
00:01:50.518 Fetching value of define "__AVX512F__" : 1
00:01:50.518 Fetching value of define "__AVX512VL__" : 1
00:01:50.518 Fetching value of define "__PCLMUL__" : 1
00:01:50.518 Fetching value of define "__RDRND__" : 1
00:01:50.518 Fetching value of define "__RDSEED__" : 1
00:01:50.518 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:50.518 Fetching value of define "__znver1__" : (undefined)
00:01:50.518 Fetching value of define "__znver2__" : (undefined)
00:01:50.518 Fetching value of define "__znver3__" : (undefined)
00:01:50.518 Fetching value of define "__znver4__" : (undefined)
00:01:50.518 Library asan found: YES
00:01:50.518 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:50.518 Message: lib/log: Defining dependency "log"
00:01:50.518 Message: lib/kvargs: Defining dependency "kvargs"
00:01:50.518 Message: lib/telemetry: Defining dependency "telemetry"
00:01:50.518 Library rt found: YES
00:01:50.518 Checking for function "getentropy" : NO
00:01:50.518 Message: lib/eal: Defining dependency "eal"
00:01:50.518 Message: lib/ring: Defining dependency "ring"
00:01:50.518 Message: lib/rcu: Defining dependency "rcu"
00:01:50.518 Message: lib/mempool: Defining dependency "mempool"
00:01:50.518 Message: lib/mbuf: Defining dependency "mbuf"
00:01:50.518 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:50.518 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:50.518 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:50.518 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:50.518 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:50.518 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:50.518 Compiler for C supports arguments -mpclmul: YES
00:01:50.518 Compiler for C supports arguments -maes: YES
00:01:50.518 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:50.518 Compiler for C supports arguments -mavx512bw: YES
00:01:50.518 Compiler for C supports arguments -mavx512dq: YES
00:01:50.518 Compiler for C supports arguments -mavx512vl: YES
00:01:50.518 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:50.518 Compiler for C supports arguments -mavx2: YES
00:01:50.518 Compiler for C supports arguments -mavx: YES
00:01:50.518 Message: lib/net: Defining dependency "net"
00:01:50.518 Message: lib/meter: Defining dependency "meter"
00:01:50.518 Message: lib/ethdev: Defining dependency "ethdev"
00:01:50.518 Message: lib/pci: Defining dependency "pci"
00:01:50.518 Message: lib/cmdline: Defining dependency "cmdline"
00:01:50.518 Message: lib/hash: Defining dependency "hash"
00:01:50.518 Message: lib/timer: Defining dependency "timer"
00:01:50.518 Message: lib/compressdev: Defining dependency "compressdev"
00:01:50.518 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:50.518 Message: lib/dmadev: Defining dependency "dmadev"
00:01:50.518 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:50.518 Message: lib/power: Defining dependency "power"
00:01:50.518 Message: lib/reorder: Defining dependency "reorder"
00:01:50.518 Message: lib/security: Defining dependency "security"
00:01:50.518 Has header "linux/userfaultfd.h" : YES
00:01:50.518 Has header "linux/vduse.h" : YES
00:01:50.518 Message: lib/vhost: Defining dependency "vhost"
00:01:50.518 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:50.518 Message: drivers/bus/auxiliary: Defining dependency "bus_auxiliary"
00:01:50.518 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:50.518 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:50.518 Compiler for C supports arguments -std=c11: YES
00:01:50.518 Compiler for C supports arguments -Wno-strict-prototypes: YES
00:01:50.518 Compiler for C supports arguments -D_BSD_SOURCE: YES
00:01:50.518 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES
00:01:50.518 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES
00:01:50.518 Run-time dependency libmlx5 found: YES 1.24.44.0
00:01:50.518 Run-time dependency libibverbs found: YES 1.14.44.0
00:01:50.518 Library mtcr_ul found: NO
00:01:50.518 Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_ESP" with dependencies libmlx5, libibverbs: YES
00:01:50.518 Header "infiniband/verbs.h" has symbol "IBV_RX_HASH_IPSEC_SPI" with dependencies libmlx5, libibverbs: YES
00:01:50.518 Header "infiniband/verbs.h" has symbol "IBV_ACCESS_RELAXED_ORDERING " with dependencies libmlx5, libibverbs: YES
00:01:50.518 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQE_RES_FORMAT_CSUM_STRIDX" with dependencies libmlx5, libibverbs: YES
00:01:50.518 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_MASK_TUNNEL_OFFLOADS" with dependencies libmlx5, libibverbs: YES
00:01:50.518 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_MPW_ALLOWED" with dependencies libmlx5, libibverbs: YES
00:01:50.518 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_CQE_128B_COMP" with dependencies libmlx5, libibverbs: YES
00:01:50.518 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQ_INIT_ATTR_FLAGS_CQE_PAD" with dependencies libmlx5, libibverbs: YES
00:01:50.518 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_flow_action_packet_reformat" with dependencies libmlx5, libibverbs: YES
00:01:50.518 Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_MPLS" with dependencies libmlx5, libibverbs: YES
00:01:50.518 Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAGS_PCI_WRITE_END_PADDING" with dependencies libmlx5, libibverbs: YES
00:01:50.518 Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAG_RX_END_PADDING" with dependencies libmlx5, libibverbs: NO
00:01:50.518 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_devx_port" with dependencies libmlx5, libibverbs: NO
00:01:50.518 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_port" with dependencies libmlx5, libibverbs: YES
00:01:50.518 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_ib_port" with dependencies libmlx5, libibverbs: YES
00:01:55.784 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_create" with dependencies libmlx5, libibverbs: YES
00:01:55.784 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_COUNTERS_DEVX" with dependencies libmlx5, libibverbs: YES
00:01:55.784 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_DEFAULT_MISS" with dependencies libmlx5, libibverbs: YES
00:01:55.784 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_query_async" with dependencies libmlx5, libibverbs: YES
00:01:55.784 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_qp_query" with dependencies libmlx5, libibverbs: YES
00:01:55.784 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_pp_alloc" with dependencies libmlx5, libibverbs: YES
00:01:55.784 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_devx_tir" with dependencies libmlx5, libibverbs: YES
00:01:55.784 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_get_event" with dependencies libmlx5, libibverbs: YES
00:01:55.784 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_meter" with dependencies libmlx5, libibverbs: YES
00:01:55.784 Header "infiniband/mlx5dv.h" has symbol "MLX5_MMAP_GET_NC_PAGES_CMD" with dependencies libmlx5, libibverbs: YES
00:01:55.784 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_NIC_RX" with dependencies libmlx5, libibverbs: YES
00:01:55.784 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_FDB" with dependencies libmlx5, libibverbs: YES
00:01:55.784 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_push_vlan" with dependencies libmlx5, libibverbs: YES
00:01:55.784 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_alloc_var" with dependencies libmlx5, libibverbs: YES
00:01:55.784 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ENHANCED_MPSW" with dependencies libmlx5, libibverbs: NO
00:01:55.784 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_SEND_EN" with dependencies libmlx5, libibverbs: NO
00:01:55.784 Header
"infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_WAIT" with dependencies libmlx5, libibverbs: NO 00:01:55.784 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ACCESS_ASO" with dependencies libmlx5, libibverbs: NO 00:01:55.784 Header "linux/if_link.h" has symbol "IFLA_NUM_VF" with dependencies libmlx5, libibverbs: YES 00:01:55.784 Header "linux/if_link.h" has symbol "IFLA_EXT_MASK" with dependencies libmlx5, libibverbs: YES 00:01:55.784 Header "linux/if_link.h" has symbol "IFLA_PHYS_SWITCH_ID" with dependencies libmlx5, libibverbs: YES 00:01:55.784 Header "linux/if_link.h" has symbol "IFLA_PHYS_PORT_NAME" with dependencies libmlx5, libibverbs: YES 00:01:55.784 Header "rdma/rdma_netlink.h" has symbol "RDMA_NL_NLDEV" with dependencies libmlx5, libibverbs: YES 00:01:55.784 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_GET" with dependencies libmlx5, libibverbs: YES 00:01:55.784 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_PORT_GET" with dependencies libmlx5, libibverbs: YES 00:01:55.784 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_INDEX" with dependencies libmlx5, libibverbs: YES 00:01:55.784 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_NAME" with dependencies libmlx5, libibverbs: YES 00:01:55.784 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_INDEX" with dependencies libmlx5, libibverbs: YES 00:01:55.785 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_STATE" with dependencies libmlx5, libibverbs: YES 00:01:55.785 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_NDEV_INDEX" with dependencies libmlx5, libibverbs: YES 00:01:55.785 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_domain" with dependencies libmlx5, libibverbs: YES 00:01:55.785 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_sampler" with dependencies libmlx5, libibverbs: YES 00:01:55.785 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_set_reclaim_device_memory" with dependencies 
libmlx5, libibverbs: YES 00:01:55.785 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_array" with dependencies libmlx5, libibverbs: YES 00:01:55.785 Header "linux/devlink.h" has symbol "DEVLINK_GENL_NAME" with dependencies libmlx5, libibverbs: YES 00:01:55.785 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_aso" with dependencies libmlx5, libibverbs: YES 00:01:55.785 Header "infiniband/verbs.h" has symbol "INFINIBAND_VERBS_H" with dependencies libmlx5, libibverbs: YES 00:01:55.785 Header "infiniband/mlx5dv.h" has symbol "MLX5_WQE_UMR_CTRL_FLAG_INLINE" with dependencies libmlx5, libibverbs: YES 00:01:55.785 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_rule" with dependencies libmlx5, libibverbs: YES 00:01:55.785 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_ACTION_FLAGS_ASO_CT_DIRECTION_INITIATOR" with dependencies libmlx5, libibverbs: YES 00:01:55.785 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_allow_duplicate_rules" with dependencies libmlx5, libibverbs: YES 00:01:55.785 Header "infiniband/verbs.h" has symbol "ibv_reg_mr_iova" with dependencies libmlx5, libibverbs: YES 00:01:55.785 Header "infiniband/verbs.h" has symbol "ibv_import_device" with dependencies libmlx5, libibverbs: YES 00:01:55.785 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_root_table" with dependencies libmlx5, libibverbs: YES 00:01:55.785 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_steering_anchor" with dependencies libmlx5, libibverbs: YES 00:01:55.785 Header "infiniband/verbs.h" has symbol "ibv_is_fork_initialized" with dependencies libmlx5, libibverbs: YES 00:01:55.785 Checking whether type "struct mlx5dv_sw_parsing_caps" has member "sw_parsing_offloads" with dependencies libmlx5, libibverbs: YES 00:01:55.785 Checking whether type "struct ibv_counter_set_init_attr" has member "counter_set_id" with dependencies libmlx5, libibverbs: NO 00:01:55.785 Checking whether type "struct 
ibv_counters_init_attr" has member "comp_mask" with dependencies libmlx5, libibverbs: YES 00:01:55.785 Checking whether type "struct mlx5dv_devx_uar" has member "mmap_off" with dependencies libmlx5, libibverbs: YES 00:01:55.785 Checking whether type "struct mlx5dv_flow_matcher_attr" has member "ft_type" with dependencies libmlx5, libibverbs: YES 00:01:55.785 Configuring mlx5_autoconf.h using configuration 00:01:55.785 Message: drivers/common/mlx5: Defining dependency "common_mlx5" 00:01:55.785 Run-time dependency libcrypto found: YES 3.0.9 00:01:55.785 Library IPSec_MB found: YES 00:01:55.785 Fetching value of define "IMB_VERSION_STR" : "1.5.0" 00:01:55.785 Message: drivers/common/qat: Defining dependency "common_qat" 00:01:55.785 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:55.785 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:55.785 Library IPSec_MB found: YES 00:01:55.785 Fetching value of define "IMB_VERSION_STR" : "1.5.0" (cached) 00:01:55.785 Message: drivers/crypto/ipsec_mb: Defining dependency "crypto_ipsec_mb" 00:01:55.785 Compiler for C supports arguments -std=c11: YES (cached) 00:01:55.785 Compiler for C supports arguments -Wno-strict-prototypes: YES (cached) 00:01:55.785 Compiler for C supports arguments -D_BSD_SOURCE: YES (cached) 00:01:55.785 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES (cached) 00:01:55.785 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached) 00:01:55.785 Message: drivers/crypto/mlx5: Defining dependency "crypto_mlx5" 00:01:55.785 Run-time dependency libisal found: NO (tried pkgconfig) 00:01:55.785 Library libisal found: NO 00:01:55.785 Message: drivers/compress/isal: Defining dependency "compress_isal" 00:01:55.785 Compiler for C supports arguments -std=c11: YES (cached) 00:01:55.785 Compiler for C supports arguments -Wno-strict-prototypes: YES (cached) 00:01:55.785 Compiler for C supports arguments -D_BSD_SOURCE: YES (cached) 00:01:55.785 Compiler 
for C supports arguments -D_DEFAULT_SOURCE: YES (cached) 00:01:55.785 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached) 00:01:55.785 Message: drivers/compress/mlx5: Defining dependency "compress_mlx5" 00:01:55.785 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:55.785 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:55.785 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:55.785 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:55.785 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:55.785 Program doxygen found: YES (/usr/bin/doxygen) 00:01:55.785 Configuring doxy-api-html.conf using configuration 00:01:55.785 Configuring doxy-api-man.conf using configuration 00:01:55.785 Program mandb found: YES (/usr/bin/mandb) 00:01:55.785 Program sphinx-build found: NO 00:01:55.785 Configuring rte_build_config.h using configuration 00:01:55.785 Message: 00:01:55.785 ================= 00:01:55.785 Applications Enabled 00:01:55.785 ================= 00:01:55.785 00:01:55.785 apps: 00:01:55.785 00:01:55.785 00:01:55.785 Message: 00:01:55.785 ================= 00:01:55.785 Libraries Enabled 00:01:55.785 ================= 00:01:55.785 00:01:55.785 libs: 00:01:55.785 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:55.785 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:55.785 cryptodev, dmadev, power, reorder, security, vhost, 00:01:55.785 00:01:55.785 Message: 00:01:55.785 =============== 00:01:55.785 Drivers Enabled 00:01:55.785 =============== 00:01:55.785 00:01:55.785 common: 00:01:55.785 mlx5, qat, 00:01:55.785 bus: 00:01:55.785 auxiliary, pci, vdev, 00:01:55.785 mempool: 00:01:55.785 ring, 00:01:55.785 dma: 00:01:55.785 00:01:55.785 net: 00:01:55.785 00:01:55.785 crypto: 00:01:55.785 ipsec_mb, mlx5, 00:01:55.785 compress: 00:01:55.785 isal, mlx5, 00:01:55.785 vdpa: 00:01:55.785 
00:01:55.785 00:01:55.785 Message: 00:01:55.785 ================= 00:01:55.785 Content Skipped 00:01:55.785 ================= 00:01:55.785 00:01:55.785 apps: 00:01:55.785 dumpcap: explicitly disabled via build config 00:01:55.785 graph: explicitly disabled via build config 00:01:55.785 pdump: explicitly disabled via build config 00:01:55.785 proc-info: explicitly disabled via build config 00:01:55.785 test-acl: explicitly disabled via build config 00:01:55.785 test-bbdev: explicitly disabled via build config 00:01:55.785 test-cmdline: explicitly disabled via build config 00:01:55.785 test-compress-perf: explicitly disabled via build config 00:01:55.785 test-crypto-perf: explicitly disabled via build config 00:01:55.785 test-dma-perf: explicitly disabled via build config 00:01:55.785 test-eventdev: explicitly disabled via build config 00:01:55.785 test-fib: explicitly disabled via build config 00:01:55.785 test-flow-perf: explicitly disabled via build config 00:01:55.785 test-gpudev: explicitly disabled via build config 00:01:55.785 test-mldev: explicitly disabled via build config 00:01:55.785 test-pipeline: explicitly disabled via build config 00:01:55.785 test-pmd: explicitly disabled via build config 00:01:55.785 test-regex: explicitly disabled via build config 00:01:55.785 test-sad: explicitly disabled via build config 00:01:55.785 test-security-perf: explicitly disabled via build config 00:01:55.785 00:01:55.785 libs: 00:01:55.785 argparse: explicitly disabled via build config 00:01:55.785 metrics: explicitly disabled via build config 00:01:55.785 acl: explicitly disabled via build config 00:01:55.785 bbdev: explicitly disabled via build config 00:01:55.785 bitratestats: explicitly disabled via build config 00:01:55.785 bpf: explicitly disabled via build config 00:01:55.785 cfgfile: explicitly disabled via build config 00:01:55.785 distributor: explicitly disabled via build config 00:01:55.785 efd: explicitly disabled via build config 00:01:55.785 eventdev: 
explicitly disabled via build config 00:01:55.785 dispatcher: explicitly disabled via build config 00:01:55.785 gpudev: explicitly disabled via build config 00:01:55.785 gro: explicitly disabled via build config 00:01:55.785 gso: explicitly disabled via build config 00:01:55.785 ip_frag: explicitly disabled via build config 00:01:55.785 jobstats: explicitly disabled via build config 00:01:55.785 latencystats: explicitly disabled via build config 00:01:55.785 lpm: explicitly disabled via build config 00:01:55.785 member: explicitly disabled via build config 00:01:55.785 pcapng: explicitly disabled via build config 00:01:55.785 rawdev: explicitly disabled via build config 00:01:55.785 regexdev: explicitly disabled via build config 00:01:55.785 mldev: explicitly disabled via build config 00:01:55.785 rib: explicitly disabled via build config 00:01:55.785 sched: explicitly disabled via build config 00:01:55.785 stack: explicitly disabled via build config 00:01:55.785 ipsec: explicitly disabled via build config 00:01:55.785 pdcp: explicitly disabled via build config 00:01:55.785 fib: explicitly disabled via build config 00:01:55.785 port: explicitly disabled via build config 00:01:55.785 pdump: explicitly disabled via build config 00:01:55.785 table: explicitly disabled via build config 00:01:55.785 pipeline: explicitly disabled via build config 00:01:55.785 graph: explicitly disabled via build config 00:01:55.785 node: explicitly disabled via build config 00:01:55.785 00:01:55.785 drivers: 00:01:55.785 common/cpt: not in enabled drivers build config 00:01:55.785 common/dpaax: not in enabled drivers build config 00:01:55.785 common/iavf: not in enabled drivers build config 00:01:55.785 common/idpf: not in enabled drivers build config 00:01:55.785 common/ionic: not in enabled drivers build config 00:01:55.785 common/mvep: not in enabled drivers build config 00:01:55.785 common/octeontx: not in enabled drivers build config 00:01:55.785 bus/cdx: not in enabled drivers 
build config 00:01:55.785 bus/dpaa: not in enabled drivers build config 00:01:55.785 bus/fslmc: not in enabled drivers build config 00:01:55.785 bus/ifpga: not in enabled drivers build config 00:01:55.785 bus/platform: not in enabled drivers build config 00:01:55.785 bus/uacce: not in enabled drivers build config 00:01:55.786 bus/vmbus: not in enabled drivers build config 00:01:55.786 common/cnxk: not in enabled drivers build config 00:01:55.786 common/nfp: not in enabled drivers build config 00:01:55.786 common/nitrox: not in enabled drivers build config 00:01:55.786 common/sfc_efx: not in enabled drivers build config 00:01:55.786 mempool/bucket: not in enabled drivers build config 00:01:55.786 mempool/cnxk: not in enabled drivers build config 00:01:55.786 mempool/dpaa: not in enabled drivers build config 00:01:55.786 mempool/dpaa2: not in enabled drivers build config 00:01:55.786 mempool/octeontx: not in enabled drivers build config 00:01:55.786 mempool/stack: not in enabled drivers build config 00:01:55.786 dma/cnxk: not in enabled drivers build config 00:01:55.786 dma/dpaa: not in enabled drivers build config 00:01:55.786 dma/dpaa2: not in enabled drivers build config 00:01:55.786 dma/hisilicon: not in enabled drivers build config 00:01:55.786 dma/idxd: not in enabled drivers build config 00:01:55.786 dma/ioat: not in enabled drivers build config 00:01:55.786 dma/skeleton: not in enabled drivers build config 00:01:55.786 net/af_packet: not in enabled drivers build config 00:01:55.786 net/af_xdp: not in enabled drivers build config 00:01:55.786 net/ark: not in enabled drivers build config 00:01:55.786 net/atlantic: not in enabled drivers build config 00:01:55.786 net/avp: not in enabled drivers build config 00:01:55.786 net/axgbe: not in enabled drivers build config 00:01:55.786 net/bnx2x: not in enabled drivers build config 00:01:55.786 net/bnxt: not in enabled drivers build config 00:01:55.786 net/bonding: not in enabled drivers build config 00:01:55.786 
net/cnxk: not in enabled drivers build config 00:01:55.786 net/cpfl: not in enabled drivers build config 00:01:55.786 net/cxgbe: not in enabled drivers build config 00:01:55.786 net/dpaa: not in enabled drivers build config 00:01:55.786 net/dpaa2: not in enabled drivers build config 00:01:55.786 net/e1000: not in enabled drivers build config 00:01:55.786 net/ena: not in enabled drivers build config 00:01:55.786 net/enetc: not in enabled drivers build config 00:01:55.786 net/enetfec: not in enabled drivers build config 00:01:55.786 net/enic: not in enabled drivers build config 00:01:55.786 net/failsafe: not in enabled drivers build config 00:01:55.786 net/fm10k: not in enabled drivers build config 00:01:55.786 net/gve: not in enabled drivers build config 00:01:55.786 net/hinic: not in enabled drivers build config 00:01:55.786 net/hns3: not in enabled drivers build config 00:01:55.786 net/i40e: not in enabled drivers build config 00:01:55.786 net/iavf: not in enabled drivers build config 00:01:55.786 net/ice: not in enabled drivers build config 00:01:55.786 net/idpf: not in enabled drivers build config 00:01:55.786 net/igc: not in enabled drivers build config 00:01:55.786 net/ionic: not in enabled drivers build config 00:01:55.786 net/ipn3ke: not in enabled drivers build config 00:01:55.786 net/ixgbe: not in enabled drivers build config 00:01:55.786 net/mana: not in enabled drivers build config 00:01:55.786 net/memif: not in enabled drivers build config 00:01:55.786 net/mlx4: not in enabled drivers build config 00:01:55.786 net/mlx5: not in enabled drivers build config 00:01:55.786 net/mvneta: not in enabled drivers build config 00:01:55.786 net/mvpp2: not in enabled drivers build config 00:01:55.786 net/netvsc: not in enabled drivers build config 00:01:55.786 net/nfb: not in enabled drivers build config 00:01:55.786 net/nfp: not in enabled drivers build config 00:01:55.786 net/ngbe: not in enabled drivers build config 00:01:55.786 net/null: not in enabled drivers 
build config 00:01:55.786 net/octeontx: not in enabled drivers build config 00:01:55.786 net/octeon_ep: not in enabled drivers build config 00:01:55.786 net/pcap: not in enabled drivers build config 00:01:55.786 net/pfe: not in enabled drivers build config 00:01:55.786 net/qede: not in enabled drivers build config 00:01:55.786 net/ring: not in enabled drivers build config 00:01:55.786 net/sfc: not in enabled drivers build config 00:01:55.786 net/softnic: not in enabled drivers build config 00:01:55.786 net/tap: not in enabled drivers build config 00:01:55.786 net/thunderx: not in enabled drivers build config 00:01:55.786 net/txgbe: not in enabled drivers build config 00:01:55.786 net/vdev_netvsc: not in enabled drivers build config 00:01:55.786 net/vhost: not in enabled drivers build config 00:01:55.786 net/virtio: not in enabled drivers build config 00:01:55.786 net/vmxnet3: not in enabled drivers build config 00:01:55.786 raw/*: missing internal dependency, "rawdev" 00:01:55.786 crypto/armv8: not in enabled drivers build config 00:01:55.786 crypto/bcmfs: not in enabled drivers build config 00:01:55.786 crypto/caam_jr: not in enabled drivers build config 00:01:55.786 crypto/ccp: not in enabled drivers build config 00:01:55.786 crypto/cnxk: not in enabled drivers build config 00:01:55.786 crypto/dpaa_sec: not in enabled drivers build config 00:01:55.786 crypto/dpaa2_sec: not in enabled drivers build config 00:01:55.786 crypto/mvsam: not in enabled drivers build config 00:01:55.786 crypto/nitrox: not in enabled drivers build config 00:01:55.786 crypto/null: not in enabled drivers build config 00:01:55.786 crypto/octeontx: not in enabled drivers build config 00:01:55.786 crypto/openssl: not in enabled drivers build config 00:01:55.786 crypto/scheduler: not in enabled drivers build config 00:01:55.786 crypto/uadk: not in enabled drivers build config 00:01:55.786 crypto/virtio: not in enabled drivers build config 00:01:55.786 compress/nitrox: not in enabled drivers 
build config 00:01:55.786 compress/octeontx: not in enabled drivers build config 00:01:55.786 compress/zlib: not in enabled drivers build config 00:01:55.786 regex/*: missing internal dependency, "regexdev" 00:01:55.786 ml/*: missing internal dependency, "mldev" 00:01:55.786 vdpa/ifc: not in enabled drivers build config 00:01:55.786 vdpa/mlx5: not in enabled drivers build config 00:01:55.786 vdpa/nfp: not in enabled drivers build config 00:01:55.786 vdpa/sfc: not in enabled drivers build config 00:01:55.786 event/*: missing internal dependency, "eventdev" 00:01:55.786 baseband/*: missing internal dependency, "bbdev" 00:01:55.786 gpu/*: missing internal dependency, "gpudev" 00:01:55.786 00:01:55.786 00:01:56.101 Build targets in project: 115 00:01:56.101 00:01:56.101 DPDK 24.03.0 00:01:56.101 00:01:56.101 User defined options 00:01:56.101 buildtype : debug 00:01:56.101 default_library : shared 00:01:56.101 libdir : lib 00:01:56.101 prefix : /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:01:56.101 b_sanitize : address 00:01:56.101 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -I/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib -DNO_COMPAT_IMB_API_053 -I/var/jenkins/workspace/crypto-phy-autotest/spdk/isa-l -I/var/jenkins/workspace/crypto-phy-autotest/spdk/isalbuild -fPIC -Werror 00:01:56.101 c_link_args : -L/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib -L/var/jenkins/workspace/crypto-phy-autotest/spdk/isa-l/.libs -lisal 00:01:56.101 cpu_instruction_set: native 00:01:56.101 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:01:56.101 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:01:56.101 enable_docs : false 00:01:56.101 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,crypto/qat,compress/qat,common/qat,common/mlx5,bus/auxiliary,crypto,crypto/aesni_mb,crypto/mlx5,crypto/ipsec_mb,compress,compress/isal,compress/mlx5 00:01:56.101 enable_kmods : false 00:01:56.101 max_lcores : 128 00:01:56.101 tests : false 00:01:56.101 00:01:56.101 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:56.684 ninja: Entering directory `/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp' 00:01:56.684 [1/378] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:56.684 [2/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:56.684 [3/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:56.684 [4/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:56.684 [5/378] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:56.684 [6/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:56.684 [7/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:56.943 [8/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:56.943 [9/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:56.943 [10/378] Linking static target lib/librte_kvargs.a 00:01:56.943 [11/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:56.943 [12/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:56.943 [13/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:56.943 [14/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 
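The "User defined options" block above can be read back as a Meson configure invocation. The sketch below is a reconstruction from the logged values, not the exact command the CI harness ran: the build directory name, the truncated `disable_apps`/`disable_libs`/`enable_drivers` lists, and the `c_args`/`c_link_args` include paths are taken verbatim from the log where shown and elided otherwise, so treat every path as an assumption tied to this particular workspace.

```shell
# Hedged sketch: an equivalent `meson setup` call assembled from the
# "User defined options" summary in the log above. Values shown in the log
# (buildtype, default_library, b_sanitize, max_lcores, ...) are copied as-is;
# the source/build directory layout is an assumption about this CI workspace.
cd /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk

meson setup build-tmp \
  --buildtype=debug \
  --default-library=shared \
  --libdir=lib \
  --prefix="$PWD/build" \
  -Db_sanitize=address \
  -Dcpu_instruction_set=native \
  -Dmax_lcores=128 \
  -Dtests=false \
  -Denable_docs=false \
  -Denable_kmods=false \
  -Ddisable_apps=dumpcap,graph,pdump,... \
  -Ddisable_libs=acl,argparse,bbdev,... \
  -Denable_drivers=bus/pci,bus/vdev,common/qat,common/mlx5,...

# The configure step is followed by the compile step the log shows next:
ninja -C build-tmp
```

The `c_args`/`c_link_args` lines in the log (pointing at the intel-ipsec-mb and isa-l checkouts) would be passed the same way, via `-Dc_args=...` and `-Dc_link_args=...`; they are omitted above only for brevity.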
00:01:56.943 [15/378] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:56.943 [16/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:56.943 [17/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:56.943 [18/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:56.943 [19/378] Linking static target lib/librte_log.a 00:01:56.943 [20/378] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:56.943 [21/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:56.943 [22/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:56.943 [23/378] Linking static target lib/librte_pci.a 00:01:56.943 [24/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:56.943 [25/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:56.943 [26/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:56.943 [27/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:56.943 [28/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:56.943 [29/378] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:56.943 [30/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:57.202 [31/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:57.202 [32/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:57.202 [33/378] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:57.202 [34/378] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:57.202 [35/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:57.468 [36/378] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:57.468 [37/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:57.468 
[38/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:57.468 [39/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:57.468 [40/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:57.468 [41/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:57.468 [42/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:57.468 [43/378] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.468 [44/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:57.468 [45/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:57.468 [46/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:57.468 [47/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:57.468 [48/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:57.468 [49/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:57.468 [50/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:57.468 [51/378] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.468 [52/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:57.468 [53/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:57.468 [54/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:57.468 [55/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:57.468 [56/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:57.468 [57/378] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:57.468 [58/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:57.468 [59/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 
00:01:57.468 [60/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:57.468 [61/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:57.468 [62/378] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:57.468 [63/378] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:57.468 [64/378] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:57.468 [65/378] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:57.468 [66/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:57.468 [67/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:57.468 [68/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:57.468 [69/378] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:57.468 [70/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:57.468 [71/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:57.469 [72/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:57.469 [73/378] Linking static target lib/librte_meter.a 00:01:57.469 [74/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:57.469 [75/378] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:57.469 [76/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:57.469 [77/378] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:57.469 [78/378] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:57.469 [79/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:57.469 [80/378] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:57.469 [81/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:57.469 [82/378] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:57.469 [83/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:57.469 [84/378] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:57.469 [85/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:57.469 [86/378] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:57.469 [87/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:57.469 [88/378] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:57.469 [89/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:57.469 [90/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:57.469 [91/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:57.469 [92/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:57.469 [93/378] Linking static target lib/librte_ring.a 00:01:57.469 [94/378] Linking static target lib/librte_telemetry.a 00:01:57.469 [95/378] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:57.469 [96/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:57.731 [97/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:57.731 [98/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:57.731 [99/378] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:57.731 [100/378] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:57.731 [101/378] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:57.731 [102/378] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:57.731 [103/378] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:57.731 [104/378] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_params.c.o 00:01:57.731 
[105/378] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:57.731 [106/378] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:57.731 [107/378] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:57.732 [108/378] Linking static target lib/librte_cmdline.a 00:01:57.732 [109/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:57.732 [110/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:57.732 [111/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:57.732 [112/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:57.732 [113/378] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:57.732 [114/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:57.732 [115/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:57.732 [116/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_logs.c.o 00:01:57.732 [117/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:57.732 [118/378] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:57.732 [119/378] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:57.732 [120/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:57.732 [121/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:57.732 [122/378] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:57.732 [123/378] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:57.732 [124/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:57.732 [125/378] Linking static target lib/librte_mempool.a 00:01:57.732 [126/378] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:57.732 [127/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 
00:01:57.732 [128/378] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:57.732 [129/378] Linking static target lib/librte_timer.a 00:01:57.732 [130/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:57.732 [131/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:57.732 [132/378] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:57.732 [133/378] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:57.732 [134/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:57.732 [135/378] Linking static target lib/librte_dmadev.a 00:01:57.990 [136/378] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:57.990 [137/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:57.990 [138/378] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:57.990 [139/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:57.990 [140/378] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:57.990 [141/378] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:57.990 [142/378] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:57.990 [143/378] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:57.990 [144/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:57.990 [145/378] Linking static target lib/librte_net.a 00:01:57.990 [146/378] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:57.990 [147/378] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:57.990 [148/378] Linking static target lib/librte_rcu.a 00:01:57.990 [149/378] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.991 [150/378] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 
00:01:57.991 [151/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:57.991 [152/378] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:57.991 [153/378] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:57.991 [154/378] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_linux_auxiliary.c.o 00:01:57.991 [155/378] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:57.991 [156/378] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:57.991 [157/378] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:58.248 [158/378] Linking static target lib/librte_compressdev.a 00:01:58.248 [159/378] Linking target lib/librte_log.so.24.1 00:01:58.248 [160/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_glue.c.o 00:01:58.248 [161/378] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.248 [162/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:58.248 [163/378] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_common.c.o 00:01:58.248 [164/378] Linking static target drivers/libtmp_rte_bus_auxiliary.a 00:01:58.248 [165/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:58.248 [166/378] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:58.248 [167/378] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:58.248 [168/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen5.c.o 00:01:58.248 [169/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:58.248 [170/378] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:58.248 [171/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:58.248 [172/378] Compiling C object 
lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:58.248 [173/378] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:58.248 [174/378] Linking static target lib/librte_power.a 00:01:58.248 [175/378] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:58.248 [176/378] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:58.248 [177/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:58.248 [178/378] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.248 [179/378] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.248 [180/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:58.507 [181/378] Linking target lib/librte_kvargs.so.24.1 00:01:58.507 [182/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_common.c.o 00:01:58.507 [183/378] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.507 [184/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen3.c.o 00:01:58.507 [185/378] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:58.507 [186/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen_lce.c.o 00:01:58.507 [187/378] Linking target lib/librte_telemetry.so.24.1 00:01:58.507 [188/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_pf2vf.c.o 00:01:58.507 [189/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_device.c.o 00:01:58.507 [190/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_auxiliary.c.o 00:01:58.507 [191/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen1.c.o 00:01:58.507 [192/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen5.c.o 
00:01:58.507 [193/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_asym_pmd_gen1.c.o 00:01:58.507 [194/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen2.c.o 00:01:58.507 [195/378] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.507 [196/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp_pmd.c.o 00:01:58.507 [197/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_devx.c.o 00:01:58.507 [198/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mp.c.o 00:01:58.507 [199/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen3.c.o 00:01:58.507 [200/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen1.c.o 00:01:58.507 [201/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_malloc.c.o 00:01:58.507 [202/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen4.c.o 00:01:58.507 [203/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen2.c.o 00:01:58.507 [204/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym.c.o 00:01:58.507 [205/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen4.c.o 00:01:58.507 [206/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen5.c.o 00:01:58.507 [207/378] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:58.507 [208/378] Generating drivers/rte_bus_auxiliary.pmd.c with a custom command 00:01:58.507 [209/378] Linking static target lib/librte_reorder.a 00:01:58.507 [210/378] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:58.507 [211/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_verbs.c.o 00:01:58.507 [212/378] 
Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_pci.c.o 00:01:58.507 [213/378] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:58.507 [214/378] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:58.507 [215/378] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:58.507 [216/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_os.c.o 00:01:58.507 [217/378] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.507 [218/378] Compiling C object drivers/librte_bus_auxiliary.a.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o 00:01:58.507 [219/378] Compiling C object drivers/librte_bus_auxiliary.so.24.1.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o 00:01:58.507 [220/378] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:58.507 [221/378] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:58.507 [222/378] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:58.507 [223/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen2.c.o 00:01:58.507 [224/378] Linking static target drivers/librte_bus_auxiliary.a 00:01:58.507 [225/378] Linking static target drivers/librte_bus_vdev.a 00:01:58.508 [226/378] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:58.508 [227/378] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:58.508 [228/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_dek.c.o 00:01:58.508 [229/378] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:58.508 [230/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_crypto.c.o 00:01:58.508 [231/378] Linking static target lib/librte_security.a 00:01:58.508 [232/378] Compiling C 
object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_utils.c.o 00:01:58.766 [233/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_ops.c.o 00:01:58.766 [234/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen_lce.c.o 00:01:58.766 [235/378] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:58.766 [236/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto.c.o 00:01:58.766 [237/378] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.766 [238/378] Linking static target lib/librte_eal.a 00:01:58.766 [239/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common.c.o 00:01:58.766 [240/378] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.766 [241/378] Compiling C object drivers/libtmp_rte_compress_isal.a.p/compress_isal_isal_compress_pmd_ops.c.o 00:01:58.766 [242/378] Compiling C object drivers/libtmp_rte_compress_mlx5.a.p/compress_mlx5_mlx5_compress.c.o 00:01:58.766 [243/378] Linking static target drivers/libtmp_rte_compress_mlx5.a 00:01:58.766 [244/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_xts.c.o 00:01:58.766 [245/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_private.c.o 00:01:58.766 [246/378] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:58.766 [247/378] Compiling C object drivers/libtmp_rte_compress_isal.a.p/compress_isal_isal_compress_pmd.c.o 00:01:58.766 [248/378] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:58.766 [249/378] Linking static target drivers/libtmp_rte_compress_isal.a 00:01:58.766 [250/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_nl.c.o 00:01:58.766 [251/378] Linking static target lib/librte_mbuf.a 00:01:58.766 [252/378] Compiling C 
object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:58.767 [253/378] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:58.767 [254/378] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:58.767 [255/378] Linking static target drivers/librte_bus_pci.a 00:01:58.767 [256/378] Linking static target lib/librte_hash.a 00:01:59.024 [257/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:59.024 [258/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:59.024 [259/378] Generating drivers/rte_bus_auxiliary.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.024 [260/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_gcm.c.o 00:01:59.024 [261/378] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:59.024 [262/378] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:59.024 [263/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_qp.c.o 00:01:59.024 [264/378] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.024 [265/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mr.c.o 00:01:59.024 [266/378] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.024 [267/378] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.024 [268/378] Generating drivers/rte_compress_mlx5.pmd.c with a custom command 00:01:59.024 [269/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_chacha_poly.c.o 00:01:59.024 [270/378] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_gcm.c.o 00:01:59.024 [271/378] Linking static target drivers/libtmp_rte_crypto_mlx5.a 00:01:59.024 [272/378] Generating 
drivers/rte_compress_isal.pmd.c with a custom command 00:01:59.024 [273/378] Compiling C object drivers/librte_compress_mlx5.so.24.1.p/meson-generated_.._rte_compress_mlx5.pmd.c.o 00:01:59.024 [274/378] Compiling C object drivers/librte_compress_mlx5.a.p/meson-generated_.._rte_compress_mlx5.pmd.c.o 00:01:59.024 [275/378] Linking static target drivers/librte_compress_mlx5.a 00:01:59.024 [276/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_kasumi.c.o 00:01:59.024 [277/378] Compiling C object drivers/librte_compress_isal.a.p/meson-generated_.._rte_compress_isal.pmd.c.o 00:01:59.024 [278/378] Compiling C object drivers/librte_compress_isal.so.24.1.p/meson-generated_.._rte_compress_isal.pmd.c.o 00:01:59.024 [279/378] Linking static target drivers/librte_compress_isal.a 00:01:59.281 [280/378] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:59.281 [281/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_zuc.c.o 00:01:59.281 [282/378] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:59.281 [283/378] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:59.281 [284/378] Linking static target drivers/librte_mempool_ring.a 00:01:59.281 [285/378] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:59.281 [286/378] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.281 [287/378] Generating drivers/rte_crypto_mlx5.pmd.c with a custom command 00:01:59.281 [288/378] Linking static target lib/librte_cryptodev.a 00:01:59.281 [289/378] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.281 [290/378] Compiling C object drivers/librte_crypto_mlx5.a.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o 00:01:59.281 [291/378] Compiling C object 
drivers/librte_crypto_mlx5.so.24.1.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o 00:01:59.282 [292/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp.c.o 00:01:59.282 [293/378] Linking static target drivers/librte_crypto_mlx5.a 00:01:59.282 [294/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:59.282 [295/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_mb.c.o 00:01:59.539 [296/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen4.c.o 00:01:59.539 [297/378] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_snow3g.c.o 00:01:59.539 [298/378] Linking static target drivers/libtmp_rte_crypto_ipsec_mb.a 00:01:59.539 [299/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym_session.c.o 00:01:59.539 [300/378] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_devx_cmds.c.o 00:01:59.539 [301/378] Linking static target drivers/libtmp_rte_common_mlx5.a 00:01:59.798 [302/378] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.798 [303/378] Generating drivers/rte_crypto_ipsec_mb.pmd.c with a custom command 00:01:59.798 [304/378] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.798 [305/378] Compiling C object drivers/librte_crypto_ipsec_mb.a.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o 00:01:59.798 [306/378] Compiling C object drivers/librte_crypto_ipsec_mb.so.24.1.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o 00:01:59.798 [307/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen3.c.o 00:01:59.798 [308/378] Linking static target drivers/librte_crypto_ipsec_mb.a 00:01:59.798 [309/378] Generating drivers/rte_common_mlx5.pmd.c with a custom command 00:01:59.798 [310/378] Generating lib/hash.sym_chk with a custom command (wrapped by 
meson to capture output) 00:01:59.798 [311/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_sym_pmd_gen1.c.o 00:02:00.056 [312/378] Compiling C object drivers/librte_common_mlx5.a.p/meson-generated_.._rte_common_mlx5.pmd.c.o 00:02:00.056 [313/378] Compiling C object drivers/librte_common_mlx5.so.24.1.p/meson-generated_.._rte_common_mlx5.pmd.c.o 00:02:00.056 [314/378] Linking static target drivers/librte_common_mlx5.a 00:02:00.056 [315/378] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:00.056 [316/378] Linking static target lib/librte_ethdev.a 00:02:01.431 [317/378] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:01.431 [318/378] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.837 [319/378] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_asym.c.o 00:02:02.837 [320/378] Linking static target drivers/libtmp_rte_common_qat.a 00:02:02.837 [321/378] Generating drivers/rte_common_qat.pmd.c with a custom command 00:02:03.095 [322/378] Compiling C object drivers/librte_common_qat.a.p/meson-generated_.._rte_common_qat.pmd.c.o 00:02:03.095 [323/378] Compiling C object drivers/librte_common_qat.so.24.1.p/meson-generated_.._rte_common_qat.pmd.c.o 00:02:03.095 [324/378] Linking static target drivers/librte_common_qat.a 00:02:04.994 [325/378] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:04.994 [326/378] Linking static target lib/librte_vhost.a 00:02:05.960 [327/378] Generating drivers/rte_common_mlx5.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.889 [328/378] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.202 [329/378] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.104 [330/378] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.105 [331/378] 
Linking target lib/librte_eal.so.24.1 00:02:12.105 [332/378] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:12.105 [333/378] Linking target lib/librte_dmadev.so.24.1 00:02:12.105 [334/378] Linking target lib/librte_meter.so.24.1 00:02:12.105 [335/378] Linking target lib/librte_timer.so.24.1 00:02:12.105 [336/378] Linking target lib/librte_ring.so.24.1 00:02:12.105 [337/378] Linking target lib/librte_pci.so.24.1 00:02:12.105 [338/378] Linking target drivers/librte_bus_auxiliary.so.24.1 00:02:12.105 [339/378] Linking target drivers/librte_bus_vdev.so.24.1 00:02:12.363 [340/378] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:12.363 [341/378] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:12.363 [342/378] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:12.363 [343/378] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:12.363 [344/378] Generating symbol file drivers/librte_bus_auxiliary.so.24.1.p/librte_bus_auxiliary.so.24.1.symbols 00:02:12.363 [345/378] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:12.363 [346/378] Generating symbol file drivers/librte_bus_vdev.so.24.1.p/librte_bus_vdev.so.24.1.symbols 00:02:12.363 [347/378] Linking target drivers/librte_bus_pci.so.24.1 00:02:12.363 [348/378] Linking target lib/librte_mempool.so.24.1 00:02:12.363 [349/378] Linking target lib/librte_rcu.so.24.1 00:02:12.620 [350/378] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:12.620 [351/378] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:12.620 [352/378] Generating symbol file drivers/librte_bus_pci.so.24.1.p/librte_bus_pci.so.24.1.symbols 00:02:12.620 [353/378] Linking target drivers/librte_mempool_ring.so.24.1 00:02:12.620 [354/378] Linking target lib/librte_mbuf.so.24.1 00:02:12.620 
[355/378] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:12.878 [356/378] Linking target lib/librte_net.so.24.1 00:02:12.878 [357/378] Linking target lib/librte_compressdev.so.24.1 00:02:12.878 [358/378] Linking target lib/librte_reorder.so.24.1 00:02:12.878 [359/378] Linking target lib/librte_cryptodev.so.24.1 00:02:12.878 [360/378] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:12.878 [361/378] Generating symbol file lib/librte_compressdev.so.24.1.p/librte_compressdev.so.24.1.symbols 00:02:12.878 [362/378] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:12.878 [363/378] Linking target lib/librte_cmdline.so.24.1 00:02:12.878 [364/378] Linking target lib/librte_hash.so.24.1 00:02:12.878 [365/378] Linking target lib/librte_security.so.24.1 00:02:13.135 [366/378] Linking target drivers/librte_compress_isal.so.24.1 00:02:13.135 [367/378] Linking target lib/librte_ethdev.so.24.1 00:02:13.135 [368/378] Generating symbol file lib/librte_security.so.24.1.p/librte_security.so.24.1.symbols 00:02:13.135 [369/378] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:13.135 [370/378] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:13.135 [371/378] Linking target drivers/librte_common_mlx5.so.24.1 00:02:13.135 [372/378] Linking target lib/librte_power.so.24.1 00:02:13.393 [373/378] Linking target lib/librte_vhost.so.24.1 00:02:13.393 [374/378] Linking target drivers/librte_crypto_ipsec_mb.so.24.1 00:02:13.393 [375/378] Generating symbol file drivers/librte_common_mlx5.so.24.1.p/librte_common_mlx5.so.24.1.symbols 00:02:13.393 [376/378] Linking target drivers/librte_compress_mlx5.so.24.1 00:02:13.393 [377/378] Linking target drivers/librte_crypto_mlx5.so.24.1 00:02:13.393 [378/378] Linking target drivers/librte_common_qat.so.24.1 00:02:13.393 INFO: autodetecting backend as ninja 00:02:13.393 INFO: 
calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:14.765 CC lib/ut/ut.o 00:02:14.765 CC lib/ut_mock/mock.o 00:02:14.765 CC lib/log/log.o 00:02:14.765 CC lib/log/log_flags.o 00:02:14.765 CC lib/log/log_deprecated.o 00:02:15.023 LIB libspdk_ut.a 00:02:15.023 LIB libspdk_log.a 00:02:15.023 SO libspdk_ut.so.2.0 00:02:15.023 LIB libspdk_ut_mock.a 00:02:15.023 SO libspdk_log.so.7.0 00:02:15.023 SO libspdk_ut_mock.so.6.0 00:02:15.023 SYMLINK libspdk_ut.so 00:02:15.023 SYMLINK libspdk_log.so 00:02:15.023 SYMLINK libspdk_ut_mock.so 00:02:15.589 CXX lib/trace_parser/trace.o 00:02:15.589 CC lib/ioat/ioat.o 00:02:15.589 CC lib/util/base64.o 00:02:15.589 CC lib/util/bit_array.o 00:02:15.589 CC lib/util/cpuset.o 00:02:15.589 CC lib/util/crc16.o 00:02:15.589 CC lib/util/crc32.o 00:02:15.589 CC lib/util/crc32c.o 00:02:15.589 CC lib/util/crc32_ieee.o 00:02:15.589 CC lib/util/crc64.o 00:02:15.589 CC lib/dma/dma.o 00:02:15.589 CC lib/util/dif.o 00:02:15.589 CC lib/util/fd.o 00:02:15.589 CC lib/util/fd_group.o 00:02:15.589 CC lib/util/file.o 00:02:15.589 CC lib/util/hexlify.o 00:02:15.589 CC lib/util/net.o 00:02:15.589 CC lib/util/iov.o 00:02:15.589 CC lib/util/math.o 00:02:15.589 CC lib/util/pipe.o 00:02:15.589 CC lib/util/strerror_tls.o 00:02:15.589 CC lib/util/xor.o 00:02:15.589 CC lib/util/string.o 00:02:15.589 CC lib/util/uuid.o 00:02:15.589 CC lib/util/zipf.o 00:02:15.589 CC lib/vfio_user/host/vfio_user.o 00:02:15.589 CC lib/vfio_user/host/vfio_user_pci.o 00:02:15.847 LIB libspdk_dma.a 00:02:15.847 SO libspdk_dma.so.4.0 00:02:15.847 LIB libspdk_ioat.a 00:02:15.847 SYMLINK libspdk_dma.so 00:02:15.847 SO libspdk_ioat.so.7.0 00:02:15.847 LIB libspdk_vfio_user.a 00:02:15.847 SYMLINK libspdk_ioat.so 00:02:15.847 SO libspdk_vfio_user.so.5.0 00:02:16.106 SYMLINK libspdk_vfio_user.so 00:02:16.106 LIB libspdk_util.a 00:02:16.364 SO libspdk_util.so.10.0 00:02:16.364 SYMLINK libspdk_util.so 00:02:16.364 
LIB libspdk_trace_parser.a 00:02:16.622 SO libspdk_trace_parser.so.5.0 00:02:16.622 SYMLINK libspdk_trace_parser.so 00:02:16.879 CC lib/idxd/idxd.o 00:02:16.879 CC lib/idxd/idxd_kernel.o 00:02:16.879 CC lib/idxd/idxd_user.o 00:02:16.879 CC lib/conf/conf.o 00:02:16.879 CC lib/vmd/vmd.o 00:02:16.879 CC lib/vmd/led.o 00:02:16.879 CC lib/json/json_util.o 00:02:16.879 CC lib/rdma_provider/common.o 00:02:16.879 CC lib/json/json_parse.o 00:02:16.879 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:16.879 CC lib/json/json_write.o 00:02:16.879 CC lib/rdma_utils/rdma_utils.o 00:02:16.879 CC lib/env_dpdk/env.o 00:02:16.879 CC lib/env_dpdk/memory.o 00:02:16.879 CC lib/reduce/reduce.o 00:02:16.879 CC lib/env_dpdk/pci.o 00:02:16.879 CC lib/env_dpdk/threads.o 00:02:16.879 CC lib/env_dpdk/init.o 00:02:16.879 CC lib/env_dpdk/pci_ioat.o 00:02:16.879 CC lib/env_dpdk/pci_virtio.o 00:02:16.879 CC lib/env_dpdk/pci_vmd.o 00:02:16.879 CC lib/env_dpdk/pci_idxd.o 00:02:16.879 CC lib/env_dpdk/pci_event.o 00:02:16.879 CC lib/env_dpdk/sigbus_handler.o 00:02:16.879 CC lib/env_dpdk/pci_dpdk.o 00:02:16.879 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:16.879 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:17.135 LIB libspdk_rdma_provider.a 00:02:17.135 LIB libspdk_conf.a 00:02:17.135 SO libspdk_rdma_provider.so.6.0 00:02:17.135 SO libspdk_conf.so.6.0 00:02:17.135 LIB libspdk_rdma_utils.a 00:02:17.135 LIB libspdk_json.a 00:02:17.135 SYMLINK libspdk_rdma_provider.so 00:02:17.135 SYMLINK libspdk_conf.so 00:02:17.135 SO libspdk_rdma_utils.so.1.0 00:02:17.135 SO libspdk_json.so.6.0 00:02:17.392 SYMLINK libspdk_rdma_utils.so 00:02:17.392 SYMLINK libspdk_json.so 00:02:17.648 LIB libspdk_idxd.a 00:02:17.648 SO libspdk_idxd.so.12.0 00:02:17.648 LIB libspdk_vmd.a 00:02:17.648 SYMLINK libspdk_idxd.so 00:02:17.648 SO libspdk_vmd.so.6.0 00:02:17.648 LIB libspdk_reduce.a 00:02:17.648 CC lib/jsonrpc/jsonrpc_server.o 00:02:17.648 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:17.648 CC lib/jsonrpc/jsonrpc_client.o 00:02:17.648 CC 
lib/jsonrpc/jsonrpc_client_tcp.o 00:02:17.648 SO libspdk_reduce.so.6.1 00:02:17.905 SYMLINK libspdk_vmd.so 00:02:17.905 SYMLINK libspdk_reduce.so 00:02:17.905 LIB libspdk_jsonrpc.a 00:02:18.161 SO libspdk_jsonrpc.so.6.0 00:02:18.161 SYMLINK libspdk_jsonrpc.so 00:02:18.418 CC lib/rpc/rpc.o 00:02:18.675 LIB libspdk_env_dpdk.a 00:02:18.675 LIB libspdk_rpc.a 00:02:18.933 SO libspdk_env_dpdk.so.15.0 00:02:18.933 SO libspdk_rpc.so.6.0 00:02:18.933 SYMLINK libspdk_rpc.so 00:02:18.933 SYMLINK libspdk_env_dpdk.so 00:02:19.190 CC lib/notify/notify.o 00:02:19.190 CC lib/notify/notify_rpc.o 00:02:19.190 CC lib/keyring/keyring.o 00:02:19.190 CC lib/keyring/keyring_rpc.o 00:02:19.190 CC lib/trace/trace.o 00:02:19.190 CC lib/trace/trace_flags.o 00:02:19.190 CC lib/trace/trace_rpc.o 00:02:19.448 LIB libspdk_keyring.a 00:02:19.448 SO libspdk_keyring.so.1.0 00:02:19.448 LIB libspdk_trace.a 00:02:19.448 LIB libspdk_notify.a 00:02:19.705 SO libspdk_notify.so.6.0 00:02:19.705 SO libspdk_trace.so.10.0 00:02:19.705 SYMLINK libspdk_keyring.so 00:02:19.705 SYMLINK libspdk_notify.so 00:02:19.705 SYMLINK libspdk_trace.so 00:02:19.962 CC lib/sock/sock.o 00:02:19.962 CC lib/sock/sock_rpc.o 00:02:19.962 CC lib/thread/thread.o 00:02:19.962 CC lib/thread/iobuf.o 00:02:20.525 LIB libspdk_sock.a 00:02:20.525 SO libspdk_sock.so.10.0 00:02:20.822 SYMLINK libspdk_sock.so 00:02:21.107 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:21.107 CC lib/nvme/nvme_ctrlr.o 00:02:21.107 CC lib/nvme/nvme_fabric.o 00:02:21.107 CC lib/nvme/nvme_ns_cmd.o 00:02:21.107 CC lib/nvme/nvme_ns.o 00:02:21.107 CC lib/nvme/nvme_pcie_common.o 00:02:21.107 CC lib/nvme/nvme_pcie.o 00:02:21.107 CC lib/nvme/nvme_qpair.o 00:02:21.107 CC lib/nvme/nvme.o 00:02:21.107 CC lib/nvme/nvme_quirks.o 00:02:21.107 CC lib/nvme/nvme_transport.o 00:02:21.107 CC lib/nvme/nvme_discovery.o 00:02:21.107 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:21.107 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:21.107 CC lib/nvme/nvme_tcp.o 00:02:21.107 CC lib/nvme/nvme_opal.o 
00:02:21.107 CC lib/nvme/nvme_io_msg.o 00:02:21.107 CC lib/nvme/nvme_poll_group.o 00:02:21.107 CC lib/nvme/nvme_zns.o 00:02:21.107 CC lib/nvme/nvme_stubs.o 00:02:21.107 CC lib/nvme/nvme_auth.o 00:02:21.107 CC lib/nvme/nvme_cuse.o 00:02:21.107 CC lib/nvme/nvme_rdma.o 00:02:22.036 LIB libspdk_thread.a 00:02:22.036 SO libspdk_thread.so.10.1 00:02:22.036 SYMLINK libspdk_thread.so 00:02:22.293 CC lib/blob/blobstore.o 00:02:22.293 CC lib/blob/blob_bs_dev.o 00:02:22.293 CC lib/blob/request.o 00:02:22.293 CC lib/blob/zeroes.o 00:02:22.293 CC lib/virtio/virtio.o 00:02:22.293 CC lib/virtio/virtio_vhost_user.o 00:02:22.293 CC lib/init/json_config.o 00:02:22.293 CC lib/virtio/virtio_vfio_user.o 00:02:22.293 CC lib/virtio/virtio_pci.o 00:02:22.293 CC lib/init/subsystem.o 00:02:22.293 CC lib/accel/accel.o 00:02:22.293 CC lib/init/subsystem_rpc.o 00:02:22.293 CC lib/accel/accel_sw.o 00:02:22.293 CC lib/init/rpc.o 00:02:22.293 CC lib/accel/accel_rpc.o 00:02:22.855 LIB libspdk_init.a 00:02:22.855 SO libspdk_init.so.5.0 00:02:22.855 LIB libspdk_virtio.a 00:02:22.855 SYMLINK libspdk_init.so 00:02:22.855 SO libspdk_virtio.so.7.0 00:02:22.855 SYMLINK libspdk_virtio.so 00:02:23.112 CC lib/event/app.o 00:02:23.112 CC lib/event/reactor.o 00:02:23.112 CC lib/event/log_rpc.o 00:02:23.112 CC lib/event/app_rpc.o 00:02:23.112 CC lib/event/scheduler_static.o 00:02:23.681 LIB libspdk_nvme.a 00:02:23.681 LIB libspdk_accel.a 00:02:23.681 SO libspdk_accel.so.16.0 00:02:23.681 SO libspdk_nvme.so.13.1 00:02:23.681 SYMLINK libspdk_accel.so 00:02:23.938 LIB libspdk_event.a 00:02:23.938 SO libspdk_event.so.14.0 00:02:23.938 SYMLINK libspdk_event.so 00:02:24.194 CC lib/bdev/bdev.o 00:02:24.194 CC lib/bdev/bdev_rpc.o 00:02:24.194 CC lib/bdev/bdev_zone.o 00:02:24.194 CC lib/bdev/part.o 00:02:24.194 CC lib/bdev/scsi_nvme.o 00:02:24.194 SYMLINK libspdk_nvme.so 00:02:26.714 LIB libspdk_blob.a 00:02:26.971 SO libspdk_blob.so.11.0 00:02:26.971 SYMLINK libspdk_blob.so 00:02:27.228 LIB libspdk_bdev.a 00:02:27.228 
SO libspdk_bdev.so.16.0 00:02:27.228 CC lib/blobfs/blobfs.o 00:02:27.228 CC lib/blobfs/tree.o 00:02:27.491 CC lib/lvol/lvol.o 00:02:27.491 SYMLINK libspdk_bdev.so 00:02:27.753 CC lib/ublk/ublk.o 00:02:27.753 CC lib/ublk/ublk_rpc.o 00:02:27.753 CC lib/nbd/nbd.o 00:02:27.753 CC lib/nbd/nbd_rpc.o 00:02:27.753 CC lib/nvmf/ctrlr_bdev.o 00:02:27.753 CC lib/nvmf/ctrlr.o 00:02:27.753 CC lib/nvmf/ctrlr_discovery.o 00:02:27.753 CC lib/nvmf/subsystem.o 00:02:27.753 CC lib/nvmf/nvmf.o 00:02:27.753 CC lib/nvmf/tcp.o 00:02:27.753 CC lib/nvmf/nvmf_rpc.o 00:02:27.753 CC lib/nvmf/transport.o 00:02:27.753 CC lib/nvmf/mdns_server.o 00:02:27.753 CC lib/nvmf/stubs.o 00:02:27.753 CC lib/nvmf/rdma.o 00:02:27.753 CC lib/nvmf/auth.o 00:02:27.753 CC lib/ftl/ftl_core.o 00:02:27.753 CC lib/scsi/dev.o 00:02:27.753 CC lib/scsi/lun.o 00:02:27.753 CC lib/ftl/ftl_init.o 00:02:27.753 CC lib/ftl/ftl_layout.o 00:02:27.753 CC lib/scsi/port.o 00:02:27.753 CC lib/ftl/ftl_debug.o 00:02:27.753 CC lib/scsi/scsi.o 00:02:27.753 CC lib/ftl/ftl_io.o 00:02:27.753 CC lib/scsi/scsi_bdev.o 00:02:27.753 CC lib/ftl/ftl_sb.o 00:02:27.753 CC lib/scsi/scsi_pr.o 00:02:27.753 CC lib/ftl/ftl_l2p.o 00:02:27.753 CC lib/scsi/scsi_rpc.o 00:02:27.753 CC lib/ftl/ftl_l2p_flat.o 00:02:27.753 CC lib/scsi/task.o 00:02:27.753 CC lib/ftl/ftl_nv_cache.o 00:02:27.753 CC lib/ftl/ftl_band.o 00:02:27.753 CC lib/ftl/ftl_band_ops.o 00:02:27.753 CC lib/ftl/ftl_writer.o 00:02:27.753 CC lib/ftl/ftl_rq.o 00:02:27.753 CC lib/ftl/ftl_reloc.o 00:02:27.753 CC lib/ftl/ftl_l2p_cache.o 00:02:27.753 CC lib/ftl/ftl_p2l.o 00:02:28.011 CC lib/ftl/mngt/ftl_mngt.o 00:02:28.011 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:28.011 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:28.011 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:28.011 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:28.011 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:28.011 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:28.011 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:28.011 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:28.011 CC 
lib/ftl/mngt/ftl_mngt_self_test.o 00:02:28.011 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:28.011 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:28.011 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:28.011 CC lib/ftl/utils/ftl_md.o 00:02:28.011 CC lib/ftl/utils/ftl_conf.o 00:02:28.011 CC lib/ftl/utils/ftl_bitmap.o 00:02:28.011 CC lib/ftl/utils/ftl_mempool.o 00:02:28.011 CC lib/ftl/utils/ftl_property.o 00:02:28.011 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:28.011 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:28.011 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:28.011 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:28.011 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:28.011 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:28.011 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:28.011 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:28.011 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:28.011 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:28.011 CC lib/ftl/base/ftl_base_dev.o 00:02:28.011 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:28.011 CC lib/ftl/base/ftl_base_bdev.o 00:02:28.011 CC lib/ftl/ftl_trace.o 00:02:28.268 LIB libspdk_blobfs.a 00:02:28.524 SO libspdk_blobfs.so.10.0 00:02:28.524 LIB libspdk_nbd.a 00:02:28.524 SYMLINK libspdk_blobfs.so 00:02:28.524 SO libspdk_nbd.so.7.0 00:02:28.524 LIB libspdk_lvol.a 00:02:28.524 SYMLINK libspdk_nbd.so 00:02:28.524 SO libspdk_lvol.so.10.0 00:02:28.524 LIB libspdk_scsi.a 00:02:28.781 SYMLINK libspdk_lvol.so 00:02:28.781 SO libspdk_scsi.so.9.0 00:02:28.781 SYMLINK libspdk_scsi.so 00:02:28.781 LIB libspdk_ublk.a 00:02:28.781 SO libspdk_ublk.so.3.0 00:02:29.039 SYMLINK libspdk_ublk.so 00:02:29.297 CC lib/iscsi/conn.o 00:02:29.297 CC lib/iscsi/init_grp.o 00:02:29.297 CC lib/iscsi/iscsi.o 00:02:29.297 CC lib/iscsi/md5.o 00:02:29.297 CC lib/iscsi/param.o 00:02:29.297 CC lib/iscsi/portal_grp.o 00:02:29.297 CC lib/iscsi/tgt_node.o 00:02:29.297 CC lib/iscsi/iscsi_subsystem.o 00:02:29.297 CC lib/iscsi/iscsi_rpc.o 00:02:29.297 CC lib/iscsi/task.o 00:02:29.297 CC lib/vhost/vhost.o 00:02:29.297 CC 
lib/vhost/vhost_rpc.o 00:02:29.297 CC lib/vhost/vhost_scsi.o 00:02:29.297 CC lib/vhost/rte_vhost_user.o 00:02:29.297 CC lib/vhost/vhost_blk.o 00:02:29.297 LIB libspdk_ftl.a 00:02:29.297 SO libspdk_ftl.so.9.0 00:02:29.863 SYMLINK libspdk_ftl.so 00:02:30.427 LIB libspdk_vhost.a 00:02:30.427 SO libspdk_vhost.so.8.0 00:02:30.685 SYMLINK libspdk_vhost.so 00:02:30.685 LIB libspdk_nvmf.a 00:02:30.944 SO libspdk_nvmf.so.19.0 00:02:30.944 LIB libspdk_iscsi.a 00:02:30.944 SO libspdk_iscsi.so.8.0 00:02:31.202 SYMLINK libspdk_nvmf.so 00:02:31.202 SYMLINK libspdk_iscsi.so 00:02:31.769 CC module/env_dpdk/env_dpdk_rpc.o 00:02:32.026 CC module/keyring/linux/keyring.o 00:02:32.026 CC module/keyring/linux/keyring_rpc.o 00:02:32.026 CC module/accel/iaa/accel_iaa_rpc.o 00:02:32.026 CC module/accel/iaa/accel_iaa.o 00:02:32.026 CC module/sock/posix/posix.o 00:02:32.026 LIB libspdk_env_dpdk_rpc.a 00:02:32.026 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:32.026 CC module/keyring/file/keyring.o 00:02:32.026 CC module/keyring/file/keyring_rpc.o 00:02:32.026 CC module/accel/ioat/accel_ioat.o 00:02:32.026 CC module/accel/ioat/accel_ioat_rpc.o 00:02:32.026 CC module/blob/bdev/blob_bdev.o 00:02:32.026 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:32.026 CC module/accel/error/accel_error.o 00:02:32.026 CC module/accel/error/accel_error_rpc.o 00:02:32.026 CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev.o 00:02:32.026 CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev_rpc.o 00:02:32.026 CC module/accel/dpdk_compressdev/accel_dpdk_compressdev.o 00:02:32.026 CC module/accel/dpdk_compressdev/accel_dpdk_compressdev_rpc.o 00:02:32.026 CC module/scheduler/gscheduler/gscheduler.o 00:02:32.026 CC module/accel/dsa/accel_dsa.o 00:02:32.026 CC module/accel/dsa/accel_dsa_rpc.o 00:02:32.026 SO libspdk_env_dpdk_rpc.so.6.0 00:02:32.026 SYMLINK libspdk_env_dpdk_rpc.so 00:02:32.026 LIB libspdk_keyring_linux.a 00:02:32.026 LIB libspdk_keyring_file.a 00:02:32.026 SO 
libspdk_keyring_linux.so.1.0 00:02:32.026 LIB libspdk_scheduler_dpdk_governor.a 00:02:32.284 LIB libspdk_accel_iaa.a 00:02:32.284 LIB libspdk_accel_ioat.a 00:02:32.284 LIB libspdk_accel_error.a 00:02:32.284 SO libspdk_keyring_file.so.1.0 00:02:32.284 LIB libspdk_scheduler_dynamic.a 00:02:32.284 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:32.284 SYMLINK libspdk_keyring_linux.so 00:02:32.284 SO libspdk_accel_iaa.so.3.0 00:02:32.284 SO libspdk_accel_ioat.so.6.0 00:02:32.284 SO libspdk_accel_error.so.2.0 00:02:32.284 SO libspdk_scheduler_dynamic.so.4.0 00:02:32.284 SYMLINK libspdk_keyring_file.so 00:02:32.284 LIB libspdk_blob_bdev.a 00:02:32.284 LIB libspdk_accel_dsa.a 00:02:32.284 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:32.284 SYMLINK libspdk_accel_ioat.so 00:02:32.284 SYMLINK libspdk_accel_iaa.so 00:02:32.284 SYMLINK libspdk_accel_error.so 00:02:32.284 SYMLINK libspdk_scheduler_dynamic.so 00:02:32.284 SO libspdk_blob_bdev.so.11.0 00:02:32.284 SO libspdk_accel_dsa.so.5.0 00:02:32.284 LIB libspdk_scheduler_gscheduler.a 00:02:32.284 SO libspdk_scheduler_gscheduler.so.4.0 00:02:32.284 SYMLINK libspdk_blob_bdev.so 00:02:32.284 SYMLINK libspdk_accel_dsa.so 00:02:32.543 SYMLINK libspdk_scheduler_gscheduler.so 00:02:32.801 LIB libspdk_sock_posix.a 00:02:32.801 SO libspdk_sock_posix.so.6.0 00:02:32.801 CC module/bdev/aio/bdev_aio.o 00:02:32.802 CC module/bdev/aio/bdev_aio_rpc.o 00:02:32.802 LIB libspdk_accel_dpdk_compressdev.a 00:02:32.802 CC module/blobfs/bdev/blobfs_bdev.o 00:02:32.802 CC module/bdev/raid/bdev_raid.o 00:02:32.802 CC module/bdev/gpt/gpt.o 00:02:32.802 CC module/bdev/raid/bdev_raid_rpc.o 00:02:32.802 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:32.802 CC module/bdev/null/bdev_null.o 00:02:32.802 CC module/bdev/raid/bdev_raid_sb.o 00:02:32.802 CC module/bdev/null/bdev_null_rpc.o 00:02:32.802 CC module/bdev/gpt/vbdev_gpt.o 00:02:32.802 CC module/bdev/raid/raid1.o 00:02:32.802 CC module/bdev/nvme/bdev_nvme.o 00:02:32.802 CC module/bdev/raid/raid0.o 
00:02:32.802 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:32.802 CC module/bdev/raid/concat.o 00:02:32.802 CC module/bdev/nvme/nvme_rpc.o 00:02:32.802 CC module/bdev/nvme/bdev_mdns_client.o 00:02:32.802 CC module/bdev/iscsi/bdev_iscsi.o 00:02:32.802 CC module/bdev/crypto/vbdev_crypto_rpc.o 00:02:32.802 CC module/bdev/nvme/vbdev_opal.o 00:02:32.802 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:32.802 CC module/bdev/crypto/vbdev_crypto.o 00:02:32.802 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:32.802 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:32.802 CC module/bdev/delay/vbdev_delay.o 00:02:32.802 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:32.802 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:32.802 CC module/bdev/malloc/bdev_malloc.o 00:02:32.802 CC module/bdev/error/vbdev_error.o 00:02:32.802 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:32.802 CC module/bdev/error/vbdev_error_rpc.o 00:02:32.802 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:32.802 CC module/bdev/compress/vbdev_compress_rpc.o 00:02:32.802 CC module/bdev/compress/vbdev_compress.o 00:02:33.059 CC module/bdev/split/vbdev_split_rpc.o 00:02:33.059 CC module/bdev/split/vbdev_split.o 00:02:33.059 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:33.059 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:33.059 CC module/bdev/passthru/vbdev_passthru.o 00:02:33.059 CC module/bdev/lvol/vbdev_lvol.o 00:02:33.059 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:33.059 CC module/bdev/ftl/bdev_ftl.o 00:02:33.059 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:33.059 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:33.059 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:33.059 SO libspdk_accel_dpdk_compressdev.so.3.0 00:02:33.059 SYMLINK libspdk_sock_posix.so 00:02:33.059 SYMLINK libspdk_accel_dpdk_compressdev.so 00:02:33.317 LIB libspdk_blobfs_bdev.a 00:02:33.317 SO libspdk_blobfs_bdev.so.6.0 00:02:33.317 LIB libspdk_bdev_split.a 00:02:33.317 LIB libspdk_bdev_null.a 00:02:33.317 SO libspdk_bdev_split.so.6.0 00:02:33.317 LIB 
libspdk_bdev_gpt.a 00:02:33.317 SYMLINK libspdk_blobfs_bdev.so 00:02:33.317 LIB libspdk_bdev_error.a 00:02:33.317 SO libspdk_bdev_null.so.6.0 00:02:33.317 LIB libspdk_bdev_ftl.a 00:02:33.317 SO libspdk_bdev_gpt.so.6.0 00:02:33.317 LIB libspdk_bdev_passthru.a 00:02:33.317 SYMLINK libspdk_bdev_split.so 00:02:33.317 SO libspdk_bdev_error.so.6.0 00:02:33.317 LIB libspdk_bdev_zone_block.a 00:02:33.317 LIB libspdk_bdev_aio.a 00:02:33.317 SO libspdk_bdev_ftl.so.6.0 00:02:33.317 SO libspdk_bdev_passthru.so.6.0 00:02:33.317 LIB libspdk_bdev_iscsi.a 00:02:33.317 SYMLINK libspdk_bdev_null.so 00:02:33.575 SO libspdk_bdev_aio.so.6.0 00:02:33.575 SO libspdk_bdev_zone_block.so.6.0 00:02:33.575 SYMLINK libspdk_bdev_gpt.so 00:02:33.575 LIB libspdk_bdev_crypto.a 00:02:33.575 LIB libspdk_bdev_delay.a 00:02:33.575 SYMLINK libspdk_bdev_error.so 00:02:33.575 LIB libspdk_bdev_compress.a 00:02:33.575 LIB libspdk_bdev_malloc.a 00:02:33.575 SO libspdk_bdev_iscsi.so.6.0 00:02:33.575 SYMLINK libspdk_bdev_passthru.so 00:02:33.575 SYMLINK libspdk_bdev_ftl.so 00:02:33.575 SO libspdk_bdev_crypto.so.6.0 00:02:33.575 SO libspdk_bdev_delay.so.6.0 00:02:33.575 SO libspdk_bdev_malloc.so.6.0 00:02:33.575 SO libspdk_bdev_compress.so.6.0 00:02:33.575 SYMLINK libspdk_bdev_zone_block.so 00:02:33.575 SYMLINK libspdk_bdev_aio.so 00:02:33.575 SYMLINK libspdk_bdev_iscsi.so 00:02:33.575 SYMLINK libspdk_bdev_delay.so 00:02:33.575 SYMLINK libspdk_bdev_crypto.so 00:02:33.575 SYMLINK libspdk_bdev_malloc.so 00:02:33.575 LIB libspdk_accel_dpdk_cryptodev.a 00:02:33.576 SYMLINK libspdk_bdev_compress.so 00:02:33.576 LIB libspdk_bdev_lvol.a 00:02:33.576 LIB libspdk_bdev_virtio.a 00:02:33.576 SO libspdk_accel_dpdk_cryptodev.so.3.0 00:02:33.576 SO libspdk_bdev_lvol.so.6.0 00:02:33.576 SO libspdk_bdev_virtio.so.6.0 00:02:33.834 SYMLINK libspdk_accel_dpdk_cryptodev.so 00:02:33.834 SYMLINK libspdk_bdev_lvol.so 00:02:33.834 SYMLINK libspdk_bdev_virtio.so 00:02:34.093 LIB libspdk_bdev_raid.a 00:02:34.420 SO 
libspdk_bdev_raid.so.6.0 00:02:34.420 SYMLINK libspdk_bdev_raid.so 00:02:39.694 LIB libspdk_bdev_nvme.a 00:02:39.694 SO libspdk_bdev_nvme.so.7.0 00:02:39.694 SYMLINK libspdk_bdev_nvme.so 00:02:39.953 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:39.953 CC module/event/subsystems/sock/sock.o 00:02:39.953 CC module/event/subsystems/iobuf/iobuf.o 00:02:39.953 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:39.953 CC module/event/subsystems/scheduler/scheduler.o 00:02:39.953 CC module/event/subsystems/vmd/vmd.o 00:02:39.953 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:39.953 CC module/event/subsystems/keyring/keyring.o 00:02:40.212 LIB libspdk_event_sock.a 00:02:40.212 LIB libspdk_event_vmd.a 00:02:40.212 LIB libspdk_event_scheduler.a 00:02:40.212 LIB libspdk_event_keyring.a 00:02:40.212 SO libspdk_event_sock.so.5.0 00:02:40.212 LIB libspdk_event_iobuf.a 00:02:40.212 SO libspdk_event_scheduler.so.4.0 00:02:40.212 SO libspdk_event_vmd.so.6.0 00:02:40.212 SO libspdk_event_keyring.so.1.0 00:02:40.212 SO libspdk_event_iobuf.so.3.0 00:02:40.212 SYMLINK libspdk_event_sock.so 00:02:40.212 SYMLINK libspdk_event_scheduler.so 00:02:40.212 SYMLINK libspdk_event_vmd.so 00:02:40.212 LIB libspdk_event_vhost_blk.a 00:02:40.212 SYMLINK libspdk_event_keyring.so 00:02:40.212 SYMLINK libspdk_event_iobuf.so 00:02:40.470 SO libspdk_event_vhost_blk.so.3.0 00:02:40.470 SYMLINK libspdk_event_vhost_blk.so 00:02:40.730 CC module/event/subsystems/accel/accel.o 00:02:40.730 LIB libspdk_event_accel.a 00:02:40.988 SO libspdk_event_accel.so.6.0 00:02:40.988 SYMLINK libspdk_event_accel.so 00:02:41.246 CC module/event/subsystems/bdev/bdev.o 00:02:41.505 LIB libspdk_event_bdev.a 00:02:41.505 SO libspdk_event_bdev.so.6.0 00:02:41.505 SYMLINK libspdk_event_bdev.so 00:02:42.073 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:42.073 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:42.073 CC module/event/subsystems/scsi/scsi.o 00:02:42.073 CC module/event/subsystems/ublk/ublk.o 00:02:42.073 
CC module/event/subsystems/nbd/nbd.o 00:02:42.073 LIB libspdk_event_ublk.a 00:02:42.073 LIB libspdk_event_scsi.a 00:02:42.073 LIB libspdk_event_nbd.a 00:02:42.073 SO libspdk_event_ublk.so.3.0 00:02:42.073 SO libspdk_event_scsi.so.6.0 00:02:42.073 SO libspdk_event_nbd.so.6.0 00:02:42.332 SYMLINK libspdk_event_ublk.so 00:02:42.332 SYMLINK libspdk_event_scsi.so 00:02:42.332 SYMLINK libspdk_event_nbd.so 00:02:42.591 LIB libspdk_event_nvmf.a 00:02:42.591 SO libspdk_event_nvmf.so.6.0 00:02:42.591 SYMLINK libspdk_event_nvmf.so 00:02:42.591 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:42.591 CC module/event/subsystems/iscsi/iscsi.o 00:02:42.851 LIB libspdk_event_vhost_scsi.a 00:02:42.851 LIB libspdk_event_iscsi.a 00:02:42.851 SO libspdk_event_vhost_scsi.so.3.0 00:02:42.851 SO libspdk_event_iscsi.so.6.0 00:02:42.852 SYMLINK libspdk_event_vhost_scsi.so 00:02:42.852 SYMLINK libspdk_event_iscsi.so 00:02:43.112 SO libspdk.so.6.0 00:02:43.112 SYMLINK libspdk.so 00:02:43.689 CC app/trace_record/trace_record.o 00:02:43.689 CXX app/trace/trace.o 00:02:43.689 CC app/spdk_nvme_perf/perf.o 00:02:43.689 CC app/spdk_nvme_identify/identify.o 00:02:43.689 CC test/rpc_client/rpc_client_test.o 00:02:43.689 CC app/spdk_lspci/spdk_lspci.o 00:02:43.689 CC app/spdk_top/spdk_top.o 00:02:43.689 TEST_HEADER include/spdk/accel.h 00:02:43.689 TEST_HEADER include/spdk/accel_module.h 00:02:43.689 TEST_HEADER include/spdk/assert.h 00:02:43.689 TEST_HEADER include/spdk/barrier.h 00:02:43.689 TEST_HEADER include/spdk/base64.h 00:02:43.689 TEST_HEADER include/spdk/bdev.h 00:02:43.689 TEST_HEADER include/spdk/bdev_module.h 00:02:43.689 TEST_HEADER include/spdk/bdev_zone.h 00:02:43.689 TEST_HEADER include/spdk/bit_array.h 00:02:43.689 CC app/spdk_nvme_discover/discovery_aer.o 00:02:43.689 TEST_HEADER include/spdk/bit_pool.h 00:02:43.689 TEST_HEADER include/spdk/blob_bdev.h 00:02:43.689 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:43.689 TEST_HEADER include/spdk/blobfs.h 00:02:43.689 TEST_HEADER 
include/spdk/blob.h 00:02:43.689 TEST_HEADER include/spdk/conf.h 00:02:43.689 TEST_HEADER include/spdk/config.h 00:02:43.689 TEST_HEADER include/spdk/cpuset.h 00:02:43.689 TEST_HEADER include/spdk/crc16.h 00:02:43.689 TEST_HEADER include/spdk/crc32.h 00:02:43.689 TEST_HEADER include/spdk/crc64.h 00:02:43.689 TEST_HEADER include/spdk/dif.h 00:02:43.689 TEST_HEADER include/spdk/dma.h 00:02:43.689 TEST_HEADER include/spdk/endian.h 00:02:43.689 TEST_HEADER include/spdk/env.h 00:02:43.689 TEST_HEADER include/spdk/env_dpdk.h 00:02:43.689 TEST_HEADER include/spdk/fd_group.h 00:02:43.689 TEST_HEADER include/spdk/event.h 00:02:43.689 TEST_HEADER include/spdk/fd.h 00:02:43.689 TEST_HEADER include/spdk/file.h 00:02:43.689 TEST_HEADER include/spdk/ftl.h 00:02:43.689 TEST_HEADER include/spdk/gpt_spec.h 00:02:43.689 TEST_HEADER include/spdk/hexlify.h 00:02:43.689 TEST_HEADER include/spdk/histogram_data.h 00:02:43.689 TEST_HEADER include/spdk/idxd.h 00:02:43.689 TEST_HEADER include/spdk/init.h 00:02:43.689 CC app/nvmf_tgt/nvmf_main.o 00:02:43.690 CC app/iscsi_tgt/iscsi_tgt.o 00:02:43.690 TEST_HEADER include/spdk/idxd_spec.h 00:02:43.690 TEST_HEADER include/spdk/ioat.h 00:02:43.690 TEST_HEADER include/spdk/ioat_spec.h 00:02:43.690 TEST_HEADER include/spdk/iscsi_spec.h 00:02:43.690 TEST_HEADER include/spdk/json.h 00:02:43.690 CC app/spdk_dd/spdk_dd.o 00:02:43.690 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:43.690 TEST_HEADER include/spdk/keyring.h 00:02:43.690 TEST_HEADER include/spdk/jsonrpc.h 00:02:43.690 TEST_HEADER include/spdk/keyring_module.h 00:02:43.690 TEST_HEADER include/spdk/likely.h 00:02:43.690 TEST_HEADER include/spdk/log.h 00:02:43.690 TEST_HEADER include/spdk/lvol.h 00:02:43.690 TEST_HEADER include/spdk/memory.h 00:02:43.690 TEST_HEADER include/spdk/mmio.h 00:02:43.690 TEST_HEADER include/spdk/nbd.h 00:02:43.690 TEST_HEADER include/spdk/notify.h 00:02:43.690 TEST_HEADER include/spdk/net.h 00:02:43.690 TEST_HEADER include/spdk/nvme.h 00:02:43.690 TEST_HEADER 
include/spdk/nvme_intel.h 00:02:43.690 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:43.690 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:43.690 TEST_HEADER include/spdk/nvme_spec.h 00:02:43.690 TEST_HEADER include/spdk/nvme_zns.h 00:02:43.690 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:43.690 TEST_HEADER include/spdk/nvmf.h 00:02:43.690 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:43.690 TEST_HEADER include/spdk/nvmf_spec.h 00:02:43.690 TEST_HEADER include/spdk/nvmf_transport.h 00:02:43.690 TEST_HEADER include/spdk/opal.h 00:02:43.690 TEST_HEADER include/spdk/opal_spec.h 00:02:43.690 TEST_HEADER include/spdk/queue.h 00:02:43.690 TEST_HEADER include/spdk/pci_ids.h 00:02:43.690 TEST_HEADER include/spdk/pipe.h 00:02:43.690 TEST_HEADER include/spdk/reduce.h 00:02:43.690 TEST_HEADER include/spdk/scsi.h 00:02:43.690 TEST_HEADER include/spdk/rpc.h 00:02:43.690 TEST_HEADER include/spdk/scheduler.h 00:02:43.690 TEST_HEADER include/spdk/sock.h 00:02:43.690 TEST_HEADER include/spdk/scsi_spec.h 00:02:43.690 TEST_HEADER include/spdk/stdinc.h 00:02:43.690 TEST_HEADER include/spdk/string.h 00:02:43.690 TEST_HEADER include/spdk/thread.h 00:02:43.690 TEST_HEADER include/spdk/trace.h 00:02:43.690 CC app/spdk_tgt/spdk_tgt.o 00:02:43.690 TEST_HEADER include/spdk/tree.h 00:02:43.690 TEST_HEADER include/spdk/trace_parser.h 00:02:43.690 TEST_HEADER include/spdk/util.h 00:02:43.690 TEST_HEADER include/spdk/ublk.h 00:02:43.690 TEST_HEADER include/spdk/uuid.h 00:02:43.690 TEST_HEADER include/spdk/version.h 00:02:43.690 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:43.690 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:43.690 TEST_HEADER include/spdk/vhost.h 00:02:43.690 TEST_HEADER include/spdk/xor.h 00:02:43.690 TEST_HEADER include/spdk/vmd.h 00:02:43.690 CXX test/cpp_headers/accel.o 00:02:43.690 TEST_HEADER include/spdk/zipf.h 00:02:43.690 CXX test/cpp_headers/accel_module.o 00:02:43.690 CXX test/cpp_headers/assert.o 00:02:43.690 CXX test/cpp_headers/base64.o 00:02:43.690 CXX 
test/cpp_headers/barrier.o 00:02:43.690 CXX test/cpp_headers/bdev.o 00:02:43.690 CXX test/cpp_headers/bdev_module.o 00:02:43.690 CXX test/cpp_headers/bdev_zone.o 00:02:43.690 CXX test/cpp_headers/blob_bdev.o 00:02:43.690 CXX test/cpp_headers/bit_array.o 00:02:43.690 CXX test/cpp_headers/blobfs_bdev.o 00:02:43.690 CXX test/cpp_headers/bit_pool.o 00:02:43.690 CXX test/cpp_headers/blobfs.o 00:02:43.690 CXX test/cpp_headers/blob.o 00:02:43.690 CXX test/cpp_headers/conf.o 00:02:43.690 CXX test/cpp_headers/config.o 00:02:43.690 CXX test/cpp_headers/cpuset.o 00:02:43.690 CXX test/cpp_headers/crc16.o 00:02:43.690 CXX test/cpp_headers/crc32.o 00:02:43.690 CXX test/cpp_headers/crc64.o 00:02:43.690 CXX test/cpp_headers/dma.o 00:02:43.690 CXX test/cpp_headers/dif.o 00:02:43.690 CXX test/cpp_headers/endian.o 00:02:43.690 CXX test/cpp_headers/env_dpdk.o 00:02:43.690 CXX test/cpp_headers/env.o 00:02:43.690 CXX test/cpp_headers/event.o 00:02:43.690 CXX test/cpp_headers/fd_group.o 00:02:43.690 CXX test/cpp_headers/fd.o 00:02:43.690 CXX test/cpp_headers/file.o 00:02:43.690 CXX test/cpp_headers/ftl.o 00:02:43.690 CXX test/cpp_headers/gpt_spec.o 00:02:43.690 CXX test/cpp_headers/hexlify.o 00:02:43.690 CXX test/cpp_headers/histogram_data.o 00:02:43.690 CXX test/cpp_headers/idxd_spec.o 00:02:43.690 CXX test/cpp_headers/idxd.o 00:02:43.690 CXX test/cpp_headers/ioat.o 00:02:43.690 CXX test/cpp_headers/init.o 00:02:43.690 CXX test/cpp_headers/ioat_spec.o 00:02:43.690 CXX test/cpp_headers/iscsi_spec.o 00:02:43.690 CXX test/cpp_headers/keyring.o 00:02:43.690 CXX test/cpp_headers/json.o 00:02:43.690 CXX test/cpp_headers/jsonrpc.o 00:02:43.690 CXX test/cpp_headers/keyring_module.o 00:02:43.690 CXX test/cpp_headers/log.o 00:02:43.690 CXX test/cpp_headers/likely.o 00:02:43.690 CXX test/cpp_headers/lvol.o 00:02:43.690 CXX test/cpp_headers/memory.o 00:02:43.690 CXX test/cpp_headers/mmio.o 00:02:43.690 CXX test/cpp_headers/nbd.o 00:02:43.690 CXX test/cpp_headers/net.o 00:02:43.690 CXX 
test/cpp_headers/nvme.o 00:02:43.690 CXX test/cpp_headers/notify.o 00:02:43.690 CXX test/cpp_headers/nvme_ocssd.o 00:02:43.690 CXX test/cpp_headers/nvme_intel.o 00:02:43.690 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:43.690 CXX test/cpp_headers/nvme_spec.o 00:02:43.690 CC examples/util/zipf/zipf.o 00:02:43.690 CXX test/cpp_headers/nvme_zns.o 00:02:43.690 CC examples/ioat/verify/verify.o 00:02:43.690 CXX test/cpp_headers/nvmf_cmd.o 00:02:43.690 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:43.690 CXX test/cpp_headers/nvmf.o 00:02:43.690 CC examples/ioat/perf/perf.o 00:02:43.690 CXX test/cpp_headers/nvmf_transport.o 00:02:43.690 CXX test/cpp_headers/nvmf_spec.o 00:02:43.690 CXX test/cpp_headers/opal.o 00:02:43.690 CXX test/cpp_headers/opal_spec.o 00:02:43.690 CXX test/cpp_headers/pci_ids.o 00:02:43.690 CXX test/cpp_headers/pipe.o 00:02:43.690 CXX test/cpp_headers/queue.o 00:02:43.690 CXX test/cpp_headers/reduce.o 00:02:43.690 CXX test/cpp_headers/rpc.o 00:02:43.690 CC test/app/histogram_perf/histogram_perf.o 00:02:43.690 CXX test/cpp_headers/scheduler.o 00:02:43.690 CXX test/cpp_headers/scsi.o 00:02:43.690 CXX test/cpp_headers/scsi_spec.o 00:02:43.690 CXX test/cpp_headers/sock.o 00:02:43.690 CXX test/cpp_headers/stdinc.o 00:02:43.690 CXX test/cpp_headers/string.o 00:02:43.690 CXX test/cpp_headers/thread.o 00:02:43.690 CXX test/cpp_headers/trace.o 00:02:43.690 CXX test/cpp_headers/trace_parser.o 00:02:43.690 CXX test/cpp_headers/tree.o 00:02:43.690 CXX test/cpp_headers/ublk.o 00:02:43.690 CXX test/cpp_headers/util.o 00:02:43.690 CXX test/cpp_headers/uuid.o 00:02:43.690 CXX test/cpp_headers/version.o 00:02:43.690 CC test/app/jsoncat/jsoncat.o 00:02:43.690 CC test/app/stub/stub.o 00:02:43.690 CC test/thread/poller_perf/poller_perf.o 00:02:43.961 CC test/env/memory/memory_ut.o 00:02:43.961 CC test/env/vtophys/vtophys.o 00:02:43.961 CC test/env/pci/pci_ut.o 00:02:43.961 CC test/dma/test_dma/test_dma.o 00:02:43.961 CC app/fio/nvme/fio_plugin.o 00:02:43.961 CXX 
test/cpp_headers/vfio_user_pci.o 00:02:43.961 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:43.961 CXX test/cpp_headers/vfio_user_spec.o 00:02:43.961 CC test/app/bdev_svc/bdev_svc.o 00:02:43.961 LINK spdk_lspci 00:02:43.961 CC app/fio/bdev/fio_plugin.o 00:02:44.242 LINK rpc_client_test 00:02:44.242 LINK spdk_nvme_discover 00:02:44.508 LINK nvmf_tgt 00:02:44.508 LINK iscsi_tgt 00:02:44.508 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:44.508 LINK histogram_perf 00:02:44.508 CC test/env/mem_callbacks/mem_callbacks.o 00:02:44.508 CXX test/cpp_headers/vhost.o 00:02:44.508 LINK spdk_trace_record 00:02:44.508 LINK zipf 00:02:44.508 CXX test/cpp_headers/vmd.o 00:02:44.508 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:44.508 LINK vtophys 00:02:44.508 CXX test/cpp_headers/xor.o 00:02:44.508 LINK jsoncat 00:02:44.508 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:44.508 CXX test/cpp_headers/zipf.o 00:02:44.508 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:44.508 LINK spdk_tgt 00:02:44.508 LINK poller_perf 00:02:44.508 LINK env_dpdk_post_init 00:02:44.765 LINK stub 00:02:44.765 LINK interrupt_tgt 00:02:44.765 LINK verify 00:02:44.765 LINK ioat_perf 00:02:44.765 LINK bdev_svc 00:02:44.765 LINK spdk_dd 00:02:44.765 LINK spdk_trace 00:02:45.022 LINK pci_ut 00:02:45.022 LINK test_dma 00:02:45.022 LINK nvme_fuzz 00:02:45.022 LINK spdk_bdev 00:02:45.280 CC examples/idxd/perf/perf.o 00:02:45.280 LINK vhost_fuzz 00:02:45.280 CC examples/vmd/lsvmd/lsvmd.o 00:02:45.280 CC examples/sock/hello_world/hello_sock.o 00:02:45.280 CC examples/vmd/led/led.o 00:02:45.280 CC test/event/reactor_perf/reactor_perf.o 00:02:45.280 CC test/event/event_perf/event_perf.o 00:02:45.280 CC test/event/reactor/reactor.o 00:02:45.280 CC examples/thread/thread/thread_ex.o 00:02:45.280 CC app/vhost/vhost.o 00:02:45.280 CC test/event/app_repeat/app_repeat.o 00:02:45.280 CC test/event/scheduler/scheduler.o 00:02:45.280 LINK mem_callbacks 00:02:45.280 LINK lsvmd 00:02:45.280 LINK reactor 
00:02:45.280 LINK event_perf 00:02:45.280 LINK led 00:02:45.280 LINK reactor_perf 00:02:45.538 LINK spdk_top 00:02:45.538 LINK spdk_nvme_identify 00:02:45.538 LINK spdk_nvme_perf 00:02:45.538 LINK app_repeat 00:02:45.538 LINK vhost 00:02:45.538 LINK hello_sock 00:02:45.538 LINK scheduler 00:02:45.538 LINK thread 00:02:45.538 LINK idxd_perf 00:02:45.538 CC test/nvme/cuse/cuse.o 00:02:45.538 CC test/nvme/e2edp/nvme_dp.o 00:02:45.538 CC test/nvme/reset/reset.o 00:02:45.538 CC test/nvme/startup/startup.o 00:02:45.538 CC test/nvme/reserve/reserve.o 00:02:45.538 CC test/nvme/aer/aer.o 00:02:45.538 CC test/nvme/fdp/fdp.o 00:02:45.538 CC test/nvme/overhead/overhead.o 00:02:45.538 CC test/nvme/simple_copy/simple_copy.o 00:02:45.538 CC test/nvme/boot_partition/boot_partition.o 00:02:45.538 CC test/nvme/sgl/sgl.o 00:02:45.538 CC test/nvme/connect_stress/connect_stress.o 00:02:45.538 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:45.538 CC test/nvme/err_injection/err_injection.o 00:02:45.538 CC test/nvme/compliance/nvme_compliance.o 00:02:45.538 CC test/nvme/fused_ordering/fused_ordering.o 00:02:45.538 CC test/blobfs/mkfs/mkfs.o 00:02:45.538 CC test/accel/dif/dif.o 00:02:45.795 LINK memory_ut 00:02:45.795 CC test/lvol/esnap/esnap.o 00:02:45.795 LINK doorbell_aers 00:02:45.795 LINK boot_partition 00:02:45.795 LINK startup 00:02:45.795 LINK connect_stress 00:02:45.795 LINK err_injection 00:02:45.795 LINK mkfs 00:02:45.795 LINK reserve 00:02:45.795 LINK fused_ordering 00:02:45.795 LINK simple_copy 00:02:46.052 LINK nvme_dp 00:02:46.052 LINK reset 00:02:46.052 LINK spdk_nvme 00:02:46.052 LINK sgl 00:02:46.052 LINK overhead 00:02:46.052 LINK aer 00:02:46.052 LINK fdp 00:02:46.052 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:46.052 CC examples/nvme/arbitration/arbitration.o 00:02:46.052 CC examples/nvme/abort/abort.o 00:02:46.052 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:46.052 CC examples/nvme/hello_world/hello_world.o 00:02:46.052 CC examples/nvme/hotplug/hotplug.o 
00:02:46.052 CC examples/nvme/reconnect/reconnect.o 00:02:46.052 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:46.052 LINK nvme_compliance 00:02:46.052 CC examples/accel/perf/accel_perf.o 00:02:46.052 CC examples/blob/hello_world/hello_blob.o 00:02:46.310 CC examples/blob/cli/blobcli.o 00:02:46.310 LINK dif 00:02:46.310 LINK cmb_copy 00:02:46.310 LINK pmr_persistence 00:02:46.310 LINK hello_world 00:02:46.310 LINK hotplug 00:02:46.310 LINK arbitration 00:02:46.567 LINK hello_blob 00:02:46.567 LINK reconnect 00:02:46.567 LINK abort 00:02:46.824 LINK nvme_manage 00:02:46.824 LINK accel_perf 00:02:46.824 LINK blobcli 00:02:46.824 LINK cuse 00:02:46.824 LINK iscsi_fuzz 00:02:46.824 CC test/bdev/bdevio/bdevio.o 00:02:47.389 CC examples/bdev/bdevperf/bdevperf.o 00:02:47.389 CC examples/bdev/hello_world/hello_bdev.o 00:02:47.389 LINK bdevio 00:02:47.648 LINK hello_bdev 00:02:48.213 LINK bdevperf 00:02:49.145 CC examples/nvmf/nvmf/nvmf.o 00:02:49.403 LINK nvmf 00:02:52.719 LINK esnap 00:02:52.719 00:02:52.719 real 1m42.436s 00:02:52.719 user 16m31.086s 00:02:52.720 sys 5m31.849s 00:02:52.720 10:44:59 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:52.720 10:44:59 make -- common/autotest_common.sh@10 -- $ set +x 00:02:52.720 ************************************ 00:02:52.720 END TEST make 00:02:52.720 ************************************ 00:02:52.720 10:44:59 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:52.720 10:44:59 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:52.720 10:44:59 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:52.720 10:44:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.720 10:44:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:52.720 10:44:59 -- pm/common@44 -- $ pid=3325978 00:02:52.720 10:44:59 -- pm/common@50 -- $ kill -TERM 3325978 00:02:52.720 10:44:59 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:02:52.720 10:44:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:52.720 10:44:59 -- pm/common@44 -- $ pid=3325980 00:02:52.720 10:44:59 -- pm/common@50 -- $ kill -TERM 3325980 00:02:52.720 10:44:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.720 10:44:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:52.720 10:44:59 -- pm/common@44 -- $ pid=3325982 00:02:52.720 10:44:59 -- pm/common@50 -- $ kill -TERM 3325982 00:02:52.720 10:44:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.720 10:44:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:52.720 10:44:59 -- pm/common@44 -- $ pid=3326005 00:02:52.720 10:44:59 -- pm/common@50 -- $ sudo -E kill -TERM 3326005 00:02:52.720 10:44:59 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:02:52.720 10:44:59 -- nvmf/common.sh@7 -- # uname -s 00:02:52.978 10:44:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:52.978 10:44:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:52.978 10:44:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:52.978 10:44:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:52.978 10:44:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:52.978 10:44:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:52.978 10:44:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:52.978 10:44:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:52.978 10:44:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:52.978 10:44:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:52.978 10:44:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:02:52.978 
10:44:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:02:52.978 10:44:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:52.978 10:44:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:52.978 10:44:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:52.978 10:44:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:52.978 10:44:59 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:02:52.978 10:44:59 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:52.978 10:44:59 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:52.978 10:44:59 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:52.978 10:44:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.978 10:44:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.978 10:44:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.978 10:44:59 -- paths/export.sh@5 -- # export PATH 00:02:52.978 10:44:59 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.978 10:44:59 -- nvmf/common.sh@47 -- # : 0 00:02:52.978 10:44:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:52.978 10:44:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:52.978 10:44:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:52.978 10:44:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:52.978 10:44:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:52.978 10:44:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:52.978 10:44:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:52.978 10:44:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:52.978 10:44:59 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:52.978 10:44:59 -- spdk/autotest.sh@32 -- # uname -s 00:02:52.978 10:44:59 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:52.978 10:44:59 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:52.978 10:44:59 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/coredumps 00:02:52.978 10:44:59 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:52.978 10:44:59 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/coredumps 00:02:52.978 10:44:59 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:52.978 10:44:59 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:52.978 10:44:59 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:52.978 10:44:59 -- spdk/autotest.sh@48 -- # udevadm_pid=3399387 00:02:52.978 10:44:59 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:52.978 10:44:59 -- 
spdk/autotest.sh@53 -- # start_monitor_resources 00:02:52.978 10:44:59 -- pm/common@17 -- # local monitor 00:02:52.978 10:44:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.978 10:44:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.978 10:44:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.978 10:44:59 -- pm/common@21 -- # date +%s 00:02:52.978 10:44:59 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.978 10:44:59 -- pm/common@21 -- # date +%s 00:02:52.978 10:44:59 -- pm/common@25 -- # sleep 1 00:02:52.978 10:44:59 -- pm/common@21 -- # date +%s 00:02:52.978 10:44:59 -- pm/common@21 -- # date +%s 00:02:52.978 10:44:59 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721897099 00:02:52.978 10:44:59 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721897099 00:02:52.978 10:44:59 -- pm/common@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721897099 00:02:52.978 10:44:59 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721897099 00:02:52.978 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721897099_collect-vmstat.pm.log 00:02:52.978 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721897099_collect-cpu-load.pm.log 00:02:52.978 Redirecting to 
/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721897099_collect-cpu-temp.pm.log 00:02:52.978 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721897099_collect-bmc-pm.bmc.pm.log 00:02:53.909 10:45:00 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:53.909 10:45:00 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:53.909 10:45:00 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:53.909 10:45:00 -- common/autotest_common.sh@10 -- # set +x 00:02:53.909 10:45:00 -- spdk/autotest.sh@59 -- # create_test_list 00:02:53.909 10:45:00 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:53.909 10:45:00 -- common/autotest_common.sh@10 -- # set +x 00:02:53.909 10:45:00 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/autotest.sh 00:02:53.909 10:45:00 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk 00:02:53.909 10:45:00 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:02:53.909 10:45:00 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:02:53.909 10:45:00 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/crypto-phy-autotest/spdk 00:02:53.909 10:45:00 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:53.909 10:45:00 -- common/autotest_common.sh@1455 -- # uname 00:02:53.909 10:45:00 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:53.909 10:45:00 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:53.909 10:45:00 -- common/autotest_common.sh@1475 -- # uname 00:02:53.909 10:45:00 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:53.909 10:45:00 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:53.909 10:45:00 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:53.909 10:45:00 -- spdk/autotest.sh@72 -- # hash lcov 
00:02:53.909 10:45:00 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:53.909 10:45:00 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:53.909 --rc lcov_branch_coverage=1 00:02:53.909 --rc lcov_function_coverage=1 00:02:53.909 --rc genhtml_branch_coverage=1 00:02:53.909 --rc genhtml_function_coverage=1 00:02:53.909 --rc genhtml_legend=1 00:02:53.909 --rc geninfo_all_blocks=1 00:02:53.909 ' 00:02:53.909 10:45:00 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:53.909 --rc lcov_branch_coverage=1 00:02:53.909 --rc lcov_function_coverage=1 00:02:53.909 --rc genhtml_branch_coverage=1 00:02:53.909 --rc genhtml_function_coverage=1 00:02:53.909 --rc genhtml_legend=1 00:02:53.909 --rc geninfo_all_blocks=1 00:02:53.909 ' 00:02:53.909 10:45:00 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:53.909 --rc lcov_branch_coverage=1 00:02:53.909 --rc lcov_function_coverage=1 00:02:53.909 --rc genhtml_branch_coverage=1 00:02:53.909 --rc genhtml_function_coverage=1 00:02:53.909 --rc genhtml_legend=1 00:02:53.909 --rc geninfo_all_blocks=1 00:02:53.909 --no-external' 00:02:53.909 10:45:00 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:53.909 --rc lcov_branch_coverage=1 00:02:53.909 --rc lcov_function_coverage=1 00:02:53.909 --rc genhtml_branch_coverage=1 00:02:53.909 --rc genhtml_function_coverage=1 00:02:53.909 --rc genhtml_legend=1 00:02:53.909 --rc geninfo_all_blocks=1 00:02:53.909 --no-external' 00:02:53.909 10:45:00 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:54.166 lcov: LCOV version 1.14 00:02:54.166 10:45:01 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d 
/var/jenkins/workspace/crypto-phy-autotest/spdk -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_base.info 00:03:12.254 /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:12.254 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:24.448 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:24.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:24.448 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:24.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:24.448 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:24.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:24.448 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:24.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:24.448 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:24.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:24.448 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:24.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:24.448 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions 
found 00:03:24.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:24.448 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:24.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:24.448 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:24.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:24.448 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:24.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:24.448 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:24.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:24.448 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:24.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:24.448 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:24.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:24.448 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:24.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:24.448 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:24.448 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:24.448 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:24.449 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 
00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV 
did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/net.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:24.449 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:24.449 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:24.449 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:24.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:24.449 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:24.450 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:24.450 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:24.450 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:24.450 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:24.450 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:24.450 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:24.450 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:24.450 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:24.450 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:24.450 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:24.450 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:24.450 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:24.450 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:24.450 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:24.450 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:24.450 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:24.450 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:24.450 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:24.450 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:24.450 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:24.450 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:24.450 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:24.450 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:24.450 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:24.450 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:24.450 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:24.450 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:24.450 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:24.450 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:24.450 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:24.450 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:24.450 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:24.450 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:24.450 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:24.450 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:24.450 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:24.450 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:27.733 10:45:34 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:27.733 10:45:34 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:27.733 10:45:34 -- common/autotest_common.sh@10 -- # set +x 00:03:27.733 10:45:34 -- spdk/autotest.sh@91 -- # rm -f 00:03:27.733 10:45:34 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:03:31.077 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:31.077 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:31.077 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:31.077 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:31.077 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:31.077 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:31.336 0000:00:04.1 (8086 2021): Already using the ioatdma driver 
00:03:31.336 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:03:31.336 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:03:31.336 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:03:31.336 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:03:31.336 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:03:31.336 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:03:31.336 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:03:31.336 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:03:31.595 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:03:31.595 0000:d8:00.0 (8086 0a54): Already using the nvme driver
00:03:31.595 10:45:38 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:03:31.595 10:45:38 -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:03:31.595 10:45:38 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:03:31.595 10:45:38 -- common/autotest_common.sh@1670 -- # local nvme bdf
00:03:31.595 10:45:38 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:03:31.595 10:45:38 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:03:31.595 10:45:38 -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:03:31.595 10:45:38 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:31.595 10:45:38 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:03:31.595 10:45:38 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:03:31.595 10:45:38 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:03:31.595 10:45:38 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:03:31.595 10:45:38 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:03:31.595 10:45:38 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:03:31.595 10:45:38 -- scripts/common.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:31.595 No valid GPT data, bailing
00:03:31.595 10:45:38 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:31.595 10:45:38 -- scripts/common.sh@391 -- # pt=
00:03:31.595 10:45:38 -- scripts/common.sh@392 -- # return 1
00:03:31.595 10:45:38 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:31.595 1+0 records in
00:03:31.595 1+0 records out
00:03:31.595 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00650321 s, 161 MB/s
00:03:31.595 10:45:38 -- spdk/autotest.sh@118 -- # sync
00:03:31.595 10:45:38 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:31.595 10:45:38 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:31.595 10:45:38 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:38.155 10:45:45 -- spdk/autotest.sh@124 -- # uname -s
00:03:38.156 10:45:45 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:03:38.156 10:45:45 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/test-setup.sh
00:03:38.156 10:45:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:38.156 10:45:45 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:38.156 10:45:45 -- common/autotest_common.sh@10 -- # set +x
00:03:38.413 ************************************
00:03:38.413 START TEST setup.sh
00:03:38.413 ************************************
00:03:38.413 10:45:45 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/test-setup.sh
00:03:38.413 * Looking for test storage...
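The zoned-device probe traced above (`is_block_zoned`, `autotest_common.sh@1662-1665`) boils down to reading `/sys/block/<dev>/queue/zoned`: anything other than `none` means the namespace is zoned and must not be blindly wiped. A minimal sketch of that check, not the SPDK source; the sysfs root is a parameter here (an assumption, purely for testability) and the demo tree is fabricated:

```shell
# Sketch of the is_block_zoned probe seen in the trace; not the SPDK source.
# A device counts as zoned when /sys/block/<dev>/queue/zoned holds anything
# other than "none". The sysfs root is parameterized so the sketch can be
# exercised against a fake tree instead of real hardware.
is_block_zoned() {
    local sysfs_root=$1 device=$2
    # Attribute missing => kernel without zoned-block support => not zoned.
    [[ -e "$sysfs_root/$device/queue/zoned" ]] || return 1
    [[ $(<"$sysfs_root/$device/queue/zoned") != none ]]
}

# Demo against a fabricated tree (all names here are made up).
root=$(mktemp -d)
mkdir -p "$root/nvme0n1/queue"
echo none > "$root/nvme0n1/queue/zoned"
is_block_zoned "$root" nvme0n1 || echo "nvme0n1: not zoned"   # -> nvme0n1: not zoned
```

In the log, `[[ none != none ]]` failing is exactly this test reporting a conventional namespace, so `zoned_devs` stays empty and the GPT probe plus `dd` wipe are allowed to proceed.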
00:03:38.413 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:03:38.413 10:45:45 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:38.413 10:45:45 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:38.413 10:45:45 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/acl.sh 00:03:38.413 10:45:45 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:38.413 10:45:45 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:38.414 10:45:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:38.414 ************************************ 00:03:38.414 START TEST acl 00:03:38.414 ************************************ 00:03:38.414 10:45:45 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/acl.sh 00:03:38.672 * Looking for test storage... 00:03:38.672 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:03:38.672 10:45:45 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:38.672 10:45:45 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:38.672 10:45:45 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:38.672 10:45:45 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:38.672 10:45:45 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:38.672 10:45:45 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:38.672 10:45:45 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:38.672 10:45:45 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:38.672 10:45:45 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:38.672 10:45:45 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:38.672 10:45:45 setup.sh.acl -- 
setup/acl.sh@12 -- # declare -a devs 00:03:38.672 10:45:45 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:38.672 10:45:45 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:38.672 10:45:45 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:38.672 10:45:45 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:38.672 10:45:45 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:03:42.856 10:45:49 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:42.856 10:45:49 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:42.856 10:45:49 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:42.856 10:45:49 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:42.856 10:45:49 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.856 10:45:49 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status 00:03:47.044 Hugepages 00:03:47.044 node hugesize free / total 00:03:47.044 10:45:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:47.044 10:45:53 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:47.044 10:45:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.044 10:45:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:47.044 10:45:53 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:47.044 10:45:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.044 10:45:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:47.044 10:45:53 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:47.044 10:45:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.044 00:03:47.044 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:47.044 10:45:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:47.044 10:45:53 setup.sh.acl -- 
setup/acl.sh@19 -- # continue 00:03:47.044 10:45:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.044 10:45:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:47.044 10:45:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:47.044 10:45:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:47.044 10:45:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.044 10:45:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:47.044 10:45:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:47.044 10:45:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:47.044 10:45:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.044 10:45:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:47.044 10:45:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:47.044 10:45:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:47.044 10:45:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.044 10:45:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:47.044 10:45:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:47.044 10:45:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:47.044 10:45:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.044 10:45:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:47.045 
10:45:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@18 -- 
# read -r _ dev _ _ _ driver _ 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:47.045 10:45:53 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 
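The collection loop traced above (`acl.sh@18-24`) parses the `setup.sh status` table with `read -r _ dev _ _ _ driver _`, skips the hugepage and ioatdma rows, and keeps NVMe-bound functions that are not blocked, which is why only `0000:d8:00.0` survives with `(( 1 > 0 ))`. A simplified sketch of that loop, fed fabricated status rows rather than real hardware (not `acl.sh` itself):

```shell
# Simplified sketch of the device-collection loop: each "setup.sh status" row
# is split as "Type BDF Vendor Device NUMA Driver ...", and NVMe-bound
# functions not present in the block list are kept.
collect_nvme_devs() {
    local blocked=$1 dev driver devs=()
    while read -r _ dev _ _ _ driver _; do
        [[ $dev == *:*:*.* ]] || continue        # skip hugepage/header rows
        [[ $driver == nvme ]] || continue        # ioatdma rows fall out here
        [[ $blocked == *"$dev"* ]] && continue   # honor a PCI_BLOCKED-style list
        devs+=("$dev")
    done
    printf '%s\n' "${devs[@]}"
}

# Fabricated status rows; only the nvme-bound BDF survives.
collect_nvme_devs '' <<'EOF'
I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
EOF
```

The output is `0000:d8:00.0`, matching the single device the trace records into `devs` and `drivers`.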
00:03:47.045 10:45:53 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:47.045 10:45:53 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:47.045 10:45:53 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:03:47.045 ************************************
00:03:47.045 START TEST denied
00:03:47.045 ************************************
00:03:47.045 10:45:53 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied
00:03:47.045 10:45:53 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0'
00:03:47.045 10:45:53 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config
00:03:47.045 10:45:53 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0'
00:03:47.045 10:45:53 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]]
00:03:47.045 10:45:53 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config
00:03:51.232 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0
00:03:51.232 10:45:58 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0
00:03:51.232 10:45:58 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver
00:03:51.232 10:45:58 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@"
00:03:51.232 10:45:58 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]]
00:03:51.232 10:45:58 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver
00:03:51.232 10:45:58 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:03:51.232 10:45:58 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:03:51.232 10:45:58 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset
00:03:51.232 10:45:58 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:51.232 10:45:58 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset
00:03:56.560
00:03:56.560 real 0m9.866s
00:03:56.560 user 0m3.167s
00:03:56.560 sys 0m6.027s
00:03:56.560 10:46:03 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:56.560 10:46:03 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x
00:03:56.560 ************************************
00:03:56.560 END TEST denied
00:03:56.560 ************************************
00:03:56.560 10:46:03 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed
00:03:56.561 10:46:03 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:56.561 10:46:03 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:56.561 10:46:03 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:03:56.818 ************************************
00:03:56.818 START TEST allowed
00:03:56.818 ************************************
00:03:56.818 10:46:03 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed
00:03:56.818 10:46:03 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0
00:03:56.818 10:46:03 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config
00:03:56.819 10:46:03 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*'
00:03:56.819 10:46:03 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]]
00:03:56.819 10:46:03 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config
00:04:03.397 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci
00:04:03.397 10:46:09 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify
00:04:03.397 10:46:09 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver
00:04:03.397 10:46:09 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset
00:04:03.397 10:46:09 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:03.397 10:46:09 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset
00:04:07.587
00:04:07.587 real 0m10.188s
00:04:07.587 user 0m2.831s
00:04:07.587 sys 0m5.601s
00:04:07.587 10:46:13 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:07.587 10:46:13 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x
00:04:07.587 ************************************
00:04:07.587 END TEST allowed
00:04:07.587 ************************************
00:04:07.587
00:04:07.587 real 0m28.489s
00:04:07.587 user 0m8.773s
00:04:07.587 sys 0m17.300s
00:04:07.587 10:46:13 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:07.587 10:46:13 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:04:07.587 ************************************
00:04:07.587 END TEST acl
00:04:07.587 ************************************
00:04:07.587 10:46:13 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/hugepages.sh
00:04:07.587 10:46:13 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:07.587 10:46:13 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:07.587 10:46:13 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:07.587 ************************************
00:04:07.587 START TEST hugepages
00:04:07.587 ************************************
00:04:07.587 10:46:14 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/hugepages.sh
00:04:07.587 * Looking for test storage...
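The `denied` and `allowed` tests above drive the same decision from opposite directions: `PCI_BLOCKED=' 0000:d8:00.0'` makes `setup.sh config` skip the controller, while `PCI_ALLOWED=0000:d8:00.0` lets it be rebound from `nvme` to `vfio-pci`. A simplified sketch of that allow/deny rule (an assumption: the real `scripts/setup.sh` also handles wildcards and whole-bus entries, which this omits; lists are space-separated BDFs as in the log):

```shell
# Simplified allow/deny rule exercised by the denied/allowed tests.
# Not the setup.sh implementation; wildcard and bus-level entries are omitted.
pci_can_use() {
    local bdf=$1
    [[ " $PCI_BLOCKED " == *" $bdf "* ]] && return 1   # denied always wins
    [[ -z $PCI_ALLOWED ]] && return 0                  # empty allow list = everything
    [[ " $PCI_ALLOWED " == *" $bdf "* ]]
}

PCI_BLOCKED=' 0000:d8:00.0' PCI_ALLOWED=''
pci_can_use 0000:d8:00.0 || echo 'Skipping denied controller at 0000:d8:00.0'
```

With the lists swapped (`PCI_BLOCKED` empty, `PCI_ALLOWED=0000:d8:00.0`), the same check passes and the controller is eligible for rebinding, which is the `nvme -> vfio-pci` line the `allowed` test greps for.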
00:04:07.587 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 41114136 kB' 'MemAvailable: 45105604 kB' 'Buffers: 6064 kB' 'Cached: 10832804 kB' 'SwapCached: 0 kB' 'Active: 7658692 kB' 'Inactive: 3689560 kB' 'Active(anon): 7260268 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 513316 kB' 'Mapped: 217164 kB' 'Shmem: 6750884 kB' 'KReclaimable: 551192 kB' 'Slab: 1204748 kB' 'SReclaimable: 551192 kB' 'SUnreclaim: 653556 kB' 'KernelStack: 22160 kB' 'PageTables: 8792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439060 kB' 'Committed_AS: 8771276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218860 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3450228 kB' 'DirectMap2M: 19304448 kB' 'DirectMap1G: 46137344 kB' 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 10:46:14 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.587 
10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.587 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.588 10:46:14 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.588 
10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce 
== \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:07.588 10:46:14 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
[identical xtrace iterations elided for the remaining /proc/meminfo keys (WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, ..., HugePages_Total, HugePages_Free, HugePages_Rsvd, HugePages_Surp): each fails the \H\u\g\e\p\a\g\e\s\i\z\e match and the loop continues]
00:04:07.589 10:46:14 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:07.589 10:46:14 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:07.589 10:46:14 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:07.589 10:46:14 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:07.589 10:46:14 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:07.589 10:46:14 setup.sh.hugepages -- setup/hugepages.sh@18 -- #
global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:07.589 10:46:14 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:07.589 10:46:14 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:07.589 10:46:14 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:07.589 10:46:14 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:07.589 10:46:14 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:07.589 10:46:14 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:07.589 10:46:14 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.589 10:46:14 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:07.589 10:46:14 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.589 10:46:14 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:07.589 10:46:14 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:07.589 10:46:14 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:07.589 10:46:14 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:07.589 10:46:14 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:07.589 10:46:14 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:07.589 10:46:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:07.589 10:46:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:07.589 10:46:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:07.589 10:46:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:07.589 10:46:14 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 
00:04:07.589 10:46:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:07.589 10:46:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:07.589 10:46:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:07.589 10:46:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:07.589 10:46:14 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:07.589 10:46:14 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:07.589 10:46:14 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:07.589 10:46:14 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.589 10:46:14 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.589 10:46:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:07.589 ************************************ 00:04:07.589 START TEST default_setup 00:04:07.589 ************************************ 00:04:07.589 10:46:14 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:04:07.589 10:46:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:07.589 10:46:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:07.589 10:46:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:07.589 10:46:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:07.589 10:46:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:07.589 10:46:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:07.589 10:46:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:07.589 10:46:14 
setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:07.589 10:46:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:07.589 10:46:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:07.589 10:46:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:07.589 10:46:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:07.589 10:46:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:07.589 10:46:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:07.589 10:46:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:07.589 10:46:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:07.589 10:46:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:07.589 10:46:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:07.589 10:46:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:07.589 10:46:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:07.589 10:46:14 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.589 10:46:14 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:11.779 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:11.779 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:11.779 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:11.779 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:11.779 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:11.779 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:11.779 0000:00:04.1 (8086 2021): ioatdma -> 
vfio-pci 00:04:11.779 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:11.779 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:11.779 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:11.779 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:11.779 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:11.779 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:11.779 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:11.779 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:11.779 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:13.155 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:13.155 10:46:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:13.155 10:46:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:13.155 10:46:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:13.155 10:46:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:13.155 10:46:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:13.155 10:46:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:13.155 10:46:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:13.155 10:46:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:13.155 10:46:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:13.155 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:13.155 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:13.155 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:13.155 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:13.155 10:46:20 setup.sh.hugepages.default_setup -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.155 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.155 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.155 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.155 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.155 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.155 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.156 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43276624 kB' 'MemAvailable: 47267708 kB' 'Buffers: 6064 kB' 'Cached: 10832936 kB' 'SwapCached: 0 kB' 'Active: 7676224 kB' 'Inactive: 3689560 kB' 'Active(anon): 7277800 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530392 kB' 'Mapped: 216408 kB' 'Shmem: 6751016 kB' 'KReclaimable: 550808 kB' 'Slab: 1203036 kB' 'SReclaimable: 550808 kB' 'SUnreclaim: 652228 kB' 'KernelStack: 22144 kB' 'PageTables: 8852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8752920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218732 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3450228 kB' 'DirectMap2M: 
19304448 kB' 'DirectMap1G: 46137344 kB' 00:04:13.156 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.156 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.156 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.156 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[identical xtrace iterations elided for the remaining /proc/meminfo keys (MemFree, MemAvailable, Buffers, Cached, ..., Percpu, HardwareCorrupted): each fails the \A\n\o\n\H\u\g\e\P\a\g\e\s match and the loop continues]
00:04:13.418 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:13.418 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:13.418 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:13.418 10:46:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:13.418 10:46:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:13.418 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.418 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:13.418 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:13.418 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:13.418 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.419 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.419 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.419 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.419 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.419 10:46:20
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.419 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.419 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43278908 kB' 'MemAvailable: 47269960 kB' 'Buffers: 6064 kB' 'Cached: 10832940 kB' 'SwapCached: 0 kB' 'Active: 7675232 kB' 'Inactive: 3689560 kB' 'Active(anon): 7276808 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529372 kB' 'Mapped: 216396 kB' 'Shmem: 6751020 kB' 'KReclaimable: 550776 kB' 'Slab: 1202996 kB' 'SReclaimable: 550776 kB' 'SUnreclaim: 652220 kB' 'KernelStack: 22208 kB' 'PageTables: 9120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8754676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218764 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3450228 kB' 'DirectMap2M: 19304448 kB' 'DirectMap1G: 46137344 kB' 00:04:13.419 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.419 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.419 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.419 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.419 10:46:20 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.419 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue [... the IFS=': ' / read / '[[ key == HugePages_Surp ]]' / continue cycle repeats for each remaining /proc/meminfo key, MemAvailable through HugePages_Total ...] 00:04:13.420 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.420 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.420 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.420 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.420 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.420 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.420 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.420 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.420 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.420 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:13.420 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:13.420 10:46:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:13.420 10:46:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:13.420 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:13.420 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:13.420 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:13.420 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:13.420 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.420 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.420 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 
00:04:13.421 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.421 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.421 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.421 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.421 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43277360 kB' 'MemAvailable: 47268412 kB' 'Buffers: 6064 kB' 'Cached: 10832968 kB' 'SwapCached: 0 kB' 'Active: 7675608 kB' 'Inactive: 3689560 kB' 'Active(anon): 7277184 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529672 kB' 'Mapped: 216320 kB' 'Shmem: 6751048 kB' 'KReclaimable: 550776 kB' 'Slab: 1202960 kB' 'SReclaimable: 550776 kB' 'SUnreclaim: 652184 kB' 'KernelStack: 22176 kB' 'PageTables: 9292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8755072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218780 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3450228 kB' 'DirectMap2M: 19304448 kB' 'DirectMap1G: 46137344 kB' 00:04:13.421 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.421 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:04:13.421 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.421 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ [... the IFS=': ' / read / '[[ key == HugePages_Rsvd ]]' / continue cycle repeats for each remaining /proc/meminfo key, MemFree onward; log continues ...]
10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.422 10:46:20 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val 
_ 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.422 10:46:20 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.422 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:13.423 nr_hugepages=1024 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:13.423 resv_hugepages=0 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:13.423 surplus_hugepages=0 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:13.423 anon_hugepages=0 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- 
setup/common.sh@18 -- # local node= 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43276708 kB' 'MemAvailable: 47267760 kB' 'Buffers: 6064 kB' 'Cached: 10832988 kB' 'SwapCached: 0 kB' 'Active: 7675320 kB' 'Inactive: 3689560 kB' 'Active(anon): 7276896 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529300 kB' 'Mapped: 216320 kB' 'Shmem: 6751068 kB' 'KReclaimable: 550776 kB' 'Slab: 1202960 kB' 'SReclaimable: 550776 kB' 'SUnreclaim: 652184 kB' 'KernelStack: 22240 kB' 'PageTables: 9308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8755092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218876 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3450228 kB' 'DirectMap2M: 19304448 kB' 'DirectMap1G: 46137344 kB' 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.423 
10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.423 10:46:20 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.423 10:46:20 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.423 10:46:20 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.423 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.424 10:46:20 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.424 10:46:20 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.424 10:46:20 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:13.424 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # 
get_meminfo HugePages_Surp 0 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 25323780 kB' 'MemUsed: 7315360 kB' 'SwapCached: 0 kB' 'Active: 2912288 kB' 'Inactive: 231284 kB' 'Active(anon): 2779240 kB' 'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2767748 kB' 'Mapped: 105744 kB' 'AnonPages: 379064 kB' 'Shmem: 2403416 kB' 'KernelStack: 12648 kB' 'PageTables: 6072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 219952 kB' 'Slab: 534156 kB' 'SReclaimable: 219952 kB' 'SUnreclaim: 314204 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.425 10:46:20 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.425 10:46:20 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.425 10:46:20 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.425 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.426 10:46:20 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.426 10:46:20 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.426 10:46:20 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 
00:04:13.426 node0=1024 expecting 1024 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:13.426 00:04:13.426 real 0m6.190s 00:04:13.426 user 0m1.519s 00:04:13.426 sys 0m2.818s 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:13.426 10:46:20 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:13.426 ************************************ 00:04:13.426 END TEST default_setup 00:04:13.426 ************************************ 00:04:13.426 10:46:20 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:13.426 10:46:20 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:13.426 10:46:20 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:13.426 10:46:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:13.426 ************************************ 00:04:13.426 START TEST per_node_1G_alloc 00:04:13.426 ************************************ 00:04:13.426 10:46:20 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:04:13.426 10:46:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:13.426 10:46:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:13.426 10:46:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:13.426 10:46:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:13.426 10:46:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:13.426 10:46:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:13.426 10:46:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:13.426 10:46:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:13.426 10:46:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:13.426 10:46:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:13.426 10:46:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:13.426 10:46:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:13.426 10:46:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:13.426 10:46:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:13.426 10:46:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:13.426 10:46:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:13.426 10:46:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:13.426 10:46:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:13.426 10:46:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:13.426 10:46:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:13.426 10:46:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:13.426 10:46:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:13.426 10:46:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:13.426 10:46:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:13.426 10:46:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:13.426 10:46:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.426 10:46:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:17.621 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:17.621 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:17.621 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:17.621 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:17.621 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:17.621 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:17.621 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:17.621 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:17.621 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:17.621 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:17.621 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:17.621 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:17.621 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:17.621 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:17.621 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:17.621 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:17.621 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:17.621 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:17.621 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:17.621 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:17.621 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:17.621 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local 
sorted_s 00:04:17.621 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:17.621 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:17.621 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:17.621 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:17.621 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:17.621 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:17.621 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:17.621 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:17.621 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.621 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.621 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.621 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.621 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.621 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.621 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.621 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.621 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43258948 kB' 'MemAvailable: 47249992 kB' 'Buffers: 6064 kB' 'Cached: 10833092 kB' 'SwapCached: 0 kB' 'Active: 7673396 kB' 'Inactive: 
3689560 kB' 'Active(anon): 7274972 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526540 kB' 'Mapped: 215192 kB' 'Shmem: 6751172 kB' 'KReclaimable: 550768 kB' 'Slab: 1202808 kB' 'SReclaimable: 550768 kB' 'SUnreclaim: 652040 kB' 'KernelStack: 22096 kB' 'PageTables: 8780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8742760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218748 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3450228 kB' 'DirectMap2M: 19304448 kB' 'DirectMap1G: 46137344 kB' 00:04:17.621 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.621 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.621 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.621 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.621 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.621 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.621 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.621 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.621 10:46:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.621 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.621 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.621 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[identical `read -r var val _` / `[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]` / `continue` iterations for the remaining /proc/meminfo keys (Buffers through Percpu) elided]
00:04:17.622 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.622 10:46:24
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.622 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.623 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.623 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.623 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.623 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:17.623 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:17.623 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:17.623 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.623 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:17.623 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:17.623 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.623 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.623 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.623 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.623 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.623 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.623 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.623 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.623 10:46:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43259080 kB' 'MemAvailable: 47250124 kB' 'Buffers: 6064 kB' 'Cached: 10833108 kB' 'SwapCached: 0 kB' 'Active: 7672012 kB' 'Inactive: 3689560 kB' 'Active(anon): 7273588 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525588 kB' 'Mapped: 215112 kB' 'Shmem: 6751188 kB' 'KReclaimable: 550768 kB' 'Slab: 1202772 kB' 'SReclaimable: 550768 kB' 'SUnreclaim: 652004 kB' 'KernelStack: 22048 kB' 'PageTables: 8588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8742780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218732 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3450228 kB' 'DirectMap2M: 19304448 kB' 'DirectMap1G: 46137344 kB' 00:04:17.623 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.623 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.623 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.623 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.623 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.623 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:17.623 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.623 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[identical `read -r var val _` / `[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]` / `continue` iterations for the remaining /proc/meminfo keys (MemAvailable through Unaccepted) elided]
00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 
00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43259332 kB' 'MemAvailable: 47250376 kB' 'Buffers: 6064 kB' 'Cached: 10833112 kB' 'SwapCached: 0 kB' 'Active: 7672372 kB' 'Inactive: 3689560 kB' 'Active(anon): 7273948 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525988 kB' 'Mapped: 215112 kB' 'Shmem: 6751192 kB' 'KReclaimable: 550768 kB' 'Slab: 1202772 kB' 'SReclaimable: 550768 kB' 'SUnreclaim: 652004 kB' 'KernelStack: 22064 kB' 'PageTables: 8636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8742800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218732 kB' 'VmallocChunk: 0 
kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3450228 kB' 'DirectMap2M: 19304448 kB' 'DirectMap1G: 46137344 kB' 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.625 
10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.625 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.626 10:46:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.626 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.627 10:46:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:17.627 nr_hugepages=1024 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:17.627 resv_hugepages=0 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:17.627 surplus_hugepages=0 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:17.627 anon_hugepages=0 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43258988 kB' 'MemAvailable: 47250032 kB' 'Buffers: 6064 kB' 'Cached: 10833156 kB' 'SwapCached: 0 kB' 'Active: 7672064 kB' 'Inactive: 3689560 kB' 'Active(anon): 7273640 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525588 kB' 'Mapped: 215112 kB' 'Shmem: 6751236 kB' 'KReclaimable: 550768 kB' 'Slab: 1202772 kB' 'SReclaimable: 550768 kB' 'SUnreclaim: 652004 kB' 'KernelStack: 22048 kB' 'PageTables: 8588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8742824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218732 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3450228 kB' 'DirectMap2M: 19304448 kB' 'DirectMap1G: 46137344 kB' 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.627 
10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.627 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.628 10:46:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.628 10:46:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.628 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.629 10:46:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.629 10:46:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:17.629 
10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 26372208 kB' 'MemUsed: 6266932 kB' 'SwapCached: 0 kB' 'Active: 2910020 kB' 'Inactive: 231284 kB' 'Active(anon): 2776972 kB' 'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2767776 kB' 'Mapped: 104972 kB' 'AnonPages: 376672 kB' 'Shmem: 2403444 kB' 'KernelStack: 12424 kB' 'PageTables: 5532 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 219944 kB' 'Slab: 534176 kB' 'SReclaimable: 219944 kB' 'SUnreclaim: 314232 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.629 
10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.629 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.630 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.630 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.630 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.630 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.630 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.630 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.630 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.630 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.630 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.630 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:17.630 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:17.630 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.630 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:17.630 10:46:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:17.630 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: common.sh@31-32 scans the remaining node0 meminfo fields (Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free) against HugePages_Surp; none match, each iteration continues]
00:04:17.631 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:17.631 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:17.631 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:17.631 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:17.631 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:17.631 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:17.631 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:17.631 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:17.631 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:04:17.631 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:17.631 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:17.631 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:17.631 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:17.631 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:17.631 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:17.631 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:17.631 10:46:24
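The wrapped trace above is bash xtrace from SPDK's `setup/common.sh` `get_meminfo` helper: it reads a per-node meminfo file, strips the `Node N ` prefix, then scans each `Key: value` line until the requested field matches and echoes its value. A minimal standalone sketch of that scan technique (the `get_field` function name and the sample data are illustrative, not SPDK's code verbatim):

```shell
#!/usr/bin/env bash
# Sketch of the field scan driven by "IFS=': '" / "read -r var val _" in the
# trace: walk meminfo-style "Key: value" lines until the requested key matches.
get_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Non-matching keys fall through to the next line (the long runs of
        # "[[ X == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" pairs above).
        [[ $var == "$get" ]] || continue
        echo "$val"   # value only; a trailing "kB" unit lands in $_
        return 0
    done
    return 1
}

# Illustrative sample standing in for /sys/devices/system/node/node1/meminfo
# after the "Node 1 " prefix has been stripped.
sample='HugePages_Total: 512
HugePages_Free: 512
HugePages_Surp: 0'

get_field HugePages_Surp <<<"$sample"   # prints 0
```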
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:17.631 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 16887828 kB' 'MemUsed: 10768252 kB' 'SwapCached: 0 kB' 'Active: 4762388 kB' 'Inactive: 3458276 kB' 'Active(anon): 4497012 kB' 'Inactive(anon): 0 kB' 'Active(file): 265376 kB' 'Inactive(file): 3458276 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8071484 kB' 'Mapped: 110140 kB' 'AnonPages: 149248 kB' 'Shmem: 4347832 kB' 'KernelStack: 9624 kB' 'PageTables: 3056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 330824 kB' 'Slab: 668588 kB' 'SReclaimable: 330824 kB' 'SUnreclaim: 337764 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: common.sh@31-32 scans the node1 fields MemTotal..HugePages_Free against HugePages_Surp; none match, each iteration continues]
00:04:17.632 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:17.632 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:17.632 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:17.632 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:17.632 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:17.632 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:17.632 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:17.632 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:17.632 node0=512 expecting 512
00:04:17.632 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:17.632 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:17.632 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:17.632 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:17.632 node1=512 expecting 512
00:04:17.632 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:17.632 real 0m4.220s
00:04:17.632 user 0m1.506s
00:04:17.632 sys 0m2.768s
00:04:17.632 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:17.632 10:46:24 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:17.632 ************************************
00:04:17.632 END TEST per_node_1G_alloc
00:04:17.632 ************************************
00:04:17.891 10:46:24 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:17.891 10:46:24 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:17.891 10:46:24 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:17.891 10:46:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:17.891 ************************************
00:04:17.891 START TEST even_2G_alloc
00:04:17.891 ************************************
00:04:17.891 10:46:24 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
00:04:17.891 10:46:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:17.891 10:46:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:17.891 10:46:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:17.891 10:46:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:17.891 10:46:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:17.891 10:46:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:17.891 10:46:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:17.891 10:46:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:17.891 10:46:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:17.891 10:46:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:17.891 10:46:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:17.891 10:46:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:17.891 10:46:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:17.891 10:46:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:17.891 10:46:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:17.891 10:46:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:17.891 10:46:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:04:17.891 10:46:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:17.891 10:46:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:17.891 10:46:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:17.891 10:46:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:17.891 10:46:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:17.891 10:46:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:17.891 10:46:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:17.891 10:46:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:17.891 10:46:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:04:17.891 10:46:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:17.891 10:46:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh
[setup.sh output condensed, 00:04:22.135: 0000:00:04.0 through 0000:00:04.7 and 0000:80:04.0 through 0000:80:04.7 (8086 2021), plus 0000:d8:00.0 (8086 0a54), all report "Already using the vfio-pci driver"]
00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43264024 kB' 'MemAvailable: 47255068 kB' 'Buffers: 6064 kB' 'Cached: 10833272 kB' 'SwapCached: 0 kB' 'Active: 7673328 kB' 'Inactive: 3689560 kB' 'Active(anon): 7274904 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526884 kB' 'Mapped: 215148 kB' 'Shmem: 6751352 kB' 'KReclaimable: 550768 kB' 'Slab: 1202456 kB' 'SReclaimable: 550768 kB' 'SUnreclaim: 651688 kB' 'KernelStack: 22000 kB' 'PageTables: 8424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8743708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218748 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3450228 kB' 'DirectMap2M: 19304448 kB' 'DirectMap1G: 46137344 kB'
[xtrace condensed: common.sh@31-32 begins scanning MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, ... against AnonHugePages; trace truncated here]
setup/common.sh@31 -- # IFS=': ' 00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# continue 00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.136 10:46:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.136 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.136 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.136 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.136 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.136 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.136 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.136 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.136 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.136 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.136 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.136 10:46:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.136 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.136 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.136 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.136 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.136 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.136 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.136 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.136 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.136 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.136 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.136 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.136 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.136 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.136 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.136 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.136 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.136 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.136 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.136 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.136 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.136 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.136 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.136 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.136 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.136 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.136 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.136 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.137 10:46:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.137 10:46:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43263652 kB' 'MemAvailable: 47254696 kB' 'Buffers: 6064 kB' 'Cached: 10833276 kB' 'SwapCached: 0 kB' 'Active: 7673192 kB' 'Inactive: 3689560 kB' 'Active(anon): 7274768 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526760 kB' 'Mapped: 215120 kB' 'Shmem: 6751356 kB' 'KReclaimable: 550768 kB' 'Slab: 1202480 kB' 'SReclaimable: 550768 kB' 'SUnreclaim: 651712 kB' 'KernelStack: 22064 kB' 'PageTables: 8636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8743728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218732 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3450228 kB' 'DirectMap2M: 19304448 kB' 'DirectMap1G: 46137344 kB'
00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:22.137 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r
var val _
[... the same IFS/read/[[ ... ]]/continue scan repeats for each subsequent /proc/meminfo key (MemFree through ShmemPmdMapped in this chunk); every key failing the HugePages_Surp match hits "continue" ...]
00:04:22.138 10:46:29
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.138 10:46:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 
-- # get_meminfo HugePages_Rsvd 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43265368 kB' 'MemAvailable: 47256412 kB' 'Buffers: 6064 kB' 'Cached: 10833292 kB' 'SwapCached: 0 kB' 'Active: 7673220 kB' 'Inactive: 3689560 kB' 'Active(anon): 7274796 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526748 kB' 'Mapped: 215120 kB' 'Shmem: 6751372 kB' 'KReclaimable: 550768 kB' 'Slab: 1202480 kB' 'SReclaimable: 550768 kB' 'SUnreclaim: 651712 kB' 'KernelStack: 22064 kB' 'PageTables: 8636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 
'Committed_AS: 8743748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218732 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3450228 kB' 'DirectMap2M: 19304448 kB' 'DirectMap1G: 46137344 kB' 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.138 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.139 
10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.139 10:46:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.139 10:46:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.139 
10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.139 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.140 10:46:29 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:22.140 nr_hugepages=1024 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:22.140 resv_hugepages=0 00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:22.140 surplus_hugepages=0
00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:22.140 anon_hugepages=0
00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:22.140 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43265624 kB' 'MemAvailable: 47256668 kB' 'Buffers: 6064 kB' 'Cached: 10833316 kB' 'SwapCached: 0 kB' 'Active: 7673236 kB' 'Inactive: 3689560 kB' 'Active(anon): 7274812 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526748 kB' 'Mapped: 215120 kB' 'Shmem: 6751396 kB' 'KReclaimable: 550768 kB' 'Slab: 1202480 kB' 'SReclaimable: 550768 kB' 'SUnreclaim: 651712 kB' 'KernelStack: 22064 kB' 'PageTables: 8636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8743768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218732 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3450228 kB' 'DirectMap2M: 19304448 kB' 'DirectMap1G: 46137344 kB'
[repeated scan iterations elided: for each /proc/meminfo key before HugePages_Total, common.sh@32 evaluates [[ key == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] and hits "continue"]
00:04:22.141 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.141 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:22.141 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:22.141 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:22.141 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:22.141 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:22.141 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:22.141 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
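The `get_meminfo` trace above amounts to a simple key scan: split each `Key: value` line on `': '` and stop at the requested key, skipping everything else (the `continue` lines). A minimal sketch of that pattern, assuming a hypothetical `meminfo_get` name rather than the actual helper in `setup/common.sh`:

```shell
#!/usr/bin/env bash
# Sketch of the key-scan pattern: split each "Key: value ..." line on ': '
# and print the value once the requested key matches.
meminfo_get() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Non-matching keys are skipped, mirroring the "continue" trace above.
        [[ $var == "$get" ]] && { printf '%s\n' "$val"; return 0; }
    done
    return 1
}

# Usage: feed /proc/meminfo (or a per-node meminfo file) on stdin, e.g.
#   meminfo_get HugePages_Total < /proc/meminfo
```

Reading from stdin keeps the sketch file-agnostic; the real helper instead selects `/proc/meminfo` or `/sys/devices/system/node/node<N>/meminfo` based on its `node` argument.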
00:04:22.141 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:22.141 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:22.141 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:22.141 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:22.141 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:22.141 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:22.141 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:22.141 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:22.141 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:04:22.141 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:22.141 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:22.141 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.141 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:22.141 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:22.141 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:22.141 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.141 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:22.141 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:22.141 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 26372732 kB' 'MemUsed: 6266408 kB' 'SwapCached: 0 kB' 'Active: 2911160 kB' 'Inactive: 231284 kB' 'Active(anon): 2778112 kB' 'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2767832 kB' 'Mapped: 104980 kB' 'AnonPages: 377820 kB' 'Shmem: 2403500 kB' 'KernelStack: 12456 kB' 'PageTables: 5632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 219944 kB' 'Slab: 533936 kB' 'SReclaimable: 219944 kB' 'SUnreclaim: 313992 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[repeated scan iterations elided: for each node0 meminfo key, common.sh@32 evaluates [[ key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] and hits "continue"; the log is truncated mid-scan]
00:04:22.142 10:46:29 setup.sh.hugepages.even_2G_alloc --
setup/common.sh@31 -- # IFS=': ' 00:04:22.142 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.142 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.142 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.142 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.142 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.142 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.142 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.142 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.142 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.142 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.142 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.142 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.142 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.142 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.142 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.142 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.142 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.142 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.142 10:46:29 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # echo 0 00:04:22.142 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:22.142 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:22.142 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:22.142 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:22.142 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:22.142 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.142 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:22.142 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:22.142 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.142 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.143 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:22.143 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:22.143 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.143 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.143 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.143 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.143 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 16893516 kB' 'MemUsed: 10762564 kB' 'SwapCached: 0 kB' 'Active: 4762104 kB' 'Inactive: 3458276 kB' 'Active(anon): 4496728 kB' 
'Inactive(anon): 0 kB' 'Active(file): 265376 kB' 'Inactive(file): 3458276 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8071588 kB' 'Mapped: 110140 kB' 'AnonPages: 148924 kB' 'Shmem: 4347936 kB' 'KernelStack: 9608 kB' 'PageTables: 3004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 330824 kB' 'Slab: 668544 kB' 'SReclaimable: 330824 kB' 'SUnreclaim: 337720 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:22.143 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace repetition elided: each node1 meminfo field from MemTotal through HugePages_Free fails the HugePages_Surp comparison and continues] 00:04:22.144 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.144 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.144 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:22.144 10:46:29 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:22.144 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:22.144 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:22.144 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:22.144 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:22.144 node0=512 expecting 512 00:04:22.144 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:22.144 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:22.144 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:22.144 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:22.144 node1=512 expecting 512 00:04:22.144 10:46:29 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:22.144 00:04:22.144 real 0m4.401s 00:04:22.144 user 0m1.630s 00:04:22.144 sys 0m2.850s 00:04:22.144 10:46:29 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:22.144 10:46:29 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:22.144 ************************************ 00:04:22.144 END TEST even_2G_alloc 00:04:22.144 ************************************ 00:04:22.404 10:46:29 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:22.404 10:46:29 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.404 10:46:29 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.404 10:46:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:22.404 
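The loop traced at length above is setup/common.sh's get_meminfo helper scanning a meminfo file line by line until the requested field matches, printing its value, and printing 0 when the field is absent. A minimal self-contained sketch of that lookup, inferred from the xtrace rather than copied from SPDK's code:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo-style lookup seen in the xtrace above:
# split each "Key: value [unit]" line on ': ', compare the key against
# the requested field, and print its value (0 if the key never appears).
# Illustration inferred from the trace, not SPDK's actual implementation.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip non-matching fields
        echo "$val"
        return 0
    done
    echo 0    # field absent: report 0, as the trace's "echo 0" does
}

# Values taken from the node1 meminfo dump in the log above
sample='HugePages_Total: 512
HugePages_Free: 512
HugePages_Surp: 0'

get_meminfo_sketch HugePages_Surp <<< "$sample"   # prints 0
```

On a real node the input would come from /proc/meminfo or /sys/devices/system/node/nodeN/meminfo, with the per-node "Node N " prefix stripped first, as the traced `mem=("${mem[@]#Node +([0-9]) }")` step does.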
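Both tests distribute a hugepage count across the two NUMA nodes: even_2G_alloc lands on 512/512 above, and the odd_alloc test that starts next splits 1025 pages as 513/512. A minimal sketch of that per-node split, with a hypothetical helper name inferred from the traced values rather than taken from setup/hugepages.sh:

```shell
#!/usr/bin/env bash
# Sketch of the per-node hugepage split implied by the traced values:
# even division across nodes, with the remainder going to node 0
# (1025 pages over 2 nodes -> node0=513, node1=512, matching the
# odd_alloc setup). Hypothetical helper inferred from the log.
split_hugepages() {
    local total=$1 nodes=$2 i per rem
    per=$(( total / nodes ))
    rem=$(( total % nodes ))
    for (( i = 0; i < nodes; i++ )); do
        # node 0 absorbs the remainder so the per-node counts sum to total
        echo "node$i=$(( i == 0 ? per + rem : per ))"
    done
}

split_hugepages 1024 2   # even_2G_alloc case: node0=512 then node1=512
split_hugepages 1025 2   # odd_alloc case: node0=513 then node1=512
```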
************************************ 00:04:22.404 START TEST odd_alloc 00:04:22.404 ************************************ 00:04:22.404 10:46:29 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:04:22.404 10:46:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:22.404 10:46:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:22.404 10:46:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:22.404 10:46:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:22.404 10:46:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:22.404 10:46:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:22.404 10:46:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:22.404 10:46:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:22.404 10:46:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:22.404 10:46:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:22.404 10:46:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:22.404 10:46:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:22.404 10:46:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:22.404 10:46:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:22.404 10:46:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:22.404 10:46:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:22.404 10:46:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:22.404 10:46:29 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@84 -- # : 1 00:04:22.404 10:46:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:22.404 10:46:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:22.404 10:46:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:22.404 10:46:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:22.404 10:46:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:22.404 10:46:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:22.404 10:46:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:22.404 10:46:29 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:22.404 10:46:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.404 10:46:29 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:26.595 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:26.595 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:26.595 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:26.595 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:26.595 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:26.595 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:26.595 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:26.595 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:26.595 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:26.595 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:26.595 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:26.595 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:26.595 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 
00:04:26.595 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:26.595 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:26.595 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:26.595 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:26.595 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:26.595 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:26.595 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:26.595 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:26.595 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:26.595 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:26.595 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:26.595 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:26.595 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:26.595 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:26.595 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:26.595 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:26.595 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.595 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.595 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.595 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.595 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.595 10:46:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.595 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.595 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.595 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43245424 kB' 'MemAvailable: 47236436 kB' 'Buffers: 6064 kB' 'Cached: 10833448 kB' 'SwapCached: 0 kB' 'Active: 7677412 kB' 'Inactive: 3689560 kB' 'Active(anon): 7278988 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530776 kB' 'Mapped: 215640 kB' 'Shmem: 6751528 kB' 'KReclaimable: 550736 kB' 'Slab: 1202164 kB' 'SReclaimable: 550736 kB' 'SUnreclaim: 651428 kB' 'KernelStack: 22032 kB' 'PageTables: 8532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 8748960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218780 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3450228 kB' 'DirectMap2M: 19304448 kB' 'DirectMap1G: 46137344 kB' 00:04:26.595 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.595 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.595 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.595 10:46:33 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:26.595 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.595 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.595 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.595 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.595 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.595 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.595 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.595 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.596 
10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.596 10:46:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.596 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:26.597 
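The trace above shows `get_meminfo` walking `/proc/meminfo` one line at a time with `IFS=': ' read -r var val _`, `continue`-ing past every key until it matches the requested one (here `AnonHugePages`, yielding `anon=0`). A minimal stand-alone sketch of that lookup, under the assumption that it mirrors the traced loop (this is an illustration, not the SPDK helper itself):

```shell
#!/usr/bin/env bash
# Sketch (assumption, not the SPDK setup/common.sh code) of the traced
# get_meminfo loop: split each /proc/meminfo line on ': ', skip keys that
# don't match, and print the value column for the requested key.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Matching key found: emit its value (e.g. "0" for HugePages_Surp,
        # or a size in kB for fields like MemTotal) and stop scanning.
        [[ $var == "$get" ]] && { printf '%s\n' "$val"; return 0; }
    done < /proc/meminfo
    return 1   # key not present in /proc/meminfo
}

# Example: the surplus-hugepage count the log queries next.
get_meminfo_sketch HugePages_Surp
```

The `_` in `read -r var val _` swallows the trailing `kB` unit so callers get a bare number, which is why the log's values (e.g. `HugePages_Surp: 0`) can be assigned directly to shell variables like `surp` and `anon`.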
10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43241644 kB' 'MemAvailable: 47232656 kB' 'Buffers: 6064 kB' 'Cached: 10833448 kB' 'SwapCached: 0 kB' 'Active: 7680276 kB' 'Inactive: 3689560 kB' 'Active(anon): 7281852 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533304 kB' 'Mapped: 215988 kB' 'Shmem: 6751528 kB' 'KReclaimable: 550736 kB' 'Slab: 1202156 kB' 'SReclaimable: 550736 kB' 'SUnreclaim: 651420 kB' 'KernelStack: 22048 kB' 'PageTables: 8604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 8750836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218752 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3450228 kB' 'DirectMap2M: 19304448 kB' 'DirectMap1G: 46137344 kB' 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.597 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.598 10:46:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.598 10:46:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.598 10:46:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.598 10:46:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.598 10:46:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.598 
10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.598 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.599 10:46:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.599 10:46:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43242772 kB' 'MemAvailable: 47233784 kB' 'Buffers: 6064 kB' 'Cached: 10833468 kB' 'SwapCached: 0 kB' 'Active: 7673896 kB' 'Inactive: 3689560 kB' 'Active(anon): 7275472 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527272 kB' 'Mapped: 215136 kB' 'Shmem: 6751548 kB' 'KReclaimable: 550736 kB' 'Slab: 1202232 kB' 'SReclaimable: 550736 kB' 'SUnreclaim: 651496 kB' 'KernelStack: 22048 kB' 'PageTables: 8580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 8744740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218748 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3450228 kB' 'DirectMap2M: 19304448 kB' 'DirectMap1G: 46137344 kB' 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.599 
10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.599 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.600 10:46:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.600 10:46:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.600 10:46:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.600 10:46:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.600 10:46:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.600 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.601 10:46:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:26.601 10:46:33 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:26.601 nr_hugepages=1025 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:26.601 resv_hugepages=0 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:26.601 surplus_hugepages=0 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:26.601 anon_hugepages=0 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43243300 kB' 'MemAvailable: 47234312 kB' 'Buffers: 6064 kB' 'Cached: 10833468 kB' 'SwapCached: 0 kB' 'Active: 7674400 kB' 'Inactive: 3689560 kB' 'Active(anon): 7275976 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527776 kB' 'Mapped: 215136 kB' 'Shmem: 6751548 kB' 'KReclaimable: 550736 kB' 'Slab: 1202232 kB' 'SReclaimable: 550736 kB' 'SUnreclaim: 651496 kB' 'KernelStack: 22048 kB' 'PageTables: 8580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486612 kB' 'Committed_AS: 8744760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218748 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3450228 kB' 'DirectMap2M: 19304448 kB' 'DirectMap1G: 46137344 kB' 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.601 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... identical IFS=': ' / read -r var val _ / [[ key == HugePages_Total ]] / continue trace repeated for each remaining /proc/meminfo key (Buffers through Unaccepted) ...]
00:04:26.603 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.603 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:26.603 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:26.603 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:26.603 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:26.603 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:26.603 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.603 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:26.603 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.603 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:26.603 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:26.603 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:26.603 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
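The trace above is the xtrace of a get_meminfo-style lookup: each "Key: value kB" line of /proc/meminfo is split on ': ', non-matching keys hit continue, and the matching key's value (here HugePages_Total = 1025) is echoed. A minimal sketch of that parsing pattern, written from the trace rather than copied from setup/common.sh (the helper name and the optional file argument are illustrative assumptions):

```shell
# Sketch of the meminfo-parsing pattern seen in the trace above
# (hypothetical helper; the real setup/common.sh may differ in detail).
# Splits each "Key:   value [kB]" line on ': ', skips every
# non-matching key exactly like the repeated [[ ... ]] / continue
# lines in the log, then echoes the requested key's value.
get_meminfo_sketch() {
    local get=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # every other key -> continue
        echo "$val"
        return 0
    done <"$file"
    return 1
}

# Demo against a tiny meminfo-style fixture instead of the live system:
fixture=$(mktemp)
printf '%s\n' 'MemTotal: 32639140 kB' 'HugePages_Total: 1025' >"$fixture"
get_meminfo_sketch HugePages_Total "$fixture"   # prints 1025
rm -f "$fixture"
```

The fixture keeps the demo portable; on a Linux box, dropping the second argument reads the live /proc/meminfo instead.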
00:04:26.603 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:26.603 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:26.603 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.603 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:26.603 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:26.603 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.603 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.603 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:26.603 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:26.603 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.603 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.603 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.603 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.603 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 26347364 kB' 'MemUsed: 6291776 kB' 'SwapCached: 0 kB' 'Active: 2912484 kB' 'Inactive: 231284 kB' 'Active(anon): 2779436 kB' 'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2767884 kB' 'Mapped: 104996 kB' 'AnonPages: 379184 kB' 'Shmem: 2403552 kB' 'KernelStack: 12488 kB' 'PageTables: 5668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 219944 kB' 'Slab: 533548 kB' 
'SReclaimable: 219944 kB' 'SUnreclaim: 313604 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:26.603 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.603 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... identical IFS=': ' / read -r var val _ / [[ key == HugePages_Surp ]] / continue trace repeated for each remaining node0 meminfo key (MemFree through HugePages_Free) ...]
00:04:26.604 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.604 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.604 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:26.604 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:26.604 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:26.604 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:26.604 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:26.604 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.604 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:26.604 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:26.604 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.604 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.604 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:26.604 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- #
mem_f=/sys/devices/system/node/node1/meminfo 00:04:26.604 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.604 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.604 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.604 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.604 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 16896360 kB' 'MemUsed: 10759720 kB' 'SwapCached: 0 kB' 'Active: 4761916 kB' 'Inactive: 3458276 kB' 'Active(anon): 4496540 kB' 'Inactive(anon): 0 kB' 'Active(file): 265376 kB' 'Inactive(file): 3458276 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8071712 kB' 'Mapped: 110140 kB' 'AnonPages: 148568 kB' 'Shmem: 4348060 kB' 'KernelStack: 9592 kB' 'PageTables: 3008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 330792 kB' 'Slab: 668684 kB' 'SReclaimable: 330792 kB' 'SUnreclaim: 337892 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:26.604 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.604 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.604 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.604 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.604 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.604 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.604 10:46:33 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:26.604 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.604 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.604 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.604 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.604 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.604 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.604 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.604 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.604 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.604 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.604 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.604 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.605 
10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.605 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.864 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.864 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.864 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.864 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.864 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.864 10:46:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.864 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.864 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.864 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.864 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.864 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.864 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.864 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.864 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.864 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.864 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.864 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.864 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.865 10:46:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.865 10:46:33 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:26.865 10:46:33 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:26.865 node0=512 expecting 513 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:26.865 node1=513 expecting 512 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:26.865 00:04:26.865 real 0m4.447s 00:04:26.865 user 0m1.636s 00:04:26.865 sys 0m2.890s 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:26.865 10:46:33 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:26.865 ************************************ 00:04:26.865 END TEST odd_alloc 00:04:26.865 ************************************ 00:04:26.865 10:46:33 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:26.865 10:46:33 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:26.865 10:46:33 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:26.865 10:46:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:26.865 
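For readers following the trace: the long runs of `IFS=': '` / `read -r var val _` / `[[ field == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]` / `continue` steps above are setup/common.sh's `get_meminfo` scanning a meminfo file field by field until the requested key matches. A minimal sketch of that idea — a simplified, hypothetical rewrite, not the actual mapfile-based implementation in setup/common.sh:

```shell
#!/usr/bin/env bash
# Sketch of the lookup the xtrace above performs (names and structure
# are assumptions; the real get_meminfo uses mapfile + array expansion).
get_meminfo() {
  local get=$1 node=$2
  local mem_f=/proc/meminfo line var val

  # Per-node queries switch to the node's own meminfo when it exists,
  # matching the "[[ -e /sys/devices/system/node/node1/meminfo ]]" step.
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi

  while read -r line; do
    line=${line#Node $node }       # per-node files prefix every row with "Node N "
    IFS=': ' read -r var val _ <<< "$line"
    if [[ $var == "$get" ]]; then  # every non-matching field is the "continue" in the trace
      echo "${val:-0}"
      return 0
    fi
  done < "$mem_f"
  echo 0
}
```

So `get_meminfo HugePages_Surp 1` reads `/sys/devices/system/node/node1/meminfo` and walks every field (`MemTotal`, `MemFree`, ... `HugePages_Free`) before hitting `HugePages_Surp` and echoing its value — which is why the trace prints one compare-and-continue pair per meminfo field before the final `echo 0` / `return 0`.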
************************************ 00:04:26.865 START TEST custom_alloc 00:04:26.865 ************************************ 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 
00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@62 -- # user_nodes=() 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:26.865 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:26.866 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:26.866 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:26.866 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:26.866 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:26.866 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:26.866 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:26.866 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:26.866 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # 
get_test_nr_hugepages_per_node 00:04:26.866 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:26.866 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:26.866 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:26.866 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:26.866 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:26.866 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:26.866 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:26.866 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:26.866 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:26.866 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:26.866 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:26.866 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:26.866 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:26.866 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:26.866 10:46:33 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:26.866 10:46:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.866 10:46:33 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:31.056 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:31.056 0000:00:04.6 (8086 2021): Already using the 
vfio-pci driver 00:04:31.056 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:31.056 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:31.056 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:31.056 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:31.056 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:31.056 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:31.056 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:31.056 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:31.056 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:31.056 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:31.056 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:31.056 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:31.056 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:31.056 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:31.056 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always 
[madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42188892 kB' 'MemAvailable: 46179904 kB' 'Buffers: 6064 kB' 'Cached: 10833604 kB' 'SwapCached: 0 kB' 'Active: 7676156 kB' 'Inactive: 3689560 kB' 'Active(anon): 7277732 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528780 kB' 'Mapped: 215300 kB' 'Shmem: 6751684 kB' 'KReclaimable: 550736 kB' 'Slab: 1203496 kB' 'SReclaimable: 550736 kB' 'SUnreclaim: 652760 kB' 'KernelStack: 22256 kB' 'PageTables: 9308 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 8748228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219036 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3450228 kB' 'DirectMap2M: 19304448 kB' 'DirectMap1G: 46137344 kB' 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.056 10:46:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.056 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.057 10:46:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.057 10:46:37 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.057 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.058 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.058 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.058 
10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.058 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.058 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.058 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.058 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.058 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.058 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.058 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.058 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.058 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.058 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.058 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:31.058 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:31.058 10:46:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:31.058 10:46:37 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:31.058 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:31.058 10:46:37 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42189316 kB' 'MemAvailable: 46180328 kB' 'Buffers: 6064 kB' 'Cached: 10833604 kB' 'SwapCached: 0 kB' 'Active: 7676320 kB' 'Inactive: 3689560 kB' 'Active(anon): 7277896 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528948 kB' 'Mapped: 215300 kB' 'Shmem: 6751684 kB' 'KReclaimable: 550736 kB' 'Slab: 1203472 kB' 'SReclaimable: 550736 kB' 'SUnreclaim: 652736 kB' 'KernelStack: 22160 kB' 'PageTables: 8624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 8745380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218892 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3450228 kB' 'DirectMap2M: 19304448 kB' 'DirectMap1G: 46137344 kB' 00:04:31.058 
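The trace above (and the identical `AnonHugePages` pass before it) shows the lookup pattern used by `setup/common.sh`'s `get_meminfo`: read `/proc/meminfo` into an array, strip any per-node `Node <n> ` prefix, then scan `key: value` pairs until the requested field matches, echoing its value (here `AnonHugePages` resolved to `0`, so `hugepages.sh@97` set `anon=0`). A minimal sketch of that pattern, assuming bash with `extglob`; the function name and the optional file argument are illustrative, not the verbatim SPDK helper:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo lookup pattern seen in the trace.
# Assumption: extglob is needed for the "Node <n> " prefix-strip pattern.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 mem_f=${2:-/proc/meminfo}
    local mem line var val _
    # Read the whole meminfo file into an array, one line per element.
    mapfile -t mem < "$mem_f"
    # Per-node meminfo files prefix every line with "Node <n> ";
    # strip it so the same parser handles both variants.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        # Split "Key: value kB" into key and numeric value.
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}
```

The xtrace in the log is verbose precisely because this scan visits every non-matching field (each `[[ ... ]]` / `continue` pair) before reaching the one requested.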
10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.058 10:46:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.058 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.059 10:46:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.059 10:46:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.059 10:46:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.059 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42190536 kB' 'MemAvailable: 46181548 kB' 'Buffers: 6064 kB' 'Cached: 10833624 kB' 'SwapCached: 0 kB' 'Active: 7675160 kB' 'Inactive: 3689560 kB' 'Active(anon): 7276736 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527864 kB' 'Mapped: 215160 kB' 'Shmem: 6751704 kB' 'KReclaimable: 550736 kB' 'Slab: 1203556 kB' 'SReclaimable: 550736 kB' 'SUnreclaim: 652820 kB' 'KernelStack: 22048 kB' 'PageTables: 8600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 8745404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218892 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3450228 kB' 'DirectMap2M: 19304448 kB' 'DirectMap1G: 46137344 kB' 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.060 10:46:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.060 10:46:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.060 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.061 10:46:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.061 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.062 
10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:31.062 nr_hugepages=1536 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:31.062 resv_hugepages=0 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:31.062 surplus_hugepages=0 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:31.062 anon_hugepages=0 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:31.062 10:46:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 42190848 kB' 'MemAvailable: 46181860 kB' 'Buffers: 6064 kB' 'Cached: 10833644 kB' 'SwapCached: 0 kB' 'Active: 7675340 kB' 'Inactive: 3689560 kB' 'Active(anon): 7276916 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528440 kB' 'Mapped: 215160 kB' 'Shmem: 6751724 kB' 'KReclaimable: 550736 kB' 'Slab: 1203556 kB' 'SReclaimable: 550736 kB' 'SUnreclaim: 652820 kB' 'KernelStack: 22080 kB' 'PageTables: 8772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963348 kB' 'Committed_AS: 8745056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218860 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3450228 kB' 'DirectMap2M: 19304448 kB' 'DirectMap1G: 46137344 kB' 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.062 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue [... identical "IFS=': ' / read -r var val _ / [[ <field> == HugePages_Total ]] / continue" trace entries repeated for every remaining /proc/meminfo field, MemFree through HugePages_Total's predecessor Unaccepted, trimmed ...] 00:04:31.064 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.064 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.064 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.064 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:31.064 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:31.064 10:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:31.064 10:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:31.064 10:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:31.064 10:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:31.064 10:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:31.064 10:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:31.064 10:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:31.064 10:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:31.064 10:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:31.064 10:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:31.064 10:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:31.064 10:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:31.064 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:31.064 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:31.064 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- #
local var val 00:04:31.064 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.064 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.064 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:31.064 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:31.064 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.064 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.064 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.064 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.064 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 26321444 kB' 'MemUsed: 6317696 kB' 'SwapCached: 0 kB' 'Active: 2911288 kB' 'Inactive: 231284 kB' 'Active(anon): 2778240 kB' 'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2767904 kB' 'Mapped: 105012 kB' 'AnonPages: 377856 kB' 'Shmem: 2403572 kB' 'KernelStack: 12456 kB' 'PageTables: 5536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 219944 kB' 'Slab: 534348 kB' 'SReclaimable: 219944 kB' 'SUnreclaim: 314404 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:31.064 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.064 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.064 
10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.064 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.064 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.064 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue [... identical "IFS=': ' / read -r var val _ / [[ <field> == HugePages_Surp ]] / continue" trace entries repeated for every remaining node0 meminfo field, MemUsed through HugePages_Total, trimmed ...] 00:04:31.065 10:46:38
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.065 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.066 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.066 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:31.066 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.066 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.066 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.066 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:31.066 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:31.066 10:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:31.066 10:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:31.066 10:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:31.066 10:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:31.066 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:31.066 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:31.066 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:31.066 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.066 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.066 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:31.066 10:46:38 
setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:31.066 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.066 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.066 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.066 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.066 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656080 kB' 'MemFree: 15870100 kB' 'MemUsed: 11785980 kB' 'SwapCached: 0 kB' 'Active: 4763788 kB' 'Inactive: 3458276 kB' 'Active(anon): 4498412 kB' 'Inactive(anon): 0 kB' 'Active(file): 265376 kB' 'Inactive(file): 3458276 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8071828 kB' 'Mapped: 110148 kB' 'AnonPages: 150268 kB' 'Shmem: 4348176 kB' 'KernelStack: 9592 kB' 'PageTables: 2968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 330792 kB' 'Slab: 669208 kB' 'SReclaimable: 330792 kB' 'SUnreclaim: 338416 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:31.066 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.066 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue [... identical IFS=': ' / read / check / continue xtrace repeated for the remaining meminfo keys (MemFree through HugePages_Free) ...] 00:04:31.067 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.067 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:31.067 10:46:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:31.067 10:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:31.067 10:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:31.067 10:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:31.067 10:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:31.067 10:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:31.067 node0=512 expecting 512 00:04:31.067 10:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:31.067 10:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:31.067 10:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:31.067 10:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:31.067 node1=1024 expecting 1024 00:04:31.067 10:46:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:31.067 00:04:31.067 real 0m4.373s 00:04:31.067 user 0m1.617s 00:04:31.067 sys 0m2.826s 00:04:31.067 10:46:38 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:31.067 10:46:38 setup.sh.hugepages.custom_alloc -- 
common/autotest_common.sh@10 -- # set +x 00:04:31.067 ************************************ 00:04:31.067 END TEST custom_alloc 00:04:31.067 ************************************ 00:04:31.326 10:46:38 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:31.326 10:46:38 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:31.326 10:46:38 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:31.326 10:46:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:31.326 ************************************ 00:04:31.326 START TEST no_shrink_alloc 00:04:31.326 ************************************ 00:04:31.326 10:46:38 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:04:31.326 10:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:31.326 10:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:31.326 10:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:31.326 10:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:31.326 10:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:31.326 10:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:31.326 10:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:31.326 10:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:31.326 10:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:31.326 10:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:31.326 10:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 
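The xtrace runs above all come from the same `get_meminfo` pattern in `setup/common.sh`: split each `Key: value` line of `/proc/meminfo` (or a per-node sysfs copy) with `IFS=': '`, hit `continue` for every key that is not the requested one, and echo the matching value. A minimal stand-alone sketch of that pattern; `meminfo_get` is a hypothetical name, and the real helper additionally selects `/sys/devices/system/node/nodeN/meminfo` and strips its `Node N` prefix:

```shell
# Sketch of the key-lookup loop the xtrace repeats (assumed, not the
# actual setup/common.sh source). Reads "Key: value" pairs from stdin.
meminfo_get() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Every non-matching key produces one 'continue' line in the xtrace.
        [ "$var" = "$get" ] || continue
        echo "$val"
        return 0
    done
    # Illustrative fallback: report 0 when the key is absent.
    echo 0
}
```

Typical use would be `meminfo_get HugePages_Surp < /proc/meminfo`, mirroring the `get_meminfo HugePages_Surp 1` calls in the log.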
00:04:31.326 10:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:31.326 10:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:31.326 10:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:31.326 10:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:31.326 10:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:31.326 10:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:31.326 10:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:31.326 10:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:31.326 10:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:31.326 10:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.326 10:46:38 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:35.519 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:35.519 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:35.519 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:35.519 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:35.519 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:35.519 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:35.519 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:35.519 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:35.519 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:35.519 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:35.519 0000:80:04.5 (8086 2021): Already using the 
vfio-pci driver 00:04:35.519 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:35.519 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:35.519 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:35.519 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:35.519 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:35.519 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:35.519 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:35.519 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:35.519 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:35.519 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:35.519 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:35.519 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:35.519 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:35.519 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:35.519 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:35.519 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:35.519 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:35.519 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:35.519 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.519 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.519 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ 
-e /sys/devices/system/node/node/meminfo ]] 00:04:35.519 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.519 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.519 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.519 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.519 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.519 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43236052 kB' 'MemAvailable: 47227064 kB' 'Buffers: 6064 kB' 'Cached: 10833780 kB' 'SwapCached: 0 kB' 'Active: 7677392 kB' 'Inactive: 3689560 kB' 'Active(anon): 7278968 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529892 kB' 'Mapped: 215204 kB' 'Shmem: 6751860 kB' 'KReclaimable: 550736 kB' 'Slab: 1203184 kB' 'SReclaimable: 550736 kB' 'SUnreclaim: 652448 kB' 'KernelStack: 22016 kB' 'PageTables: 8820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8749308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218892 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3450228 kB' 'DirectMap2M: 19304448 kB' 'DirectMap1G: 46137344 kB' 00:04:35.519 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.519 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [... identical IFS=': ' / read / check / continue xtrace repeated for the remaining meminfo keys (MemFree through Dirty) ...] 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.520 10:46:42 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.520 10:46:42 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.520 
10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.520 10:46:42 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.520 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43236948 kB' 'MemAvailable: 47227960 kB' 'Buffers: 6064 kB' 'Cached: 10833784 kB' 'SwapCached: 0 kB' 'Active: 7677892 kB' 'Inactive: 3689560 kB' 'Active(anon): 7279468 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530392 kB' 'Mapped: 215252 kB' 'Shmem: 6751864 kB' 'KReclaimable: 550736 kB' 'Slab: 1203200 kB' 'SReclaimable: 550736 kB' 'SUnreclaim: 652464 kB' 'KernelStack: 22160 kB' 'PageTables: 8808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8749324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218892 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3450228 kB' 'DirectMap2M: 19304448 kB' 'DirectMap1G: 46137344 kB' 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.521 
10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.521 10:46:42 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.521 
10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.521 
10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.521 
10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.521 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.522 10:46:42 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.522 
10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.522
[... repetitive field-scan trace elided: setup/common.sh@31-32 (IFS=': ', read -r var val _, [[ $var == HugePages_Surp ]], continue) repeats for each remaining /proc/meminfo key from Committed_AS through HugePages_Rsvd ...]
00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@18 -- # local node= 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.522 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.523 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43236612 kB' 'MemAvailable: 47227624 kB' 'Buffers: 6064 kB' 'Cached: 10833804 kB' 'SwapCached: 0 kB' 'Active: 7676616 kB' 'Inactive: 3689560 kB' 'Active(anon): 7278192 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529500 kB' 'Mapped: 215176 kB' 'Shmem: 6751884 kB' 'KReclaimable: 550736 kB' 'Slab: 1203192 kB' 'SReclaimable: 550736 kB' 'SUnreclaim: 652456 kB' 'KernelStack: 22000 kB' 'PageTables: 8432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8749348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218796 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3450228 kB' 'DirectMap2M: 19304448 kB' 'DirectMap1G: 46137344 kB' 00:04:35.523
[... repetitive field-scan trace elided: setup/common.sh@31-32 (IFS=': ', read -r var val _, [[ $var == HugePages_Rsvd ]], continue) repeats for each /proc/meminfo key from MemTotal through HugePages_Free ...]
10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.524 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.524 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:35.524 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:35.524 10:46:42
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:35.524 nr_hugepages=1024 00:04:35.525 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:35.525 resv_hugepages=0 00:04:35.525 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:35.525 surplus_hugepages=0 00:04:35.525 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:35.525 anon_hugepages=0 00:04:35.525 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:35.525 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:35.525 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:35.525 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:35.525 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:35.525 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:35.525 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.525 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.525 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.525 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.525 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.525 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.525 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.525 10:46:42 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43233340 kB' 'MemAvailable: 47224352 kB' 'Buffers: 6064 kB' 'Cached: 10833824 kB' 'SwapCached: 0 kB' 'Active: 7677280 kB' 'Inactive: 3689560 kB' 'Active(anon): 7278856 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530156 kB' 'Mapped: 215176 kB' 'Shmem: 6751904 kB' 'KReclaimable: 550736 kB' 'Slab: 1203192 kB' 'SReclaimable: 550736 kB' 'SUnreclaim: 652456 kB' 'KernelStack: 22240 kB' 'PageTables: 9008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8749368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218956 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3450228 kB' 'DirectMap2M: 19304448 kB' 'DirectMap1G: 46137344 kB' 00:04:35.525 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.525 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.525 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.525 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.525 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.525 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:04:35.525 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.525
[... repetitive field-scan trace elided: setup/common.sh@31-32 (IFS=': ', read -r var val _, [[ $var == HugePages_Total ]], continue) repeats for each /proc/meminfo key from MemFree onward ...]
10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.525 10:46:42 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@32 -- # continue 00:04:35.525 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.525 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.525 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.525 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.525 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.525 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.525 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.525 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.525 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.525 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.525 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.525 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.525 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.525 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.525 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.525 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.526 10:46:42 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.526 
10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.526 10:46:42 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:35.526 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:35.527 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:35.527 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:35.527 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:35.527 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:35.527 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:35.527 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:35.527 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:35.527 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:35.527 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:35.527 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:35.527 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.527 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:35.527 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:35.527 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.527 10:46:42 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.527 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:35.527 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:35.527 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.527 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.527 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.527 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.527 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 25269656 kB' 'MemUsed: 7369484 kB' 'SwapCached: 0 kB' 'Active: 2913324 kB' 'Inactive: 231284 kB' 'Active(anon): 2780276 kB' 'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2768008 kB' 'Mapped: 105024 kB' 'AnonPages: 379676 kB' 'Shmem: 2403676 kB' 'KernelStack: 12648 kB' 'PageTables: 6228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 219944 kB' 'Slab: 534024 kB' 'SReclaimable: 219944 kB' 'SUnreclaim: 314080 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:35.527 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.527 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.527 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.527 
10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[trace condensed: the same setup/common.sh@31-32 scan repeats over node0's meminfo fields (MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted), each non-match hitting "continue" until HugePages_Surp is reached]
00:04:35.528 10:46:42 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:35.528 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.528 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.528 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.528 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.528 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.528 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:35.528 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.528 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.528 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.528 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.528 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:35.528 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:35.528 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:35.528 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:35.528 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:35.528 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:35.528 node0=1024 expecting 1024 00:04:35.528 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:35.528 10:46:42 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:35.528 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:35.528 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:35.528 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.528 10:46:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:39.721 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:39.721 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:39.721 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:39.721 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:39.721 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:39.721 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:39.721 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:39.721 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:39.721 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:39.721 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:39.721 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:39.721 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:39.721 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:39.721 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:39.721 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:39.721 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:39.721 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:39.721 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:39.721 10:46:46 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.721 10:46:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43265424 kB' 'MemAvailable: 47256436 kB' 'Buffers: 6064 kB' 'Cached: 10833932 kB' 'SwapCached: 0 kB' 'Active: 7676996 kB' 'Inactive: 3689560 kB' 'Active(anon): 7278572 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529796 kB' 'Mapped: 215228 kB' 'Shmem: 6752012 kB' 'KReclaimable: 550736 kB' 'Slab: 1203520 kB' 'SReclaimable: 550736 kB' 'SUnreclaim: 652784 kB' 'KernelStack: 22032 kB' 'PageTables: 8540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8747392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218764 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3450228 kB' 'DirectMap2M: 19304448 kB' 'DirectMap1G: 46137344 kB' 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.721 
10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.721 
10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.721 10:46:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.721 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.722 10:46:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.722 10:46:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.722 10:46:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.722 10:46:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.722 10:46:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.722 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.723 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 
43265328 kB' 'MemAvailable: 47256340 kB' 'Buffers: 6064 kB' 'Cached: 10833936 kB' 'SwapCached: 0 kB' 'Active: 7677496 kB' 'Inactive: 3689560 kB' 'Active(anon): 7279072 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530348 kB' 'Mapped: 215168 kB' 'Shmem: 6752016 kB' 'KReclaimable: 550736 kB' 'Slab: 1203592 kB' 'SReclaimable: 550736 kB' 'SUnreclaim: 652856 kB' 'KernelStack: 22064 kB' 'PageTables: 8636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8747412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218716 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3450228 kB' 'DirectMap2M: 19304448 kB' 'DirectMap1G: 46137344 kB' 00:04:39.723 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.723 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.723 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.723 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.723 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.723 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.723 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.723 10:46:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.723 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.723 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.723
[... identical setup/common.sh@31 IFS=': ' / read / setup/common.sh@32 continue trace records repeated for each remaining /proc/meminfo key (Buffers through HugePages_Total) while scanning for HugePages_Surp ...]
00:04:39.724 10:46:46
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.724 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.724 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.724 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.724 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.724 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.724 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.724 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.724 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.724 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.724 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:39.724 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:39.724 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:39.724 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:39.724 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:39.724 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:39.724 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:39.724 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.724 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
00:04:39.724 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.724 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.724 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.724 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.724 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.724 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43268964 kB' 'MemAvailable: 47259976 kB' 'Buffers: 6064 kB' 'Cached: 10833936 kB' 'SwapCached: 0 kB' 'Active: 7677188 kB' 'Inactive: 3689560 kB' 'Active(anon): 7278764 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530036 kB' 'Mapped: 215168 kB' 'Shmem: 6752016 kB' 'KReclaimable: 550736 kB' 'Slab: 1203592 kB' 'SReclaimable: 550736 kB' 'SUnreclaim: 652856 kB' 'KernelStack: 22064 kB' 'PageTables: 8636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8747432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218716 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3450228 kB' 'DirectMap2M: 19304448 kB' 'DirectMap1G: 46137344 kB' 00:04:39.724 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
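The trace above is setup/common.sh's get_meminfo helper walking /proc/meminfo line by line with an `IFS=': ' read -r var val _` loop, hitting `continue` for every key until the requested one matches, then echoing its value. The following is a simplified sketch of that pattern, not the actual script: the real helper also reads per-NUMA-node `/sys/devices/system/node/node<N>/meminfo` files and strips the `Node N ` prefix via mapfile, which is omitted here.

```shell
#!/usr/bin/env bash
# Minimal sketch of the get_meminfo pattern traced above: scan
# /proc/meminfo with an IFS=': ' read loop, skip non-matching keys,
# and print the value of the requested one.
get_meminfo() {
    local get=$1 mem_f=/proc/meminfo var val _
    # Fall back to 0 when the file is unreadable (e.g. non-Linux hosts)
    [[ -r $mem_f ]] || { echo 0; return 0; }
    while IFS=': ' read -r var val _; do
        # On a match, print the numeric value (unit field lands in _)
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
        continue   # non-matching key; the traced @32 branch
    done < "$mem_f"
    echo 0   # key absent: report 0, as the traced run does
}

get_meminfo HugePages_Surp
```

In the log above this same scan yields `surp=0`, since the snapshot reports `HugePages_Surp: 0`.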
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.724 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.724
[... identical setup/common.sh@31 IFS=': ' / read / setup/common.sh@32 continue trace records repeated for each subsequent /proc/meminfo key while scanning for HugePages_Rsvd ...]
setup/common.sh@31 -- # read -r var val _ 00:04:39.725 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.725 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.725 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.725 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.725 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.725 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.726 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.726 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.726 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.726 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.726 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.726 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.726 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.726 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.726 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.726 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.726 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.726 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.726 10:46:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.726 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.726 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.726 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.726 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.726 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.726 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.726 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.726 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.726 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.726 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.726 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.726 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:39.726 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:39.987 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:39.987 nr_hugepages=1024 00:04:39.987 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:39.987 resv_hugepages=0 00:04:39.987 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:39.987 surplus_hugepages=0 00:04:39.987 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:39.987 
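The trace above is setup/common.sh's get_meminfo walking /proc/meminfo one "key: value" line at a time with `IFS=': '` and `read`, taking the `continue` branch for every non-matching field until the requested key (HugePages_Rsvd here) is found. A minimal standalone sketch of that parsing pattern follows; the helper name `get_meminfo_field` and the sample file are illustrative assumptions, not part of the SPDK scripts:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo field scan seen in the trace: split each
# meminfo line on ': ' and echo the value of the first matching key.
# get_meminfo_field is a hypothetical name, not the SPDK function.
get_meminfo_field() {
    local get=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # non-matching keys fall through -- this is what produces the
        # long [[ ... ]] / continue runs in the xtrace output above
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$file"
    return 1
}

# demo against a small sample instead of the live /proc/meminfo
sample=$(mktemp)
printf '%s\n' 'MemTotal: 60295220 kB' 'HugePages_Total: 1024' 'HugePages_Rsvd: 0' > "$sample"
get_meminfo_field HugePages_Rsvd "$sample"   # prints: 0
rm -f "$sample"
```

The linear scan restarts from the top of the file for every lookup, which is why each get_meminfo call in the log replays the whole field list.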
anon_hugepages=0
00:04:39.987 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:39.987 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:39.987 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:39.987 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:39.987 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:39.987 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:39.987 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:39.987 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:39.987 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:39.987 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:39.987 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.987 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:39.987 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:39.987 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:39.987 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295220 kB' 'MemFree: 43270348 kB' 'MemAvailable: 47261360 kB' 'Buffers: 6064 kB' 'Cached: 10833936 kB' 'SwapCached: 0 kB' 'Active: 7677732 kB' 'Inactive: 3689560 kB' 'Active(anon): 7279308 kB' 'Inactive(anon): 0 kB' 'Active(file): 398424 kB' 'Inactive(file): 3689560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap:
0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530572 kB' 'Mapped: 215168 kB' 'Shmem: 6752016 kB' 'KReclaimable: 550736 kB' 'Slab: 1203592 kB' 'SReclaimable: 550736 kB' 'SUnreclaim: 652856 kB' 'KernelStack: 22080 kB' 'PageTables: 8684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487636 kB' 'Committed_AS: 8747456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218716 kB' 'VmallocChunk: 0 kB' 'Percpu: 113344 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3450228 kB' 'DirectMap2M: 19304448 kB' 'DirectMap1G: 46137344 kB'
00:04:39.987 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:39.987 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:39.987 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:39.987 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[trace condensed: the identical compare/continue/IFS/read cycle repeats for each field from MemFree through Unaccepted, none matching HugePages_Total]
00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in
/sys/devices/system/node/node+([0-9])
00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.989 10:46:46
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 25308940 kB' 'MemUsed: 7330200 kB' 'SwapCached: 0 kB' 'Active: 2912128 kB' 'Inactive: 231284 kB' 'Active(anon): 2779080 kB' 'Inactive(anon): 0 kB' 'Active(file): 133048 kB' 'Inactive(file): 231284 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2768044 kB' 'Mapped: 105028 kB' 'AnonPages: 378504 kB' 'Shmem: 2403712 kB' 'KernelStack: 12440 kB' 'PageTables: 5528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 219944 kB' 'Slab: 534412 kB' 'SReclaimable: 219944 kB' 'SUnreclaim: 314468 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.989 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.990 10:46:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:39.990 node0=1024 expecting 1024 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:39.990 00:04:39.990 real 0m8.667s 00:04:39.990 user 0m3.179s 00:04:39.990 sys 0m5.653s 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.990 10:46:46 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:39.990 ************************************ 00:04:39.990 END TEST no_shrink_alloc 00:04:39.990 ************************************ 00:04:39.990 10:46:46 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:39.990 10:46:46 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:39.990 10:46:46 setup.sh.hugepages -- 
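The long trace above is setup/common.sh's get_meminfo walking a meminfo file one "key: value" line at a time with IFS=': ' until the requested key matches. A minimal standalone sketch of that parsing idea, fed a canned snapshot instead of the live /proc/meminfo (`get_meminfo_sketch` and `$snapshot` are illustrative stand-ins, not the real helpers):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo idea from setup/common.sh: scan "key: value"
# lines with IFS=': ' until the requested key matches, then print the value.
# The snapshot is canned sample data, not a live /proc/meminfo read.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

snapshot='MemTotal: 32639140 kB
HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Surp: 0'

get_meminfo_sketch HugePages_Total <<<"$snapshot"   # prints "1024"
```

The real helper additionally strips the "Node 0 " prefix from per-node meminfo lines (common.sh@29) before scanning, which is why the same loop works for both /proc/meminfo and /sys/devices/system/node/node0/meminfo.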
setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:39.990 10:46:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:39.990 10:46:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:39.990 10:46:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:39.990 10:46:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:39.990 10:46:46 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:39.990 10:46:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:39.990 10:46:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:39.990 10:46:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:39.990 10:46:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:39.991 10:46:46 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:39.991 10:46:46 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:39.991 00:04:39.991 real 0m32.945s 00:04:39.991 user 0m11.330s 00:04:39.991 sys 0m20.261s 00:04:39.991 10:46:46 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.991 10:46:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:39.991 ************************************ 00:04:39.991 END TEST hugepages 00:04:39.991 ************************************ 00:04:39.991 10:46:47 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/driver.sh 00:04:39.991 10:46:47 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.991 10:46:47 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.991 10:46:47 setup.sh -- common/autotest_common.sh@10 
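The clear_hp trace above zeroes every per-node hugepage pool by echoing 0 into each nr_hugepages knob. A hedged sketch of the same loop run against a throwaway directory that mimics the sysfs layout, since writing the real /sys/devices/system/node knobs requires root:

```shell
#!/usr/bin/env bash
# Sketch of clear_hp from setup/hugepages.sh: write 0 to every per-node
# nr_hugepages knob. A temp directory stands in for the real sysfs tree.
sysroot=$(mktemp -d)
mkdir -p "$sysroot"/node{0,1}/hugepages/hugepages-{2048kB,1048576kB}
for hp in "$sysroot"/node*/hugepages/hugepages-*; do
    echo 1024 > "$hp/nr_hugepages"   # pretend pages were reserved earlier
done

clear_hp_sketch() {
    local hp
    for hp in "$sysroot"/node*/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"
    done
}

clear_hp_sketch
cat "$sysroot/node0/hugepages/hugepages-2048kB/nr_hugepages"   # prints "0"
```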
-- # set +x 00:04:39.991 ************************************ 00:04:39.991 START TEST driver 00:04:39.991 ************************************ 00:04:39.991 10:46:47 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/driver.sh 00:04:40.249 * Looking for test storage... 00:04:40.249 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:04:40.249 10:46:47 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:40.249 10:46:47 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:40.249 10:46:47 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:46.846 10:46:52 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:46.846 10:46:52 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:46.846 10:46:52 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.846 10:46:52 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:46.846 ************************************ 00:04:46.846 START TEST guess_driver 00:04:46.846 ************************************ 00:04:46.847 10:46:52 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:04:46.847 10:46:52 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:46.847 10:46:52 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:46.847 10:46:52 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:46.847 10:46:52 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:46.847 10:46:52 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:46.847 10:46:52 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:46.847 10:46:52 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e 
/sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:46.847 10:46:52 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:46.847 10:46:52 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:46.847 10:46:52 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 256 > 0 )) 00:04:46.847 10:46:52 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:46.847 10:46:52 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:46.847 10:46:52 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:46.847 10:46:52 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:46.847 10:46:52 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:46.847 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:46.847 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:46.847 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:46.847 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:46.847 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:46.847 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:46.847 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:46.847 10:46:52 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:46.847 10:46:52 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:46.847 10:46:52 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:46.847 10:46:52 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:46.847 
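The pick_driver/is_driver logic traced above treats a kernel module as usable when `modprobe --show-depends` resolves it to one or more .ko files, which is exactly the `== *\.\k\o*` pattern test at driver.sh@12. A small sketch of that string test, with the modprobe output canned from the log so it runs without modprobe or the module installed:

```shell
#!/usr/bin/env bash
# Sketch of the is_driver test from setup/driver.sh: a module counts as
# available when `modprobe --show-depends` output mentions .ko files.
# The deps string is canned from the log rather than a live modprobe call.
is_driver_sketch() {
    local deps=$1
    [[ $deps == *.ko* ]]
}

deps='insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz'
if is_driver_sketch "$deps"; then
    echo vfio-pci   # prints "vfio-pci": the driver pick succeeds
fi
```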
10:46:52 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:04:46.847 Looking for driver=vfio-pci
00:04:46.847 10:46:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:46.847 10:46:52 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:04:46.847 10:46:52 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:04:46.847 10:46:52 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config
00:04:49.376 10:46:56 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:49.376 10:46:56 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:04:49.376 10:46:56 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
[... the same @58 / @61 / @57 marker check repeats for each remaining line of the config output ...]
00:04:51.537 10:46:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:51.537 10:46:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:04:51.537 10:46:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:51.537 10:46:58
setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:51.537 10:46:58 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:51.537 10:46:58 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:51.537 10:46:58 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:58.100 00:04:58.100 real 0m11.339s 00:04:58.100 user 0m2.877s 00:04:58.100 sys 0m5.796s 00:04:58.100 10:47:04 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.100 10:47:04 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:58.100 ************************************ 00:04:58.100 END TEST guess_driver 00:04:58.100 ************************************ 00:04:58.100 00:04:58.100 real 0m17.116s 00:04:58.100 user 0m4.506s 00:04:58.100 sys 0m9.110s 00:04:58.100 10:47:04 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.100 10:47:04 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:58.100 ************************************ 00:04:58.100 END TEST driver 00:04:58.100 ************************************ 00:04:58.100 10:47:04 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/devices.sh 00:04:58.100 10:47:04 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:58.100 10:47:04 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.100 10:47:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:58.100 ************************************ 00:04:58.100 START TEST devices 00:04:58.100 ************************************ 00:04:58.100 10:47:04 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/devices.sh 00:04:58.100 * Looking for test storage... 
00:04:58.100 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:04:58.100 10:47:04 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:58.100 10:47:04 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:58.100 10:47:04 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:58.100 10:47:04 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:05:02.287 10:47:08 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:02.287 10:47:08 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:02.287 10:47:08 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:02.287 10:47:08 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:02.287 10:47:08 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:02.287 10:47:08 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:02.287 10:47:08 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:02.287 10:47:08 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:02.287 10:47:08 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:02.287 10:47:08 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:02.287 10:47:08 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:02.287 10:47:08 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:02.287 10:47:08 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:02.287 10:47:08 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:02.287 10:47:08 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:02.287 10:47:08 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 
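get_zoned_devs above decides whether each /sys/block/nvme* device is zoned by reading its queue/zoned attribute and comparing against "none". A sketch of that check run against a fake /sys/block tree (paths and the `_sketch` names are illustrative, not the real autotest_common.sh helpers):

```shell
#!/usr/bin/env bash
# Sketch of the is_block_zoned check: a device is zoned when its
# queue/zoned attribute reads anything other than "none". A fake
# /sys/block tree stands in for the real one so this runs unprivileged.
blkroot=$(mktemp -d)
mkdir -p "$blkroot"/nvme0n1/queue "$blkroot"/nvme1n1/queue
echo none > "$blkroot/nvme0n1/queue/zoned"
echo host-managed > "$blkroot/nvme1n1/queue/zoned"

is_block_zoned_sketch() {
    local device=$1
    [[ $(<"$blkroot/$device/queue/zoned") != none ]]
}

for dev in nvme0n1 nvme1n1; do
    is_block_zoned_sketch "$dev" && echo "$dev is zoned"   # only nvme1n1 prints
done
```

In the run above nvme0n1 reports "none", so `zoned_devs` stays empty and the disk proceeds to the GPT and size checks.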
00:05:02.287 10:47:08 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:02.287 10:47:08 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:05:02.287 10:47:08 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:05:02.287 10:47:08 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:02.287 10:47:08 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:02.287 10:47:08 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:02.287 No valid GPT data, bailing 00:05:02.287 10:47:08 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:02.287 10:47:08 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:02.287 10:47:08 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:02.288 10:47:08 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:02.288 10:47:08 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:02.288 10:47:08 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:02.288 10:47:08 setup.sh.devices -- setup/common.sh@80 -- # echo 2000398934016 00:05:02.288 10:47:08 setup.sh.devices -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:05:02.288 10:47:08 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:02.288 10:47:08 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:05:02.288 10:47:08 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:02.288 10:47:08 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:02.288 10:47:08 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:02.288 10:47:08 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.288 10:47:08 setup.sh.devices -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.288 10:47:08 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:02.288 ************************************ 00:05:02.288 START TEST nvme_mount 00:05:02.288 ************************************ 00:05:02.288 10:47:08 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:05:02.288 10:47:08 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:02.288 10:47:08 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:02.288 10:47:08 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:02.288 10:47:08 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:02.288 10:47:08 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:02.288 10:47:08 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:02.288 10:47:08 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:02.288 10:47:08 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:02.288 10:47:08 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:02.288 10:47:08 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:02.288 10:47:08 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:02.288 10:47:08 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:02.288 10:47:08 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:02.288 10:47:08 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:02.288 10:47:08 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:02.288 10:47:08 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:02.288 10:47:08 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:02.288 10:47:08 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:02.288 10:47:08 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:03.221 Creating new GPT entries in memory. 00:05:03.221 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:03.221 other utilities. 00:05:03.221 10:47:09 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:03.221 10:47:09 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:03.221 10:47:09 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:03.221 10:47:09 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:03.221 10:47:09 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:04.155 Creating new GPT entries in memory. 00:05:04.155 The operation has completed successfully. 
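The partition bounds passed to sgdisk above (`--new=1:2048:2099199`) follow from the sector arithmetic traced in setup/common.sh: the 1 GiB size is divided by the 512-byte sector size, the first partition starts at sector 2048, and the end sector is start + size - 1. A minimal standalone sketch of that arithmetic (variable names mirror the xtrace output; no real disk is touched):

```shell
#!/usr/bin/env bash
# Reproduce the partition-boundary math from setup/common.sh's xtrace:
# (( size /= 512 )), part_start = 2048 for the first partition,
# (( part_end = part_start + size - 1 )).
size=1073741824                 # 1 GiB partition size in bytes
(( size /= 512 ))               # convert to 512-byte sectors -> 2097152
part_start=2048                 # first usable data sector on a GPT disk
(( part_end = part_start + size - 1 ))
echo "--new=1:${part_start}:${part_end}"   # -> --new=1:2048:2099199
```

This matches the `flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199` call recorded in the log.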
00:05:04.155 10:47:11 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:04.155 10:47:11 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:04.155 10:47:11 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3441024 00:05:04.155 10:47:11 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:04.155 10:47:11 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:04.155 10:47:11 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:04.155 10:47:11 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:04.155 10:47:11 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:04.155 10:47:11 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:04.155 10:47:11 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:04.155 10:47:11 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:04.155 10:47:11 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:04.155 10:47:11 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:04.155 10:47:11 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:04.155 
10:47:11 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:04.155 10:47:11 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:04.155 10:47:11 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:04.155 10:47:11 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:04.155 10:47:11 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.155 10:47:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:04.156 10:47:11 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:04.156 10:47:11 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:04.156 10:47:11 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:05:08.382 10:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.382 10:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.382 10:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.383 10:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.383 10:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.383 10:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.383 10:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.383 10:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.383 10:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 
00:05:08.383 10:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.383 10:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.383 10:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.383 10:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.383 10:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.383 10:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.383 10:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.383 10:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.383 10:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.383 10:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.383 10:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.383 10:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.383 10:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.383 10:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.383 10:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.383 10:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.383 10:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.383 10:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.383 10:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.383 10:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.383 10:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.383 10:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.383 10:47:14 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.383 10:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:08.383 10:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:08.383 10:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:08.383 10:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.383 10:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:08.383 10:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:08.383 10:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:08.383 10:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:08.383 10:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:08.383 10:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:08.383 10:47:15 
setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:08.383 10:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:08.383 10:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:08.383 10:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:08.383 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:08.383 10:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:08.383 10:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:08.383 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:08.383 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:05:08.383 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:08.383 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:08.383 10:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:08.383 10:47:15 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:08.383 10:47:15 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:08.383 10:47:15 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:08.383 10:47:15 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:08.642 10:47:15 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:08.642 10:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:08.642 10:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:08.642 10:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:08.642 10:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:08.642 10:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:08.642 10:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:08.642 10:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:08.642 10:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:08.642 10:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:08.642 10:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.642 10:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:08.642 10:47:15 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:08.642 10:47:15 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:08.642 10:47:15 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:05:12.827 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 
00:05:12.827 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.827 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.827 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.827 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.827 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.827 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.827 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.827 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.827 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.827 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.827 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.827 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.827 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.827 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.827 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.827 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.827 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.827 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.827 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.827 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.827 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.827 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.827 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.827 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.827 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.827 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.827 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.828 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.828 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.828 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.828 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.828 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:12.828 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:12.828 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:12.828 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:05:12.828 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:12.828 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:12.828 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:12.828 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:12.828 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:12.828 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:12.828 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:05:12.828 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:12.828 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:12.828 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:12.828 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:12.828 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:12.828 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:12.828 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:12.828 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.828 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:12.828 10:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # 
setup output config 00:05:12.828 10:47:19 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:12.828 10:47:19 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:17.012 10:47:23 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:17.012 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:17.012 00:05:17.012 real 0m14.952s 00:05:17.012 user 0m4.485s 00:05:17.012 sys 0m8.461s 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.012 10:47:23 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:17.012 ************************************ 00:05:17.012 END TEST nvme_mount 00:05:17.012 ************************************ 00:05:17.012 10:47:23 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:17.012 10:47:23 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:17.012 10:47:23 setup.sh.devices -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:05:17.012 10:47:23 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:17.012 ************************************ 00:05:17.012 START TEST dm_mount 00:05:17.012 ************************************ 00:05:17.012 10:47:23 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:05:17.012 10:47:23 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:17.012 10:47:23 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:17.012 10:47:23 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:17.012 10:47:23 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:17.012 10:47:23 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:17.012 10:47:23 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:17.012 10:47:23 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:17.012 10:47:23 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:17.012 10:47:23 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:17.012 10:47:23 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:17.012 10:47:23 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:17.012 10:47:23 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:17.012 10:47:23 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:17.012 10:47:23 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:17.012 10:47:23 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:17.012 10:47:23 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:17.012 10:47:23 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:17.012 10:47:24 
setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:17.012 10:47:24 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:17.012 10:47:24 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:17.012 10:47:24 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:17.946 Creating new GPT entries in memory. 00:05:17.946 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:17.946 other utilities. 00:05:17.946 10:47:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:17.946 10:47:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:17.946 10:47:25 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:17.946 10:47:25 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:17.946 10:47:25 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:19.326 Creating new GPT entries in memory. 00:05:19.326 The operation has completed successfully. 00:05:19.326 10:47:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:19.326 10:47:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:19.326 10:47:26 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:19.326 10:47:26 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:19.326 10:47:26 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:20.262 The operation has completed successfully. 
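For the two-partition dm_mount case, the same loop derives the second partition's bounds from the first: `part_start` becomes the previous `part_end + 1`, and the end is again start + size - 1. A hedged sketch of that second iteration (mirroring the xtrace variable names; touches no device):

```shell
#!/usr/bin/env bash
# Second iteration of the partition loop: the next partition starts
# one sector past the previous partition's end.
size=$(( 1073741824 / 512 ))   # 1 GiB in 512-byte sectors: 2097152
prev_end=2099199               # end of partition 1 (--new=1:2048:2099199)
(( part_start = prev_end + 1 ))
(( part_end = part_start + size - 1 ))
echo "--new=2:${part_start}:${part_end}"   # -> --new=2:2099200:4196351
```

This matches the second `sgdisk /dev/nvme0n1 --new=2:2099200:4196351` call recorded in the log.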
00:05:20.262 10:47:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:20.262 10:47:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:20.262 10:47:27 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3446378 00:05:20.262 10:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:20.262 10:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:20.262 10:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:20.262 10:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:20.262 10:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:20.262 10:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:20.262 10:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:20.262 10:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:20.262 10:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:20.262 10:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:05:20.262 10:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:05:20.262 10:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:05:20.262 10:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:05:20.262 10:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:20.262 10:47:27 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local 
dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount size= 00:05:20.262 10:47:27 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:20.262 10:47:27 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:20.262 10:47:27 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:20.262 10:47:27 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:20.262 10:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:20.262 10:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:20.262 10:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:20.262 10:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:20.262 10:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:20.262 10:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:20.262 10:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:20.262 10:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:20.262 10:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:20.262 10:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.262 
10:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:20.263 10:47:27 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:20.263 10:47:27 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.263 10:47:27 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:05:24.523 10:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:24.523 10:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.523 10:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:24.523 10:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.523 10:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:24.523 10:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.523 10:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:24.523 10:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.523 10:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:24.523 10:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.523 10:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:24.523 10:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.523 10:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:24.523 10:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.523 
10:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:24.523 10:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.523 10:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:24.523 10:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.523 10:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:24.523 10:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.523 10:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:24.523 10:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.523 10:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:24.523 10:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.523 10:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:24.523 10:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.523 10:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:24.523 10:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.523 10:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:24.523 10:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.523 10:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:24.523 10:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.523 
10:47:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:24.523 10:47:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:24.523 10:47:31 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:24.523 10:47:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.523 10:47:31 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:24.523 10:47:31 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:24.523 10:47:31 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:24.523 10:47:31 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:24.523 10:47:31 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:24.523 10:47:31 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:24.523 10:47:31 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:05:24.523 10:47:31 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:24.523 10:47:31 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:05:24.523 10:47:31 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:24.523 10:47:31 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local 
test_file= 00:05:24.523 10:47:31 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:24.523 10:47:31 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:24.523 10:47:31 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:24.523 10:47:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:24.523 10:47:31 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:24.523 10:47:31 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:24.523 10:47:31 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:24.523 10:47:31 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:28.725 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 
(ext4): 53 ef 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:28.725 00:05:28.725 real 0m11.373s 00:05:28.725 user 0m2.918s 00:05:28.725 sys 0m5.574s 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.725 10:47:35 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:28.725 ************************************ 00:05:28.725 END TEST dm_mount 00:05:28.725 ************************************ 00:05:28.725 10:47:35 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:28.725 10:47:35 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:28.725 10:47:35 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:05:28.725 10:47:35 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:28.725 10:47:35 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:28.725 10:47:35 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:28.725 10:47:35 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:28.725 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:28.725 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:05:28.725 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:28.725 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:28.725 10:47:35 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:28.725 10:47:35 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:05:28.725 10:47:35 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 
00:05:28.725 10:47:35 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:28.725 10:47:35 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:28.725 10:47:35 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:28.725 10:47:35 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:28.725 00:05:28.725 real 0m31.448s 00:05:28.725 user 0m9.155s 00:05:28.725 sys 0m17.342s 00:05:28.725 10:47:35 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.725 10:47:35 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:28.725 ************************************ 00:05:28.725 END TEST devices 00:05:28.725 ************************************ 00:05:28.725 00:05:28.725 real 1m50.457s 00:05:28.725 user 0m33.923s 00:05:28.725 sys 1m4.351s 00:05:28.725 10:47:35 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.725 10:47:35 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:28.725 ************************************ 00:05:28.725 END TEST setup.sh 00:05:28.725 ************************************ 00:05:28.725 10:47:35 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status 00:05:32.969 Hugepages 00:05:32.969 node hugesize free / total 00:05:32.969 node0 1048576kB 0 / 0 00:05:32.969 node0 2048kB 1024 / 1024 00:05:32.969 node1 1048576kB 0 / 0 00:05:32.969 node1 2048kB 1024 / 1024 00:05:32.969 00:05:32.969 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:32.969 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:32.969 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:32.969 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:32.969 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:32.969 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:32.969 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:32.969 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:32.969 I/OAT 0000:00:04.7 8086 2021 
0 ioatdma - - 00:05:32.969 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:32.969 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:32.969 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:32.969 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:32.969 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:32.969 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:32.969 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:32.969 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:32.969 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:33.227 10:47:40 -- spdk/autotest.sh@130 -- # uname -s 00:05:33.227 10:47:40 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:33.227 10:47:40 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:33.227 10:47:40 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:05:37.409 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:37.409 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:37.409 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:37.409 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:37.409 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:37.409 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:37.409 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:37.409 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:37.409 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:37.409 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:37.409 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:37.409 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:37.409 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:37.409 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:37.409 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:37.409 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:39.308 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:39.308 10:47:46 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:40.290 10:47:47 -- common/autotest_common.sh@1533 -- # bdfs=() 
00:05:40.290 10:47:47 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:40.290 10:47:47 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:40.290 10:47:47 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:40.290 10:47:47 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:40.290 10:47:47 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:40.290 10:47:47 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:40.290 10:47:47 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:40.290 10:47:47 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:40.290 10:47:47 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:40.290 10:47:47 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:05:40.290 10:47:47 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:05:44.468 Waiting for block devices as requested 00:05:44.468 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:44.468 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:44.468 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:44.725 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:44.725 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:44.725 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:44.983 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:44.983 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:44.983 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:45.242 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:45.242 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:45.242 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:45.501 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:45.501 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:45.501 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:45.758 0000:80:04.0 (8086 
2021): vfio-pci -> ioatdma 00:05:45.759 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:05:46.017 10:47:52 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:46.017 10:47:52 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:05:46.017 10:47:52 -- common/autotest_common.sh@1502 -- # grep 0000:d8:00.0/nvme/nvme 00:05:46.017 10:47:52 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:46.017 10:47:52 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:46.017 10:47:52 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:05:46.017 10:47:52 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:46.017 10:47:52 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:46.017 10:47:52 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:46.017 10:47:52 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:46.017 10:47:52 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:46.017 10:47:52 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:46.017 10:47:52 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:46.017 10:47:52 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:05:46.017 10:47:52 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:46.017 10:47:52 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:46.017 10:47:52 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:46.017 10:47:52 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:46.017 10:47:52 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:46.017 10:47:52 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:46.017 10:47:52 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:46.017 10:47:52 -- common/autotest_common.sh@1557 -- # continue 
00:05:46.017 10:47:52 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:46.017 10:47:52 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:46.017 10:47:52 -- common/autotest_common.sh@10 -- # set +x 00:05:46.017 10:47:53 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:46.017 10:47:53 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:46.017 10:47:53 -- common/autotest_common.sh@10 -- # set +x 00:05:46.017 10:47:53 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:05:50.204 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:50.204 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:50.204 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:50.204 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:50.204 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:50.204 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:50.204 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:50.204 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:50.204 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:50.204 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:50.204 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:50.204 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:50.205 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:50.205 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:50.205 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:50.205 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:52.110 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:52.110 10:47:59 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:52.110 10:47:59 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:52.110 10:47:59 -- common/autotest_common.sh@10 -- # set +x 00:05:52.400 10:47:59 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:52.400 10:47:59 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:52.400 10:47:59 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 
00:05:52.400 10:47:59 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:52.400 10:47:59 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:52.400 10:47:59 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:52.400 10:47:59 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:52.400 10:47:59 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:52.400 10:47:59 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:52.400 10:47:59 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:52.400 10:47:59 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:52.400 10:47:59 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:52.400 10:47:59 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:05:52.400 10:47:59 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:52.400 10:47:59 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:05:52.400 10:47:59 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:52.400 10:47:59 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:52.400 10:47:59 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:52.400 10:47:59 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:d8:00.0 00:05:52.400 10:47:59 -- common/autotest_common.sh@1592 -- # [[ -z 0000:d8:00.0 ]] 00:05:52.400 10:47:59 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=3457979 00:05:52.400 10:47:59 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.400 10:47:59 -- common/autotest_common.sh@1598 -- # waitforlisten 3457979 00:05:52.400 10:47:59 -- common/autotest_common.sh@831 -- # '[' -z 3457979 ']' 00:05:52.401 10:47:59 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.401 10:47:59 -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:05:52.401 10:47:59 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.401 10:47:59 -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:52.401 10:47:59 -- common/autotest_common.sh@10 -- # set +x 00:05:52.401 [2024-07-25 10:47:59.513091] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:52.401 [2024-07-25 10:47:59.513220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3457979 ] 00:05:52.659 [2024-07-25 10:47:59.720321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.918 [2024-07-25 10:47:59.997948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.294 10:48:01 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:54.294 10:48:01 -- common/autotest_common.sh@864 -- # return 0 00:05:54.294 10:48:01 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:54.294 10:48:01 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:54.294 10:48:01 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:05:57.577 nvme0n1 00:05:57.577 10:48:04 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:57.577 [2024-07-25 10:48:04.521488] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:57.577 request: 00:05:57.577 { 00:05:57.577 "nvme_ctrlr_name": "nvme0", 00:05:57.577 "password": "test", 00:05:57.577 
"method": "bdev_nvme_opal_revert", 00:05:57.577 "req_id": 1 00:05:57.577 } 00:05:57.577 Got JSON-RPC error response 00:05:57.577 response: 00:05:57.577 { 00:05:57.577 "code": -32602, 00:05:57.577 "message": "Invalid parameters" 00:05:57.577 } 00:05:57.577 10:48:04 -- common/autotest_common.sh@1604 -- # true 00:05:57.577 10:48:04 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:57.577 10:48:04 -- common/autotest_common.sh@1608 -- # killprocess 3457979 00:05:57.577 10:48:04 -- common/autotest_common.sh@950 -- # '[' -z 3457979 ']' 00:05:57.577 10:48:04 -- common/autotest_common.sh@954 -- # kill -0 3457979 00:05:57.577 10:48:04 -- common/autotest_common.sh@955 -- # uname 00:05:57.577 10:48:04 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:57.577 10:48:04 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3457979 00:05:57.577 10:48:04 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:57.577 10:48:04 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:57.577 10:48:04 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3457979' 00:05:57.577 killing process with pid 3457979 00:05:57.577 10:48:04 -- common/autotest_common.sh@969 -- # kill 3457979 00:05:57.577 10:48:04 -- common/autotest_common.sh@974 -- # wait 3457979 00:06:02.842 10:48:09 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:02.842 10:48:09 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:02.842 10:48:09 -- spdk/autotest.sh@155 -- # [[ 1 -eq 1 ]] 00:06:02.842 10:48:09 -- spdk/autotest.sh@156 -- # [[ 0 -eq 1 ]] 00:06:02.842 10:48:09 -- spdk/autotest.sh@159 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/qat_setup.sh 00:06:03.409 Restarting all devices. 
00:06:09.973 lstat() error: No such file or directory 00:06:09.973 QAT Error: No GENERAL section found 00:06:09.973 Failed to configure qat_dev0 00:06:09.973 lstat() error: No such file or directory 00:06:09.973 QAT Error: No GENERAL section found 00:06:09.973 Failed to configure qat_dev1 00:06:09.973 lstat() error: No such file or directory 00:06:09.973 QAT Error: No GENERAL section found 00:06:09.973 Failed to configure qat_dev2 00:06:09.973 lstat() error: No such file or directory 00:06:09.973 QAT Error: No GENERAL section found 00:06:09.973 Failed to configure qat_dev3 00:06:09.973 lstat() error: No such file or directory 00:06:09.973 QAT Error: No GENERAL section found 00:06:09.973 Failed to configure qat_dev4 00:06:09.973 enable sriov 00:06:09.973 Checking status of all devices. 00:06:09.973 There is 5 QAT acceleration device(s) in the system: 00:06:09.973 qat_dev0 - type: c6xx, inst_id: 0, node_id: 0, bsf: 0000:1a:00.0, #accel: 5 #engines: 10 state: down 00:06:09.973 qat_dev1 - type: c6xx, inst_id: 1, node_id: 0, bsf: 0000:1c:00.0, #accel: 5 #engines: 10 state: down 00:06:09.973 qat_dev2 - type: c6xx, inst_id: 2, node_id: 0, bsf: 0000:1e:00.0, #accel: 5 #engines: 10 state: down 00:06:09.973 qat_dev3 - type: c6xx, inst_id: 3, node_id: 0, bsf: 0000:3d:00.0, #accel: 5 #engines: 10 state: down 00:06:09.973 qat_dev4 - type: c6xx, inst_id: 4, node_id: 0, bsf: 0000:3f:00.0, #accel: 5 #engines: 10 state: down 00:06:09.973 0000:1a:00.0 set to 16 VFs 00:06:10.906 0000:1c:00.0 set to 16 VFs 00:06:11.839 0000:1e:00.0 set to 16 VFs 00:06:12.404 0000:3d:00.0 set to 16 VFs 00:06:13.336 0000:3f:00.0 set to 16 VFs 00:06:15.926 Properly configured the qat device with driver uio_pci_generic. 
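The "set to 16 VFs" lines above record qat_setup.sh enabling SR-IOV on each of the five QAT physical functions. On Linux this is commonly done by writing the VF count to the device's `sriov_numvfs` sysfs attribute; a dry-run sketch over the BDFs from the log (the write itself is shown only in a comment, since it needs root and real hardware, and may not be qat_setup.sh's exact method):

```shell
#!/bin/sh
# Dry-run sketch of enabling SR-IOV VFs via the standard sysfs interface.
num_vfs=16
for bdf in 0000:1a:00.0 0000:1c:00.0 0000:1e:00.0 0000:3d:00.0 0000:3f:00.0; do
    path="/sys/bus/pci/devices/$bdf/sriov_numvfs"
    # On a real host, as root: echo "$num_vfs" > "$path"
    echo "$bdf -> would write $num_vfs to $path"
done
```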
00:06:15.926 10:48:22 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:15.926 10:48:22 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:15.926 10:48:22 -- common/autotest_common.sh@10 -- # set +x 00:06:15.926 10:48:22 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:06:15.926 10:48:22 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env.sh 00:06:15.926 10:48:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.926 10:48:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.926 10:48:22 -- common/autotest_common.sh@10 -- # set +x 00:06:15.926 ************************************ 00:06:15.926 START TEST env 00:06:15.926 ************************************ 00:06:15.926 10:48:22 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env.sh 00:06:15.926 * Looking for test storage... 00:06:15.926 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env 00:06:15.926 10:48:22 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/memory/memory_ut 00:06:15.926 10:48:22 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.926 10:48:22 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.926 10:48:22 env -- common/autotest_common.sh@10 -- # set +x 00:06:15.926 ************************************ 00:06:15.926 START TEST env_memory 00:06:15.926 ************************************ 00:06:15.926 10:48:22 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/memory/memory_ut 00:06:15.926 00:06:15.926 00:06:15.926 CUnit - A unit testing framework for C - Version 2.1-3 00:06:15.926 http://cunit.sourceforge.net/ 00:06:15.926 00:06:15.926 00:06:15.926 Suite: memory 00:06:15.926 Test: alloc and free memory map ...[2024-07-25 10:48:22.887585] 
/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:15.926 passed 00:06:15.926 Test: mem map translation ...[2024-07-25 10:48:22.941269] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:15.926 [2024-07-25 10:48:22.941310] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:15.926 [2024-07-25 10:48:22.941393] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:15.926 [2024-07-25 10:48:22.941421] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:15.926 passed 00:06:15.926 Test: mem map registration ...[2024-07-25 10:48:23.025089] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:15.926 [2024-07-25 10:48:23.025123] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:16.185 passed 00:06:16.185 Test: mem map adjacent registrations ...passed 00:06:16.185 00:06:16.185 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.185 suites 1 1 n/a 0 0 00:06:16.185 tests 4 4 4 0 0 00:06:16.185 asserts 152 152 152 0 n/a 00:06:16.185 00:06:16.185 Elapsed time = 0.297 seconds 00:06:16.185 00:06:16.185 real 0m0.338s 00:06:16.185 user 0m0.300s 00:06:16.185 sys 0m0.037s 00:06:16.185 10:48:23 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 
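The "mem map translation" test above deliberately passes misaligned vaddr/len pairs (vaddr=2097152 len=1234, vaddr=1234 len=2097152) and expects spdk_mem_map_set_translation to reject them, since translations are tracked at 2 MiB hugepage granularity. A sketch of that alignment check in shell arithmetic, reproducing the pass/fail pattern of the logged parameters:

```shell
#!/bin/sh
# Both the address and the length must be multiples of 2 MiB.
align=$((2 * 1024 * 1024))   # 2 MiB hugepage granularity

is_aligned() {
    [ $(( $1 % align )) -eq 0 ] && [ $(( $2 % align )) -eq 0 ]
}

for pair in "2097152 1234" "1234 2097152" "2097152 2097152"; do
    set -- $pair
    if is_aligned "$1" "$2"; then
        echo "vaddr=$1 len=$2: ok"
    else
        echo "vaddr=$1 len=$2: invalid"
    fi
done
```

Only the last pair is accepted; the first two mirror the "invalid spdk_mem_map_set_translation parameters" errors in the log.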
00:06:16.185 10:48:23 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:16.185 ************************************ 00:06:16.185 END TEST env_memory 00:06:16.185 ************************************ 00:06:16.185 10:48:23 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:16.185 10:48:23 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:16.185 10:48:23 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.185 10:48:23 env -- common/autotest_common.sh@10 -- # set +x 00:06:16.185 ************************************ 00:06:16.185 START TEST env_vtophys 00:06:16.185 ************************************ 00:06:16.185 10:48:23 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:16.185 EAL: lib.eal log level changed from notice to debug 00:06:16.185 EAL: Detected lcore 0 as core 0 on socket 0 00:06:16.185 EAL: Detected lcore 1 as core 1 on socket 0 00:06:16.185 EAL: Detected lcore 2 as core 2 on socket 0 00:06:16.185 EAL: Detected lcore 3 as core 3 on socket 0 00:06:16.185 EAL: Detected lcore 4 as core 4 on socket 0 00:06:16.185 EAL: Detected lcore 5 as core 5 on socket 0 00:06:16.185 EAL: Detected lcore 6 as core 6 on socket 0 00:06:16.185 EAL: Detected lcore 7 as core 8 on socket 0 00:06:16.185 EAL: Detected lcore 8 as core 9 on socket 0 00:06:16.185 EAL: Detected lcore 9 as core 10 on socket 0 00:06:16.185 EAL: Detected lcore 10 as core 11 on socket 0 00:06:16.185 EAL: Detected lcore 11 as core 12 on socket 0 00:06:16.185 EAL: Detected lcore 12 as core 13 on socket 0 00:06:16.185 EAL: Detected lcore 13 as core 14 on socket 0 00:06:16.185 EAL: Detected lcore 14 as core 16 on socket 0 00:06:16.185 EAL: Detected lcore 15 as core 17 on socket 0 00:06:16.185 EAL: Detected lcore 16 as core 18 on socket 0 00:06:16.185 EAL: Detected lcore 17 as core 19 on socket 0 00:06:16.185 EAL: 
Detected lcore 18 as core 20 on socket 0 00:06:16.185 EAL: Detected lcore 19 as core 21 on socket 0 00:06:16.185 EAL: Detected lcore 20 as core 22 on socket 0 00:06:16.185 EAL: Detected lcore 21 as core 24 on socket 0 00:06:16.185 EAL: Detected lcore 22 as core 25 on socket 0 00:06:16.185 EAL: Detected lcore 23 as core 26 on socket 0 00:06:16.185 EAL: Detected lcore 24 as core 27 on socket 0 00:06:16.185 EAL: Detected lcore 25 as core 28 on socket 0 00:06:16.185 EAL: Detected lcore 26 as core 29 on socket 0 00:06:16.185 EAL: Detected lcore 27 as core 30 on socket 0 00:06:16.185 EAL: Detected lcore 28 as core 0 on socket 1 00:06:16.185 EAL: Detected lcore 29 as core 1 on socket 1 00:06:16.185 EAL: Detected lcore 30 as core 2 on socket 1 00:06:16.185 EAL: Detected lcore 31 as core 3 on socket 1 00:06:16.185 EAL: Detected lcore 32 as core 4 on socket 1 00:06:16.185 EAL: Detected lcore 33 as core 5 on socket 1 00:06:16.185 EAL: Detected lcore 34 as core 6 on socket 1 00:06:16.185 EAL: Detected lcore 35 as core 8 on socket 1 00:06:16.185 EAL: Detected lcore 36 as core 9 on socket 1 00:06:16.185 EAL: Detected lcore 37 as core 10 on socket 1 00:06:16.185 EAL: Detected lcore 38 as core 11 on socket 1 00:06:16.185 EAL: Detected lcore 39 as core 12 on socket 1 00:06:16.185 EAL: Detected lcore 40 as core 13 on socket 1 00:06:16.185 EAL: Detected lcore 41 as core 14 on socket 1 00:06:16.185 EAL: Detected lcore 42 as core 16 on socket 1 00:06:16.185 EAL: Detected lcore 43 as core 17 on socket 1 00:06:16.185 EAL: Detected lcore 44 as core 18 on socket 1 00:06:16.185 EAL: Detected lcore 45 as core 19 on socket 1 00:06:16.185 EAL: Detected lcore 46 as core 20 on socket 1 00:06:16.185 EAL: Detected lcore 47 as core 21 on socket 1 00:06:16.185 EAL: Detected lcore 48 as core 22 on socket 1 00:06:16.185 EAL: Detected lcore 49 as core 24 on socket 1 00:06:16.185 EAL: Detected lcore 50 as core 25 on socket 1 00:06:16.185 EAL: Detected lcore 51 as core 26 on socket 1 00:06:16.185 EAL: 
Detected lcore 52 as core 27 on socket 1 00:06:16.185 EAL: Detected lcore 53 as core 28 on socket 1 00:06:16.185 EAL: Detected lcore 54 as core 29 on socket 1 00:06:16.185 EAL: Detected lcore 55 as core 30 on socket 1 00:06:16.185 EAL: Detected lcore 56 as core 0 on socket 0 00:06:16.185 EAL: Detected lcore 57 as core 1 on socket 0 00:06:16.185 EAL: Detected lcore 58 as core 2 on socket 0 00:06:16.185 EAL: Detected lcore 59 as core 3 on socket 0 00:06:16.185 EAL: Detected lcore 60 as core 4 on socket 0 00:06:16.185 EAL: Detected lcore 61 as core 5 on socket 0 00:06:16.185 EAL: Detected lcore 62 as core 6 on socket 0 00:06:16.185 EAL: Detected lcore 63 as core 8 on socket 0 00:06:16.185 EAL: Detected lcore 64 as core 9 on socket 0 00:06:16.185 EAL: Detected lcore 65 as core 10 on socket 0 00:06:16.185 EAL: Detected lcore 66 as core 11 on socket 0 00:06:16.185 EAL: Detected lcore 67 as core 12 on socket 0 00:06:16.185 EAL: Detected lcore 68 as core 13 on socket 0 00:06:16.185 EAL: Detected lcore 69 as core 14 on socket 0 00:06:16.185 EAL: Detected lcore 70 as core 16 on socket 0 00:06:16.185 EAL: Detected lcore 71 as core 17 on socket 0 00:06:16.185 EAL: Detected lcore 72 as core 18 on socket 0 00:06:16.185 EAL: Detected lcore 73 as core 19 on socket 0 00:06:16.185 EAL: Detected lcore 74 as core 20 on socket 0 00:06:16.185 EAL: Detected lcore 75 as core 21 on socket 0 00:06:16.185 EAL: Detected lcore 76 as core 22 on socket 0 00:06:16.185 EAL: Detected lcore 77 as core 24 on socket 0 00:06:16.185 EAL: Detected lcore 78 as core 25 on socket 0 00:06:16.185 EAL: Detected lcore 79 as core 26 on socket 0 00:06:16.185 EAL: Detected lcore 80 as core 27 on socket 0 00:06:16.185 EAL: Detected lcore 81 as core 28 on socket 0 00:06:16.185 EAL: Detected lcore 82 as core 29 on socket 0 00:06:16.185 EAL: Detected lcore 83 as core 30 on socket 0 00:06:16.185 EAL: Detected lcore 84 as core 0 on socket 1 00:06:16.185 EAL: Detected lcore 85 as core 1 on socket 1 00:06:16.185 EAL: 
Detected lcore 86 as core 2 on socket 1 00:06:16.185 EAL: Detected lcore 87 as core 3 on socket 1 00:06:16.185 EAL: Detected lcore 88 as core 4 on socket 1 00:06:16.185 EAL: Detected lcore 89 as core 5 on socket 1 00:06:16.185 EAL: Detected lcore 90 as core 6 on socket 1 00:06:16.185 EAL: Detected lcore 91 as core 8 on socket 1 00:06:16.185 EAL: Detected lcore 92 as core 9 on socket 1 00:06:16.185 EAL: Detected lcore 93 as core 10 on socket 1 00:06:16.185 EAL: Detected lcore 94 as core 11 on socket 1 00:06:16.185 EAL: Detected lcore 95 as core 12 on socket 1 00:06:16.185 EAL: Detected lcore 96 as core 13 on socket 1 00:06:16.185 EAL: Detected lcore 97 as core 14 on socket 1 00:06:16.185 EAL: Detected lcore 98 as core 16 on socket 1 00:06:16.185 EAL: Detected lcore 99 as core 17 on socket 1 00:06:16.185 EAL: Detected lcore 100 as core 18 on socket 1 00:06:16.185 EAL: Detected lcore 101 as core 19 on socket 1 00:06:16.185 EAL: Detected lcore 102 as core 20 on socket 1 00:06:16.185 EAL: Detected lcore 103 as core 21 on socket 1 00:06:16.185 EAL: Detected lcore 104 as core 22 on socket 1 00:06:16.185 EAL: Detected lcore 105 as core 24 on socket 1 00:06:16.185 EAL: Detected lcore 106 as core 25 on socket 1 00:06:16.185 EAL: Detected lcore 107 as core 26 on socket 1 00:06:16.185 EAL: Detected lcore 108 as core 27 on socket 1 00:06:16.185 EAL: Detected lcore 109 as core 28 on socket 1 00:06:16.185 EAL: Detected lcore 110 as core 29 on socket 1 00:06:16.185 EAL: Detected lcore 111 as core 30 on socket 1 00:06:16.444 EAL: Maximum logical cores by configuration: 128 00:06:16.444 EAL: Detected CPU lcores: 112 00:06:16.444 EAL: Detected NUMA nodes: 2 00:06:16.444 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:16.444 EAL: Detected shared linkage of DPDK 00:06:16.444 EAL: No shared files mode enabled, IPC will be disabled 00:06:16.444 EAL: No shared files mode enabled, IPC is disabled 00:06:16.444 EAL: PCI driver qat for device 0000:1a:01.0 wants IOVA as 'PA' 
00:06:16.444 EAL: PCI driver qat for device 0000:1a:01.1 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1a:01.2 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1a:01.3 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1a:01.4 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1a:01.5 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1a:01.6 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1a:01.7 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1a:02.0 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1a:02.1 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1a:02.2 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1a:02.3 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1a:02.4 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1a:02.5 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1a:02.6 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1a:02.7 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1c:01.0 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1c:01.1 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1c:01.2 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1c:01.3 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1c:01.4 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1c:01.5 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1c:01.6 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1c:01.7 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1c:02.0 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1c:02.1 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1c:02.2 wants IOVA as 'PA' 00:06:16.444 EAL: PCI 
driver qat for device 0000:1c:02.3 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1c:02.4 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1c:02.5 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1c:02.6 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1c:02.7 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1e:01.0 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1e:01.1 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1e:01.2 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1e:01.3 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1e:01.4 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1e:01.5 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1e:01.6 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1e:01.7 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1e:02.0 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1e:02.1 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1e:02.2 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1e:02.3 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1e:02.4 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1e:02.5 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1e:02.6 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:1e:02.7 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:3d:01.0 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:3d:01.1 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:3d:01.2 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:3d:01.3 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:3d:01.4 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 
0000:3d:01.5 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:3d:01.6 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:3d:01.7 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:3d:02.0 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:3d:02.1 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:3d:02.2 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:3d:02.3 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:3d:02.4 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:3d:02.5 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:3d:02.6 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:3d:02.7 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:3f:01.0 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:3f:01.1 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:3f:01.2 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:3f:01.3 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:3f:01.4 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:3f:01.5 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:3f:01.6 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:3f:01.7 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:3f:02.0 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:3f:02.1 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:3f:02.2 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:3f:02.3 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:3f:02.4 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:3f:02.5 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:3f:02.6 wants IOVA as 'PA' 00:06:16.444 EAL: PCI driver qat for device 0000:3f:02.7 wants IOVA 
as 'PA' 00:06:16.444 EAL: Bus pci wants IOVA as 'PA' 00:06:16.444 EAL: Bus auxiliary wants IOVA as 'DC' 00:06:16.444 EAL: Bus vdev wants IOVA as 'DC' 00:06:16.444 EAL: Selected IOVA mode 'PA' 00:06:16.444 EAL: Probing VFIO support... 00:06:16.444 EAL: IOMMU type 1 (Type 1) is supported 00:06:16.444 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:16.444 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:16.444 EAL: VFIO support initialized 00:06:16.444 EAL: Ask a virtual area of 0x2e000 bytes 00:06:16.444 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:16.444 EAL: Setting up physically contiguous memory... 00:06:16.444 EAL: Setting maximum number of open files to 524288 00:06:16.444 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:16.444 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:16.444 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:16.444 EAL: Ask a virtual area of 0x61000 bytes 00:06:16.444 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:16.444 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:16.444 EAL: Ask a virtual area of 0x400000000 bytes 00:06:16.444 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:16.444 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:16.444 EAL: Ask a virtual area of 0x61000 bytes 00:06:16.444 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:16.444 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:16.444 EAL: Ask a virtual area of 0x400000000 bytes 00:06:16.444 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:16.444 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:16.444 EAL: Ask a virtual area of 0x61000 bytes 00:06:16.444 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:16.444 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:16.444 EAL: Ask a virtual area of 0x400000000 
bytes 00:06:16.444 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:16.444 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:16.444 EAL: Ask a virtual area of 0x61000 bytes 00:06:16.444 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:16.444 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:16.444 EAL: Ask a virtual area of 0x400000000 bytes 00:06:16.444 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:16.444 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:16.444 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:16.444 EAL: Ask a virtual area of 0x61000 bytes 00:06:16.444 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:16.444 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:16.444 EAL: Ask a virtual area of 0x400000000 bytes 00:06:16.444 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:16.444 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:16.444 EAL: Ask a virtual area of 0x61000 bytes 00:06:16.444 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:16.444 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:16.444 EAL: Ask a virtual area of 0x400000000 bytes 00:06:16.444 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:16.444 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:16.444 EAL: Ask a virtual area of 0x61000 bytes 00:06:16.444 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:16.444 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:16.444 EAL: Ask a virtual area of 0x400000000 bytes 00:06:16.444 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:16.444 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:16.444 EAL: Ask a virtual area of 0x61000 bytes 00:06:16.444 EAL: Virtual area found at 
0x201c00e00000 (size = 0x61000) 00:06:16.444 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:16.444 EAL: Ask a virtual area of 0x400000000 bytes 00:06:16.444 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:06:16.444 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:16.444 EAL: Hugepages will be freed exactly as allocated. 00:06:16.444 EAL: No shared files mode enabled, IPC is disabled 00:06:16.444 EAL: No shared files mode enabled, IPC is disabled 00:06:16.444 EAL: TSC frequency is ~2500000 KHz 00:06:16.444 EAL: Main lcore 0 is ready (tid=7f996500cb40;cpuset=[0]) 00:06:16.444 EAL: Trying to obtain current memory policy. 00:06:16.444 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:16.444 EAL: Restoring previous memory policy: 0 00:06:16.444 EAL: request: mp_malloc_sync 00:06:16.444 EAL: No shared files mode enabled, IPC is disabled 00:06:16.444 EAL: Heap on socket 0 was expanded by 2MB 00:06:16.444 EAL: PCI device 0000:1a:01.0 on NUMA socket 0 00:06:16.444 EAL: probe driver: 8086:37c9 qat 00:06:16.444 EAL: PCI memory mapped at 0x202001000000 00:06:16.444 EAL: PCI memory mapped at 0x202001001000 00:06:16.444 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.0 (socket 0) 00:06:16.444 EAL: PCI device 0000:1a:01.1 on NUMA socket 0 00:06:16.444 EAL: probe driver: 8086:37c9 qat 00:06:16.444 EAL: PCI memory mapped at 0x202001002000 00:06:16.444 EAL: PCI memory mapped at 0x202001003000 00:06:16.444 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.1 (socket 0) 00:06:16.444 EAL: PCI device 0000:1a:01.2 on NUMA socket 0 00:06:16.444 EAL: probe driver: 8086:37c9 qat 00:06:16.444 EAL: PCI memory mapped at 0x202001004000 00:06:16.444 EAL: PCI memory mapped at 0x202001005000 00:06:16.444 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.2 (socket 0) 00:06:16.444 EAL: PCI device 0000:1a:01.3 on NUMA socket 0 00:06:16.444 EAL: probe driver: 8086:37c9 qat 00:06:16.444 EAL: PCI memory mapped at 
0x202001006000 00:06:16.444 EAL: PCI memory mapped at 0x202001007000 00:06:16.444 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.3 (socket 0) 00:06:16.444 EAL: PCI device 0000:1a:01.4 on NUMA socket 0 00:06:16.444 EAL: probe driver: 8086:37c9 qat 00:06:16.444 EAL: PCI memory mapped at 0x202001008000 00:06:16.444 EAL: PCI memory mapped at 0x202001009000 00:06:16.444 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.4 (socket 0) 00:06:16.444 EAL: PCI device 0000:1a:01.5 on NUMA socket 0 00:06:16.444 EAL: probe driver: 8086:37c9 qat 00:06:16.444 EAL: PCI memory mapped at 0x20200100a000 00:06:16.444 EAL: PCI memory mapped at 0x20200100b000 00:06:16.444 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.5 (socket 0) 00:06:16.444 EAL: PCI device 0000:1a:01.6 on NUMA socket 0 00:06:16.444 EAL: probe driver: 8086:37c9 qat 00:06:16.444 EAL: PCI memory mapped at 0x20200100c000 00:06:16.444 EAL: PCI memory mapped at 0x20200100d000 00:06:16.444 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.6 (socket 0) 00:06:16.444 EAL: PCI device 0000:1a:01.7 on NUMA socket 0 00:06:16.444 EAL: probe driver: 8086:37c9 qat 00:06:16.444 EAL: PCI memory mapped at 0x20200100e000 00:06:16.444 EAL: PCI memory mapped at 0x20200100f000 00:06:16.444 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.7 (socket 0) 00:06:16.444 EAL: PCI device 0000:1a:02.0 on NUMA socket 0 00:06:16.444 EAL: probe driver: 8086:37c9 qat 00:06:16.444 EAL: PCI memory mapped at 0x202001010000 00:06:16.444 EAL: PCI memory mapped at 0x202001011000 00:06:16.444 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.0 (socket 0) 00:06:16.444 EAL: PCI device 0000:1a:02.1 on NUMA socket 0 00:06:16.444 EAL: probe driver: 8086:37c9 qat 00:06:16.444 EAL: PCI memory mapped at 0x202001012000 00:06:16.444 EAL: PCI memory mapped at 0x202001013000 00:06:16.444 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.1 (socket 0) 00:06:16.444 EAL: PCI device 0000:1a:02.2 on NUMA socket 0 
00:06:16.444 EAL: probe driver: 8086:37c9 qat 00:06:16.444 EAL: PCI memory mapped at 0x202001014000 00:06:16.444 EAL: PCI memory mapped at 0x202001015000 00:06:16.444 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.2 (socket 0) 00:06:16.444 EAL: PCI device 0000:1a:02.3 on NUMA socket 0 00:06:16.444 EAL: probe driver: 8086:37c9 qat 00:06:16.444 EAL: PCI memory mapped at 0x202001016000 00:06:16.444 EAL: PCI memory mapped at 0x202001017000 00:06:16.444 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.3 (socket 0) 00:06:16.444 EAL: PCI device 0000:1a:02.4 on NUMA socket 0 00:06:16.444 EAL: probe driver: 8086:37c9 qat 00:06:16.444 EAL: PCI memory mapped at 0x202001018000 00:06:16.444 EAL: PCI memory mapped at 0x202001019000 00:06:16.444 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.4 (socket 0) 00:06:16.444 EAL: PCI device 0000:1a:02.5 on NUMA socket 0 00:06:16.444 EAL: probe driver: 8086:37c9 qat 00:06:16.444 EAL: PCI memory mapped at 0x20200101a000 00:06:16.444 EAL: PCI memory mapped at 0x20200101b000 00:06:16.444 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.5 (socket 0) 00:06:16.444 EAL: PCI device 0000:1a:02.6 on NUMA socket 0 00:06:16.444 EAL: probe driver: 8086:37c9 qat 00:06:16.444 EAL: PCI memory mapped at 0x20200101c000 00:06:16.444 EAL: PCI memory mapped at 0x20200101d000 00:06:16.444 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.6 (socket 0) 00:06:16.444 EAL: PCI device 0000:1a:02.7 on NUMA socket 0 00:06:16.444 EAL: probe driver: 8086:37c9 qat 00:06:16.444 EAL: PCI memory mapped at 0x20200101e000 00:06:16.444 EAL: PCI memory mapped at 0x20200101f000 00:06:16.444 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.7 (socket 0) 00:06:16.444 EAL: PCI device 0000:1c:01.0 on NUMA socket 0 00:06:16.444 EAL: probe driver: 8086:37c9 qat 00:06:16.444 EAL: PCI memory mapped at 0x202001020000 00:06:16.444 EAL: PCI memory mapped at 0x202001021000 00:06:16.444 EAL: Probe PCI driver: qat (8086:37c9) device: 
0000:1c:01.0 (socket 0) 00:06:16.444 EAL: PCI device 0000:1c:01.1 on NUMA socket 0 00:06:16.444 EAL: probe driver: 8086:37c9 qat 00:06:16.444 EAL: PCI memory mapped at 0x202001022000 00:06:16.444 EAL: PCI memory mapped at 0x202001023000 00:06:16.444 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.1 (socket 0) 00:06:16.444 EAL: PCI device 0000:1c:01.2 on NUMA socket 0 00:06:16.444 EAL: probe driver: 8086:37c9 qat 00:06:16.444 EAL: PCI memory mapped at 0x202001024000 00:06:16.444 EAL: PCI memory mapped at 0x202001025000 00:06:16.444 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.2 (socket 0) 00:06:16.444 EAL: PCI device 0000:1c:01.3 on NUMA socket 0 00:06:16.444 EAL: probe driver: 8086:37c9 qat 00:06:16.444 EAL: PCI memory mapped at 0x202001026000 00:06:16.444 EAL: PCI memory mapped at 0x202001027000 00:06:16.444 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.3 (socket 0) 00:06:16.444 EAL: PCI device 0000:1c:01.4 on NUMA socket 0 00:06:16.444 EAL: probe driver: 8086:37c9 qat 00:06:16.444 EAL: PCI memory mapped at 0x202001028000 00:06:16.444 EAL: PCI memory mapped at 0x202001029000 00:06:16.444 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.4 (socket 0) 00:06:16.444 EAL: PCI device 0000:1c:01.5 on NUMA socket 0 00:06:16.444 EAL: probe driver: 8086:37c9 qat 00:06:16.444 EAL: PCI memory mapped at 0x20200102a000 00:06:16.444 EAL: PCI memory mapped at 0x20200102b000 00:06:16.444 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.5 (socket 0) 00:06:16.444 EAL: PCI device 0000:1c:01.6 on NUMA socket 0 00:06:16.444 EAL: probe driver: 8086:37c9 qat 00:06:16.444 EAL: PCI memory mapped at 0x20200102c000 00:06:16.444 EAL: PCI memory mapped at 0x20200102d000 00:06:16.444 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.6 (socket 0) 00:06:16.444 EAL: PCI device 0000:1c:01.7 on NUMA socket 0 00:06:16.444 EAL: probe driver: 8086:37c9 qat 00:06:16.444 EAL: PCI memory mapped at 0x20200102e000 00:06:16.444 EAL: PCI memory 
mapped at 0x20200102f000 00:06:16.444 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.7 (socket 0) 00:06:16.444 EAL: PCI device 0000:1c:02.0 on NUMA socket 0 00:06:16.444 EAL: probe driver: 8086:37c9 qat 00:06:16.444 EAL: PCI memory mapped at 0x202001030000 00:06:16.444 EAL: PCI memory mapped at 0x202001031000 00:06:16.444 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.0 (socket 0) 00:06:16.444 EAL: PCI device 0000:1c:02.1 on NUMA socket 0 00:06:16.444 EAL: probe driver: 8086:37c9 qat 00:06:16.444 EAL: PCI memory mapped at 0x202001032000 00:06:16.445 EAL: PCI memory mapped at 0x202001033000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.1 (socket 0) 00:06:16.445 EAL: PCI device 0000:1c:02.2 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x202001034000 00:06:16.445 EAL: PCI memory mapped at 0x202001035000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.2 (socket 0) 00:06:16.445 EAL: PCI device 0000:1c:02.3 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x202001036000 00:06:16.445 EAL: PCI memory mapped at 0x202001037000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.3 (socket 0) 00:06:16.445 EAL: PCI device 0000:1c:02.4 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x202001038000 00:06:16.445 EAL: PCI memory mapped at 0x202001039000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.4 (socket 0) 00:06:16.445 EAL: PCI device 0000:1c:02.5 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x20200103a000 00:06:16.445 EAL: PCI memory mapped at 0x20200103b000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.5 (socket 0) 00:06:16.445 EAL: PCI device 0000:1c:02.6 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 
00:06:16.445 EAL: PCI memory mapped at 0x20200103c000 00:06:16.445 EAL: PCI memory mapped at 0x20200103d000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.6 (socket 0) 00:06:16.445 EAL: PCI device 0000:1c:02.7 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x20200103e000 00:06:16.445 EAL: PCI memory mapped at 0x20200103f000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.7 (socket 0) 00:06:16.445 EAL: PCI device 0000:1e:01.0 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x202001040000 00:06:16.445 EAL: PCI memory mapped at 0x202001041000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.0 (socket 0) 00:06:16.445 EAL: PCI device 0000:1e:01.1 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x202001042000 00:06:16.445 EAL: PCI memory mapped at 0x202001043000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.1 (socket 0) 00:06:16.445 EAL: PCI device 0000:1e:01.2 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x202001044000 00:06:16.445 EAL: PCI memory mapped at 0x202001045000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.2 (socket 0) 00:06:16.445 EAL: PCI device 0000:1e:01.3 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x202001046000 00:06:16.445 EAL: PCI memory mapped at 0x202001047000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.3 (socket 0) 00:06:16.445 EAL: PCI device 0000:1e:01.4 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x202001048000 00:06:16.445 EAL: PCI memory mapped at 0x202001049000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.4 (socket 0) 00:06:16.445 EAL: PCI 
device 0000:1e:01.5 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x20200104a000 00:06:16.445 EAL: PCI memory mapped at 0x20200104b000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.5 (socket 0) 00:06:16.445 EAL: PCI device 0000:1e:01.6 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x20200104c000 00:06:16.445 EAL: PCI memory mapped at 0x20200104d000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.6 (socket 0) 00:06:16.445 EAL: PCI device 0000:1e:01.7 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x20200104e000 00:06:16.445 EAL: PCI memory mapped at 0x20200104f000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.7 (socket 0) 00:06:16.445 EAL: PCI device 0000:1e:02.0 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x202001050000 00:06:16.445 EAL: PCI memory mapped at 0x202001051000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.0 (socket 0) 00:06:16.445 EAL: PCI device 0000:1e:02.1 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x202001052000 00:06:16.445 EAL: PCI memory mapped at 0x202001053000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.1 (socket 0) 00:06:16.445 EAL: PCI device 0000:1e:02.2 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x202001054000 00:06:16.445 EAL: PCI memory mapped at 0x202001055000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.2 (socket 0) 00:06:16.445 EAL: PCI device 0000:1e:02.3 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x202001056000 00:06:16.445 EAL: PCI memory mapped at 0x202001057000 00:06:16.445 EAL: Probe 
PCI driver: qat (8086:37c9) device: 0000:1e:02.3 (socket 0) 00:06:16.445 EAL: PCI device 0000:1e:02.4 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x202001058000 00:06:16.445 EAL: PCI memory mapped at 0x202001059000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.4 (socket 0) 00:06:16.445 EAL: PCI device 0000:1e:02.5 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x20200105a000 00:06:16.445 EAL: PCI memory mapped at 0x20200105b000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.5 (socket 0) 00:06:16.445 EAL: PCI device 0000:1e:02.6 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x20200105c000 00:06:16.445 EAL: PCI memory mapped at 0x20200105d000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.6 (socket 0) 00:06:16.445 EAL: PCI device 0000:1e:02.7 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x20200105e000 00:06:16.445 EAL: PCI memory mapped at 0x20200105f000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.7 (socket 0) 00:06:16.445 EAL: PCI device 0000:3d:01.0 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x202001060000 00:06:16.445 EAL: PCI memory mapped at 0x202001061000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:06:16.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.445 EAL: PCI memory unmapped at 0x202001060000 00:06:16.445 EAL: PCI memory unmapped at 0x202001061000 00:06:16.445 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:16.445 EAL: PCI device 0000:3d:01.1 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x202001062000 00:06:16.445 EAL: PCI memory mapped at 
0x202001063000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:06:16.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.445 EAL: PCI memory unmapped at 0x202001062000 00:06:16.445 EAL: PCI memory unmapped at 0x202001063000 00:06:16.445 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:16.445 EAL: PCI device 0000:3d:01.2 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x202001064000 00:06:16.445 EAL: PCI memory mapped at 0x202001065000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:06:16.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.445 EAL: PCI memory unmapped at 0x202001064000 00:06:16.445 EAL: PCI memory unmapped at 0x202001065000 00:06:16.445 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:16.445 EAL: PCI device 0000:3d:01.3 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x202001066000 00:06:16.445 EAL: PCI memory mapped at 0x202001067000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:06:16.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.445 EAL: PCI memory unmapped at 0x202001066000 00:06:16.445 EAL: PCI memory unmapped at 0x202001067000 00:06:16.445 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:16.445 EAL: PCI device 0000:3d:01.4 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x202001068000 00:06:16.445 EAL: PCI memory mapped at 0x202001069000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:06:16.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.445 EAL: PCI memory unmapped at 0x202001068000 00:06:16.445 EAL: PCI memory unmapped at 0x202001069000 00:06:16.445 EAL: Requested device 0000:3d:01.4 cannot be 
used 00:06:16.445 EAL: PCI device 0000:3d:01.5 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x20200106a000 00:06:16.445 EAL: PCI memory mapped at 0x20200106b000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:06:16.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.445 EAL: PCI memory unmapped at 0x20200106a000 00:06:16.445 EAL: PCI memory unmapped at 0x20200106b000 00:06:16.445 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:16.445 EAL: PCI device 0000:3d:01.6 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x20200106c000 00:06:16.445 EAL: PCI memory mapped at 0x20200106d000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:06:16.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.445 EAL: PCI memory unmapped at 0x20200106c000 00:06:16.445 EAL: PCI memory unmapped at 0x20200106d000 00:06:16.445 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:16.445 EAL: PCI device 0000:3d:01.7 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x20200106e000 00:06:16.445 EAL: PCI memory mapped at 0x20200106f000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:06:16.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.445 EAL: PCI memory unmapped at 0x20200106e000 00:06:16.445 EAL: PCI memory unmapped at 0x20200106f000 00:06:16.445 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:16.445 EAL: PCI device 0000:3d:02.0 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x202001070000 00:06:16.445 EAL: PCI memory mapped at 0x202001071000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:06:16.445 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:06:16.445 EAL: PCI memory unmapped at 0x202001070000 00:06:16.445 EAL: PCI memory unmapped at 0x202001071000 00:06:16.445 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:16.445 EAL: PCI device 0000:3d:02.1 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x202001072000 00:06:16.445 EAL: PCI memory mapped at 0x202001073000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:06:16.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.445 EAL: PCI memory unmapped at 0x202001072000 00:06:16.445 EAL: PCI memory unmapped at 0x202001073000 00:06:16.445 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:16.445 EAL: PCI device 0000:3d:02.2 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x202001074000 00:06:16.445 EAL: PCI memory mapped at 0x202001075000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:06:16.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.445 EAL: PCI memory unmapped at 0x202001074000 00:06:16.445 EAL: PCI memory unmapped at 0x202001075000 00:06:16.445 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:16.445 EAL: PCI device 0000:3d:02.3 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x202001076000 00:06:16.445 EAL: PCI memory mapped at 0x202001077000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:06:16.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.445 EAL: PCI memory unmapped at 0x202001076000 00:06:16.445 EAL: PCI memory unmapped at 0x202001077000 00:06:16.445 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:16.445 EAL: PCI device 0000:3d:02.4 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 
0x202001078000 00:06:16.445 EAL: PCI memory mapped at 0x202001079000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:06:16.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.445 EAL: PCI memory unmapped at 0x202001078000 00:06:16.445 EAL: PCI memory unmapped at 0x202001079000 00:06:16.445 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:16.445 EAL: PCI device 0000:3d:02.5 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x20200107a000 00:06:16.445 EAL: PCI memory mapped at 0x20200107b000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:06:16.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.445 EAL: PCI memory unmapped at 0x20200107a000 00:06:16.445 EAL: PCI memory unmapped at 0x20200107b000 00:06:16.445 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:16.445 EAL: PCI device 0000:3d:02.6 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x20200107c000 00:06:16.445 EAL: PCI memory mapped at 0x20200107d000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:06:16.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.445 EAL: PCI memory unmapped at 0x20200107c000 00:06:16.445 EAL: PCI memory unmapped at 0x20200107d000 00:06:16.445 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:16.445 EAL: PCI device 0000:3d:02.7 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x20200107e000 00:06:16.445 EAL: PCI memory mapped at 0x20200107f000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:06:16.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.445 EAL: PCI memory unmapped at 0x20200107e000 00:06:16.445 EAL: PCI memory unmapped at 0x20200107f000 
00:06:16.445 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:16.445 EAL: PCI device 0000:3f:01.0 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x202001080000 00:06:16.445 EAL: PCI memory mapped at 0x202001081000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:06:16.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.445 EAL: PCI memory unmapped at 0x202001080000 00:06:16.445 EAL: PCI memory unmapped at 0x202001081000 00:06:16.445 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:16.445 EAL: PCI device 0000:3f:01.1 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x202001082000 00:06:16.445 EAL: PCI memory mapped at 0x202001083000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:06:16.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.445 EAL: PCI memory unmapped at 0x202001082000 00:06:16.445 EAL: PCI memory unmapped at 0x202001083000 00:06:16.445 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:16.445 EAL: PCI device 0000:3f:01.2 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x202001084000 00:06:16.445 EAL: PCI memory mapped at 0x202001085000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:06:16.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.445 EAL: PCI memory unmapped at 0x202001084000 00:06:16.445 EAL: PCI memory unmapped at 0x202001085000 00:06:16.445 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:16.445 EAL: PCI device 0000:3f:01.3 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x202001086000 00:06:16.445 EAL: PCI memory mapped at 0x202001087000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 
(socket 0) 00:06:16.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.445 EAL: PCI memory unmapped at 0x202001086000 00:06:16.445 EAL: PCI memory unmapped at 0x202001087000 00:06:16.445 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:16.445 EAL: PCI device 0000:3f:01.4 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x202001088000 00:06:16.445 EAL: PCI memory mapped at 0x202001089000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:06:16.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.445 EAL: PCI memory unmapped at 0x202001088000 00:06:16.445 EAL: PCI memory unmapped at 0x202001089000 00:06:16.445 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:16.445 EAL: PCI device 0000:3f:01.5 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x20200108a000 00:06:16.445 EAL: PCI memory mapped at 0x20200108b000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:06:16.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.445 EAL: PCI memory unmapped at 0x20200108a000 00:06:16.445 EAL: PCI memory unmapped at 0x20200108b000 00:06:16.445 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:16.445 EAL: PCI device 0000:3f:01.6 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x20200108c000 00:06:16.445 EAL: PCI memory mapped at 0x20200108d000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:06:16.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.445 EAL: PCI memory unmapped at 0x20200108c000 00:06:16.445 EAL: PCI memory unmapped at 0x20200108d000 00:06:16.445 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:16.445 EAL: PCI device 0000:3f:01.7 on NUMA socket 0 00:06:16.445 EAL: probe 
driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x20200108e000 00:06:16.445 EAL: PCI memory mapped at 0x20200108f000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:06:16.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.445 EAL: PCI memory unmapped at 0x20200108e000 00:06:16.445 EAL: PCI memory unmapped at 0x20200108f000 00:06:16.445 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:16.445 EAL: PCI device 0000:3f:02.0 on NUMA socket 0 00:06:16.445 EAL: probe driver: 8086:37c9 qat 00:06:16.445 EAL: PCI memory mapped at 0x202001090000 00:06:16.445 EAL: PCI memory mapped at 0x202001091000 00:06:16.445 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:06:16.445 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.446 EAL: PCI memory unmapped at 0x202001090000 00:06:16.446 EAL: PCI memory unmapped at 0x202001091000 00:06:16.446 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:16.446 EAL: PCI device 0000:3f:02.1 on NUMA socket 0 00:06:16.446 EAL: probe driver: 8086:37c9 qat 00:06:16.446 EAL: PCI memory mapped at 0x202001092000 00:06:16.446 EAL: PCI memory mapped at 0x202001093000 00:06:16.446 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:06:16.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.446 EAL: PCI memory unmapped at 0x202001092000 00:06:16.446 EAL: PCI memory unmapped at 0x202001093000 00:06:16.446 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:16.446 EAL: PCI device 0000:3f:02.2 on NUMA socket 0 00:06:16.446 EAL: probe driver: 8086:37c9 qat 00:06:16.446 EAL: PCI memory mapped at 0x202001094000 00:06:16.446 EAL: PCI memory mapped at 0x202001095000 00:06:16.446 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:06:16.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.446 EAL: PCI memory unmapped at 0x202001094000 
00:06:16.446 EAL: PCI memory unmapped at 0x202001095000 00:06:16.446 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:16.446 EAL: PCI device 0000:3f:02.3 on NUMA socket 0 00:06:16.446 EAL: probe driver: 8086:37c9 qat 00:06:16.446 EAL: PCI memory mapped at 0x202001096000 00:06:16.446 EAL: PCI memory mapped at 0x202001097000 00:06:16.446 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:06:16.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.446 EAL: PCI memory unmapped at 0x202001096000 00:06:16.446 EAL: PCI memory unmapped at 0x202001097000 00:06:16.446 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:16.446 EAL: PCI device 0000:3f:02.4 on NUMA socket 0 00:06:16.446 EAL: probe driver: 8086:37c9 qat 00:06:16.446 EAL: PCI memory mapped at 0x202001098000 00:06:16.446 EAL: PCI memory mapped at 0x202001099000 00:06:16.446 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:06:16.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.446 EAL: PCI memory unmapped at 0x202001098000 00:06:16.446 EAL: PCI memory unmapped at 0x202001099000 00:06:16.446 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:16.446 EAL: PCI device 0000:3f:02.5 on NUMA socket 0 00:06:16.446 EAL: probe driver: 8086:37c9 qat 00:06:16.446 EAL: PCI memory mapped at 0x20200109a000 00:06:16.446 EAL: PCI memory mapped at 0x20200109b000 00:06:16.446 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:06:16.446 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:16.446 EAL: PCI memory unmapped at 0x20200109a000 00:06:16.446 EAL: PCI memory unmapped at 0x20200109b000 00:06:16.446 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:16.446 EAL: PCI device 0000:3f:02.6 on NUMA socket 0 00:06:16.446 EAL: probe driver: 8086:37c9 qat 00:06:16.446 EAL: PCI memory mapped at 0x20200109c000 00:06:16.446 EAL: PCI memory mapped at 0x20200109d000 00:06:16.446 EAL: 
Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0)
00:06:16.446 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:16.446 EAL: PCI memory unmapped at 0x20200109c000
00:06:16.446 EAL: PCI memory unmapped at 0x20200109d000
00:06:16.446 EAL: Requested device 0000:3f:02.6 cannot be used
00:06:16.446 EAL: PCI device 0000:3f:02.7 on NUMA socket 0
00:06:16.446 EAL: probe driver: 8086:37c9 qat
00:06:16.446 EAL: PCI memory mapped at 0x20200109e000
00:06:16.446 EAL: PCI memory mapped at 0x20200109f000
00:06:16.446 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0)
00:06:16.446 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:06:16.446 EAL: PCI memory unmapped at 0x20200109e000
00:06:16.446 EAL: PCI memory unmapped at 0x20200109f000
00:06:16.446 EAL: Requested device 0000:3f:02.7 cannot be used
00:06:16.446 EAL: No shared files mode enabled, IPC is disabled
00:06:16.446 EAL: No shared files mode enabled, IPC is disabled
00:06:16.446 EAL: No PCI address specified using 'addr=' in: bus=pci
00:06:16.446 EAL: Mem event callback 'spdk:(nil)' registered
00:06:16.446
00:06:16.446
00:06:16.446 CUnit - A unit testing framework for C - Version 2.1-3
00:06:16.446 http://cunit.sourceforge.net/
00:06:16.446
00:06:16.446
00:06:16.446 Suite: components_suite
00:06:17.012 Test: vtophys_malloc_test ...passed
00:06:17.012 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:06:17.012 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:17.012 EAL: Restoring previous memory policy: 4
00:06:17.012 EAL: Calling mem event callback 'spdk:(nil)'
00:06:17.012 EAL: request: mp_malloc_sync
00:06:17.012 EAL: No shared files mode enabled, IPC is disabled
00:06:17.012 EAL: Heap on socket 0 was expanded by 4MB
00:06:17.012 EAL: Calling mem event callback 'spdk:(nil)'
00:06:17.012 EAL: request: mp_malloc_sync
00:06:17.012 EAL: No shared files mode enabled, IPC is disabled
00:06:17.012 EAL: Heap on socket 0 was shrunk by 4MB
00:06:17.012 EAL: Trying to obtain current memory policy.
00:06:17.012 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:17.012 EAL: Restoring previous memory policy: 4
00:06:17.012 EAL: Calling mem event callback 'spdk:(nil)'
00:06:17.012 EAL: request: mp_malloc_sync
00:06:17.012 EAL: No shared files mode enabled, IPC is disabled
00:06:17.012 EAL: Heap on socket 0 was expanded by 6MB
00:06:17.012 EAL: Calling mem event callback 'spdk:(nil)'
00:06:17.012 EAL: request: mp_malloc_sync
00:06:17.012 EAL: No shared files mode enabled, IPC is disabled
00:06:17.012 EAL: Heap on socket 0 was shrunk by 6MB
00:06:17.012 EAL: Trying to obtain current memory policy.
00:06:17.012 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:17.012 EAL: Restoring previous memory policy: 4
00:06:17.012 EAL: Calling mem event callback 'spdk:(nil)'
00:06:17.012 EAL: request: mp_malloc_sync
00:06:17.012 EAL: No shared files mode enabled, IPC is disabled
00:06:17.012 EAL: Heap on socket 0 was expanded by 10MB
00:06:17.012 EAL: Calling mem event callback 'spdk:(nil)'
00:06:17.012 EAL: request: mp_malloc_sync
00:06:17.012 EAL: No shared files mode enabled, IPC is disabled
00:06:17.012 EAL: Heap on socket 0 was shrunk by 10MB
00:06:17.012 EAL: Trying to obtain current memory policy.
00:06:17.012 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:17.012 EAL: Restoring previous memory policy: 4
00:06:17.012 EAL: Calling mem event callback 'spdk:(nil)'
00:06:17.012 EAL: request: mp_malloc_sync
00:06:17.012 EAL: No shared files mode enabled, IPC is disabled
00:06:17.012 EAL: Heap on socket 0 was expanded by 18MB
00:06:17.012 EAL: Calling mem event callback 'spdk:(nil)'
00:06:17.012 EAL: request: mp_malloc_sync
00:06:17.012 EAL: No shared files mode enabled, IPC is disabled
00:06:17.012 EAL: Heap on socket 0 was shrunk by 18MB
00:06:17.270 EAL: Trying to obtain current memory policy.
00:06:17.270 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:17.270 EAL: Restoring previous memory policy: 4
00:06:17.270 EAL: Calling mem event callback 'spdk:(nil)'
00:06:17.270 EAL: request: mp_malloc_sync
00:06:17.270 EAL: No shared files mode enabled, IPC is disabled
00:06:17.270 EAL: Heap on socket 0 was expanded by 34MB
00:06:17.270 EAL: Calling mem event callback 'spdk:(nil)'
00:06:17.270 EAL: request: mp_malloc_sync
00:06:17.270 EAL: No shared files mode enabled, IPC is disabled
00:06:17.270 EAL: Heap on socket 0 was shrunk by 34MB
00:06:17.270 EAL: Trying to obtain current memory policy.
00:06:17.270 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:17.270 EAL: Restoring previous memory policy: 4
00:06:17.270 EAL: Calling mem event callback 'spdk:(nil)'
00:06:17.270 EAL: request: mp_malloc_sync
00:06:17.271 EAL: No shared files mode enabled, IPC is disabled
00:06:17.271 EAL: Heap on socket 0 was expanded by 66MB
00:06:17.528 EAL: Calling mem event callback 'spdk:(nil)'
00:06:17.528 EAL: request: mp_malloc_sync
00:06:17.528 EAL: No shared files mode enabled, IPC is disabled
00:06:17.528 EAL: Heap on socket 0 was shrunk by 66MB
00:06:17.785 EAL: Trying to obtain current memory policy.
00:06:17.785 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:17.785 EAL: Restoring previous memory policy: 4
00:06:17.785 EAL: Calling mem event callback 'spdk:(nil)'
00:06:17.785 EAL: request: mp_malloc_sync
00:06:17.786 EAL: No shared files mode enabled, IPC is disabled
00:06:17.786 EAL: Heap on socket 0 was expanded by 130MB
00:06:18.044 EAL: Calling mem event callback 'spdk:(nil)'
00:06:18.044 EAL: request: mp_malloc_sync
00:06:18.044 EAL: No shared files mode enabled, IPC is disabled
00:06:18.044 EAL: Heap on socket 0 was shrunk by 130MB
00:06:18.302 EAL: Trying to obtain current memory policy.
00:06:18.302 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:18.559 EAL: Restoring previous memory policy: 4
00:06:18.559 EAL: Calling mem event callback 'spdk:(nil)'
00:06:18.559 EAL: request: mp_malloc_sync
00:06:18.559 EAL: No shared files mode enabled, IPC is disabled
00:06:18.559 EAL: Heap on socket 0 was expanded by 258MB
00:06:19.125 EAL: Calling mem event callback 'spdk:(nil)'
00:06:19.125 EAL: request: mp_malloc_sync
00:06:19.125 EAL: No shared files mode enabled, IPC is disabled
00:06:19.125 EAL: Heap on socket 0 was shrunk by 258MB
00:06:19.690 EAL: Trying to obtain current memory policy.
00:06:19.691 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:19.948 EAL: Restoring previous memory policy: 4
00:06:19.948 EAL: Calling mem event callback 'spdk:(nil)'
00:06:19.948 EAL: request: mp_malloc_sync
00:06:19.948 EAL: No shared files mode enabled, IPC is disabled
00:06:19.949 EAL: Heap on socket 0 was expanded by 514MB
00:06:21.323 EAL: Calling mem event callback 'spdk:(nil)'
00:06:21.323 EAL: request: mp_malloc_sync
00:06:21.323 EAL: No shared files mode enabled, IPC is disabled
00:06:21.323 EAL: Heap on socket 0 was shrunk by 514MB
00:06:22.694 EAL: Trying to obtain current memory policy.
00:06:22.695 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:22.695 EAL: Restoring previous memory policy: 4
00:06:22.695 EAL: Calling mem event callback 'spdk:(nil)'
00:06:22.695 EAL: request: mp_malloc_sync
00:06:22.695 EAL: No shared files mode enabled, IPC is disabled
00:06:22.695 EAL: Heap on socket 0 was expanded by 1026MB
00:06:25.973 EAL: Calling mem event callback 'spdk:(nil)'
00:06:25.973 EAL: request: mp_malloc_sync
00:06:25.973 EAL: No shared files mode enabled, IPC is disabled
00:06:25.973 EAL: Heap on socket 0 was shrunk by 1026MB
00:06:28.496 passed
00:06:28.496
00:06:28.496 Run Summary: Type Total Ran Passed Failed Inactive
00:06:28.496 suites 1 1 n/a 0 0
00:06:28.496 tests 2 2 2 0 0
00:06:28.496 asserts 6335 6335 6335 0 n/a
00:06:28.496
00:06:28.496 Elapsed time = 11.403 seconds
00:06:28.496 EAL: No shared files mode enabled, IPC is disabled
00:06:28.496 EAL: No shared files mode enabled, IPC is disabled
00:06:28.496 EAL: No shared files mode enabled, IPC is disabled
00:06:28.496
00:06:28.496 real 0m11.806s
00:06:28.496 user 0m10.715s
00:06:28.496 sys 0m1.021s
00:06:28.496 10:48:35 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:28.496 10:48:35 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:06:28.496 ************************************
00:06:28.496 END TEST env_vtophys
00:06:28.496 ************************************
00:06:28.496 10:48:35 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/pci/pci_ut
00:06:28.496 10:48:35 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:28.496 10:48:35 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:28.496 10:48:35 env -- common/autotest_common.sh@10 -- # set +x
00:06:28.496 ************************************
00:06:28.496 START TEST env_pci
00:06:28.496 ************************************
00:06:28.496 10:48:35 env.env_pci -- common/autotest_common.sh@1125 -- #
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/pci/pci_ut
00:06:28.496
00:06:28.496
00:06:28.496 CUnit - A unit testing framework for C - Version 2.1-3
00:06:28.496 http://cunit.sourceforge.net/
00:06:28.496
00:06:28.496
00:06:28.496 Suite: pci
00:06:28.496 Test: pci_hook ...[2024-07-25 10:48:35.175709] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3464867 has claimed it
00:06:28.496 EAL: Cannot find device (10000:00:01.0)
00:06:28.496 EAL: Failed to attach device on primary process
00:06:28.496 passed
00:06:28.496
00:06:28.496 Run Summary: Type Total Ran Passed Failed Inactive
00:06:28.496 suites 1 1 n/a 0 0
00:06:28.496 tests 1 1 1 0 0
00:06:28.496 asserts 25 25 25 0 n/a
00:06:28.496
00:06:28.496 Elapsed time = 0.088 seconds
00:06:28.496
00:06:28.496 real 0m0.191s
00:06:28.496 user 0m0.066s
00:06:28.496 sys 0m0.124s
00:06:28.496 10:48:35 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:28.496 10:48:35 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:06:28.496 ************************************
00:06:28.496 END TEST env_pci
00:06:28.496 ************************************
00:06:28.496 10:48:35 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:06:28.496 10:48:35 env -- env/env.sh@15 -- # uname
00:06:28.496 10:48:35 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:06:28.496 10:48:35 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:06:28.496 10:48:35 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:06:28.496 10:48:35 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:06:28.496 10:48:35 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:28.496 10:48:35 env -- common/autotest_common.sh@10 -- # set +x
00:06:28.496 ************************************ 00:06:28.496 START TEST env_dpdk_post_init 00:06:28.496 ************************************ 00:06:28.496 10:48:35 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:28.497 EAL: Detected CPU lcores: 112 00:06:28.497 EAL: Detected NUMA nodes: 2 00:06:28.497 EAL: Detected shared linkage of DPDK 00:06:28.497 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:28.497 EAL: Selected IOVA mode 'PA' 00:06:28.497 EAL: VFIO support initialized 00:06:28.497 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.0 (socket 0) 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_asym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_sym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.497 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.1 (socket 0) 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_asym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_sym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.497 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.2 (socket 0) 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_asym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_sym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.497 EAL: Probe PCI driver: qat 
(8086:37c9) device: 0000:1a:01.3 (socket 0) 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_asym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_sym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.497 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.4 (socket 0) 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_asym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_sym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.497 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.5 (socket 0) 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_asym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_sym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.497 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.6 (socket 0) 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_asym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_sym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.497 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.7 (socket 0) 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_asym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.497 CRYPTODEV: 
Creating cryptodev 0000:1a:01.7_qat_sym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.497 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.0 (socket 0) 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_asym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_sym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.497 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.1 (socket 0) 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_asym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_sym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.497 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.2 (socket 0) 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_asym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_sym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.497 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.3 (socket 0) 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_asym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_sym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.497 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.4 (socket 0) 
00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_asym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_sym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.497 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.5 (socket 0) 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_asym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_sym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.497 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.6 (socket 0) 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_asym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_sym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.497 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.7 (socket 0) 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_asym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_sym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.497 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.0 (socket 0) 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_asym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_sym 
00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.497 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.1 (socket 0) 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_asym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_sym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.497 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.2 (socket 0) 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_asym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_sym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.497 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.3 (socket 0) 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_asym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_sym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.497 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.4 (socket 0) 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_asym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_sym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.497 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.5 (socket 0) 00:06:28.497 CRYPTODEV: Creating cryptodev 
0000:1c:01.5_qat_asym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_sym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.497 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.6 (socket 0) 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_asym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_sym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.497 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.7 (socket 0) 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_asym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_sym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.497 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.0 (socket 0) 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_asym 00:06:28.497 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.497 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_sym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.498 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.1 (socket 0) 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_asym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_sym 00:06:28.498 CRYPTODEV: Initialisation 
parameters - name: 0000:1c:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.498 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.2 (socket 0) 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_asym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_sym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.498 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.3 (socket 0) 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_asym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_sym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.498 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.4 (socket 0) 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_asym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_sym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.498 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.5 (socket 0) 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_asym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_sym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.498 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.6 (socket 0) 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_asym 00:06:28.498 CRYPTODEV: 
Initialisation parameters - name: 0000:1c:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_sym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.498 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.7 (socket 0) 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_asym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_sym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.498 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.0 (socket 0) 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_asym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_sym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.498 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.1 (socket 0) 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_asym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_sym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.498 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.2 (socket 0) 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_asym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_sym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_sym,socket id: 0, 
max queue pairs: 0 00:06:28.498 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.3 (socket 0) 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_asym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_sym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.498 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.4 (socket 0) 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_asym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_sym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.498 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.5 (socket 0) 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_asym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_sym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.498 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.6 (socket 0) 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_asym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_sym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.498 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.7 (socket 0) 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_asym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 
0000:1e:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_sym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.498 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.0 (socket 0) 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_asym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_sym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.498 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.1 (socket 0) 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_asym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_sym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.498 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.2 (socket 0) 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_asym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_sym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.498 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.3 (socket 0) 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_asym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_sym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.498 
EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.4 (socket 0) 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_asym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_sym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.498 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.5 (socket 0) 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_asym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_sym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.498 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.6 (socket 0) 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_asym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_sym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.498 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.7 (socket 0) 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_asym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:28.498 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_sym 00:06:28.498 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:28.498 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:06:28.498 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.498 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:28.498 EAL: Probe PCI driver: 
qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:06:28.498 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.498 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:28.498 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:06:28.498 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.499 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:28.499 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:06:28.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.499 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:28.499 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:06:28.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.499 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:28.499 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:06:28.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.499 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:28.499 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:06:28.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.499 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:28.499 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:06:28.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.499 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:28.499 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:06:28.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.499 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:28.499 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:06:28.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.499 EAL: Requested device 0000:3d:02.1 cannot be used 
00:06:28.499 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:06:28.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.499 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:28.499 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:06:28.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.499 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:28.499 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:06:28.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.499 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:28.499 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:06:28.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.499 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:28.756 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:06:28.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.756 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:28.756 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:06:28.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.756 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:28.756 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:06:28.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.756 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:28.756 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:06:28.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.756 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:28.756 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:06:28.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.756 EAL: Requested device 
0000:3f:01.2 cannot be used 00:06:28.756 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:06:28.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.756 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:28.756 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:06:28.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.756 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:28.756 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:06:28.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.756 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:28.756 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:06:28.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.756 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:28.756 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:06:28.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.756 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:28.756 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:06:28.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.756 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:28.756 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:06:28.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.756 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:28.756 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:06:28.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.756 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:28.756 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:06:28.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:06:28.756 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:28.756 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:06:28.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.756 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:28.756 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:06:28.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.756 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:28.756 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:06:28.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.756 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:28.756 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:06:28.756 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:28.756 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:28.756 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:28.756 EAL: Using IOMMU type 1 (Type 1) 00:06:28.756 EAL: Ignore mapping IO port bar(1) 00:06:28.756 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:06:28.756 EAL: Ignore mapping IO port bar(1) 00:06:28.756 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:06:28.756 EAL: Ignore mapping IO port bar(1) 00:06:28.756 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:06:28.756 EAL: Ignore mapping IO port bar(1) 00:06:28.756 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:06:29.014 EAL: Ignore mapping IO port bar(1) 00:06:29.014 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:06:29.014 EAL: Ignore mapping IO port bar(1) 00:06:29.014 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:06:29.014 EAL: Ignore mapping IO port bar(1) 00:06:29.014 EAL: Probe PCI driver: 
spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:06:29.014 EAL: Ignore mapping IO port bar(1) 00:06:29.014 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:06:29.014 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:06:29.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.014 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:29.014 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:06:29.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.014 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:29.014 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:06:29.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.014 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:29.014 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:06:29.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.014 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:29.015 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:06:29.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.015 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:29.015 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:06:29.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.015 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:29.015 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:06:29.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.015 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:29.015 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:06:29.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.015 EAL: Requested device 0000:3d:01.7 cannot be used 
00:06:29.015 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:06:29.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.015 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:29.015 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:06:29.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.015 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:29.015 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:06:29.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.015 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:29.015 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:06:29.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.015 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:29.015 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:06:29.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.015 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:29.015 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:06:29.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.015 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:29.015 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:06:29.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.015 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:29.015 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:06:29.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.015 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:29.015 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:06:29.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.015 EAL: Requested device 
0000:3f:01.0 cannot be used 00:06:29.015 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:06:29.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.015 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:29.015 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:06:29.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.015 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:29.015 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:06:29.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.015 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:29.015 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:06:29.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.015 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:29.015 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:06:29.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.015 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:29.015 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:06:29.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.015 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:29.015 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:06:29.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.015 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:29.015 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:06:29.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.015 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:29.015 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:06:29.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:06:29.015 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:29.015 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:06:29.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.015 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:29.015 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:06:29.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.015 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:29.015 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:06:29.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.015 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:29.015 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:06:29.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.015 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:29.015 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:06:29.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.015 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:29.015 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:06:29.015 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:29.015 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:29.015 EAL: Ignore mapping IO port bar(1) 00:06:29.015 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:06:29.015 EAL: Ignore mapping IO port bar(1) 00:06:29.015 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:06:29.015 EAL: Ignore mapping IO port bar(1) 00:06:29.015 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:06:29.015 EAL: Ignore mapping IO port bar(1) 00:06:29.015 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:06:29.015 EAL: Ignore mapping 
IO port bar(1)
00:06:29.015 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:06:29.015 EAL: Ignore mapping IO port bar(1)
00:06:29.015 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:06:29.015 EAL: Ignore mapping IO port bar(1)
00:06:29.015 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:06:29.015 EAL: Ignore mapping IO port bar(1)
00:06:29.015 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:06:29.947 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1)
00:06:34.153 EAL: Releasing PCI mapped resource for 0000:d8:00.0
00:06:34.153 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001120000
00:06:34.153 Starting DPDK initialization...
00:06:34.153 Starting SPDK post initialization...
00:06:34.153 SPDK NVMe probe
00:06:34.153 Attaching to 0000:d8:00.0
00:06:34.153 Attached to 0000:d8:00.0
00:06:34.153 Cleaning up...
00:06:34.153
00:06:34.153 real	0m5.559s
00:06:34.153 user	0m4.057s
00:06:34.153 sys	0m0.552s
00:06:34.153 10:48:40 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:34.153 10:48:40 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:06:34.153 ************************************
00:06:34.153 END TEST env_dpdk_post_init
00:06:34.153 ************************************
00:06:34.153 10:48:41 env -- env/env.sh@26 -- # uname
00:06:34.153 10:48:41 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:06:34.153 10:48:41 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:06:34.153 10:48:41 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:34.153 10:48:41 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:34.153 10:48:41 env -- common/autotest_common.sh@10 -- # set +x
00:06:34.153 ************************************
00:06:34.153 START TEST
env_mem_callbacks 00:06:34.153 ************************************ 00:06:34.153 10:48:41 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:34.153 EAL: Detected CPU lcores: 112 00:06:34.153 EAL: Detected NUMA nodes: 2 00:06:34.153 EAL: Detected shared linkage of DPDK 00:06:34.153 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:34.153 EAL: Selected IOVA mode 'PA' 00:06:34.153 EAL: VFIO support initialized 00:06:34.153 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.0 (socket 0) 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_asym 00:06:34.153 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1a:01.0_qat_sym 00:06:34.153 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.153 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.1 (socket 0) 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_asym 00:06:34.153 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1a:01.1_qat_sym 00:06:34.153 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.153 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.2 (socket 0) 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_asym 00:06:34.153 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1a:01.2_qat_sym 00:06:34.153 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.153 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.3 (socket 0) 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_asym 00:06:34.153 
CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1a:01.3_qat_sym 00:06:34.153 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.153 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.4 (socket 0) 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_asym 00:06:34.153 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1a:01.4_qat_sym 00:06:34.153 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.153 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.5 (socket 0) 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_asym 00:06:34.153 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1a:01.5_qat_sym 00:06:34.153 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.153 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.6 (socket 0) 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_asym 00:06:34.153 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1a:01.6_qat_sym 00:06:34.153 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.153 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:01.7 (socket 0) 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_asym 00:06:34.153 CRYPTODEV: Initialisation parameters - name: 0000:1a:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1a:01.7_qat_sym 00:06:34.153 CRYPTODEV: Initialisation parameters - name: 
0000:1a:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.153 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.0 (socket 0) 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_asym 00:06:34.153 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1a:02.0_qat_sym 00:06:34.153 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.153 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.1 (socket 0) 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_asym 00:06:34.153 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1a:02.1_qat_sym 00:06:34.153 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.153 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.2 (socket 0) 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_asym 00:06:34.153 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1a:02.2_qat_sym 00:06:34.153 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.153 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.3 (socket 0) 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_asym 00:06:34.153 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1a:02.3_qat_sym 00:06:34.153 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.153 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.4 (socket 0) 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_asym 00:06:34.153 CRYPTODEV: Initialisation 
parameters - name: 0000:1a:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1a:02.4_qat_sym 00:06:34.153 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.153 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.5 (socket 0) 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_asym 00:06:34.153 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1a:02.5_qat_sym 00:06:34.153 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.153 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.6 (socket 0) 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_asym 00:06:34.153 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1a:02.6_qat_sym 00:06:34.153 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.153 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1a:02.7 (socket 0) 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_asym 00:06:34.153 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1a:02.7_qat_sym 00:06:34.153 CRYPTODEV: Initialisation parameters - name: 0000:1a:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.153 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.0 (socket 0) 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_asym 00:06:34.153 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1c:01.0_qat_sym 00:06:34.153 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.0_qat_sym,socket id: 0, max queue pairs: 
0 00:06:34.153 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.1 (socket 0) 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_asym 00:06:34.153 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1c:01.1_qat_sym 00:06:34.153 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.153 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.2 (socket 0) 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_asym 00:06:34.153 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1c:01.2_qat_sym 00:06:34.153 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.153 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.3 (socket 0) 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_asym 00:06:34.153 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1c:01.3_qat_sym 00:06:34.153 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.153 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.4 (socket 0) 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_asym 00:06:34.153 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1c:01.4_qat_sym 00:06:34.153 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.153 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.5 (socket 0) 00:06:34.153 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_asym 00:06:34.153 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_asym,socket id: 0, 
max queue pairs: 0 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1c:01.5_qat_sym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.154 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.6 (socket 0) 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_asym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1c:01.6_qat_sym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.154 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:01.7 (socket 0) 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_asym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1c:01.7_qat_sym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1c:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.154 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.0 (socket 0) 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_asym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1c:02.0_qat_sym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.154 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.1 (socket 0) 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_asym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1c:02.1_qat_sym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.154 EAL: Probe PCI driver: qat (8086:37c9) 
device: 0000:1c:02.2 (socket 0) 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_asym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1c:02.2_qat_sym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.154 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.3 (socket 0) 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_asym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1c:02.3_qat_sym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.154 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.4 (socket 0) 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_asym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1c:02.4_qat_sym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.154 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.5 (socket 0) 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_asym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1c:02.5_qat_sym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.154 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.6 (socket 0) 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1c:02.6_qat_asym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.154 CRYPTODEV: Creating 
cryptodev 0000:1c:02.6_qat_sym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.154 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1c:02.7 (socket 0) 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_asym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1c:02.7_qat_sym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1c:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.154 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.0 (socket 0) 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_asym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1e:01.0_qat_sym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.154 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.1 (socket 0) 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_asym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1e:01.1_qat_sym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.154 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.2 (socket 0) 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_asym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1e:01.2_qat_sym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.154 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.3 (socket 0) 00:06:34.154 
CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_asym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1e:01.3_qat_sym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.154 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.4 (socket 0) 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_asym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1e:01.4_qat_sym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.154 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.5 (socket 0) 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_asym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1e:01.5_qat_sym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.154 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.6 (socket 0) 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_asym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1e:01.6_qat_sym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.154 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:01.7 (socket 0) 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_asym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1e:01.7_qat_sym 00:06:34.154 
CRYPTODEV: Initialisation parameters - name: 0000:1e:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.154 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.0 (socket 0) 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_asym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1e:02.0_qat_sym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.154 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.1 (socket 0) 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_asym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1e:02.1_qat_sym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.154 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.2 (socket 0) 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_asym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1e:02.2_qat_sym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.154 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.3 (socket 0) 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_asym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1e:02.3_qat_sym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.154 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.4 (socket 0) 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_asym 
00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1e:02.4_qat_sym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.154 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.5 (socket 0) 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_asym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1e:02.5_qat_sym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.154 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.6 (socket 0) 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_asym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1e:02.6_qat_sym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.154 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:1e:02.7 (socket 0) 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_asym 00:06:34.154 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:06:34.154 CRYPTODEV: Creating cryptodev 0000:1e:02.7_qat_sym 00:06:34.155 CRYPTODEV: Initialisation parameters - name: 0000:1e:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:06:34.155 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:06:34.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:34.155 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:34.155 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:06:34.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:06:34.155 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:34.155 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:06:34.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:34.155 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:34.155 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:06:34.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:34.155 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:34.155 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:06:34.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:34.155 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:34.155 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:06:34.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:34.155 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:34.155 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:06:34.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:34.155 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:34.155 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:06:34.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:34.155 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:34.155 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:06:34.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:34.155 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:34.155 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:06:34.155 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:34.155 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:34.155 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:06:34.155 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:06:34.155 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:34.155 [... the same "EAL: Probe PCI driver: qat (8086:37c9)" / "qat_pci_device_allocate(): Reached maximum number of QAT devices" / "EAL: Requested device ... cannot be used" message triplet repeats for devices 0000:3d:02.3 through 0000:3f:02.7 ...] 00:06:34.155 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:34.155 00:06:34.155 00:06:34.155 CUnit - A unit testing framework for C - Version 2.1-3 00:06:34.155 http://cunit.sourceforge.net/ 00:06:34.155 00:06:34.155 00:06:34.155 Suite: memory 00:06:34.155 Test: test ... 
00:06:34.155 register 0x200000200000 2097152 00:06:34.155 malloc 3145728 00:06:34.155 register 0x200000400000 4194304 00:06:34.155 buf 0x2000004fffc0 len 3145728 PASSED 00:06:34.155 malloc 64 00:06:34.155 buf 0x2000004ffec0 len 64 PASSED 00:06:34.155 malloc 4194304 00:06:34.155 register 0x200000800000 6291456 00:06:34.155 buf 0x2000009fffc0 len 4194304 PASSED 00:06:34.155 free 0x2000004fffc0 3145728 00:06:34.155 free 0x2000004ffec0 64 00:06:34.413 unregister 0x200000400000 4194304 PASSED 00:06:34.413 free 0x2000009fffc0 4194304 00:06:34.413 unregister 0x200000800000 6291456 PASSED 00:06:34.413 malloc 8388608 00:06:34.413 register 0x200000400000 10485760 00:06:34.413 buf 0x2000005fffc0 len 8388608 PASSED 00:06:34.413 free 0x2000005fffc0 8388608 00:06:34.413 unregister 0x200000400000 10485760 PASSED 00:06:34.413 passed 00:06:34.413 00:06:34.413 Run Summary: Type Total Ran Passed Failed Inactive 00:06:34.413 suites 1 1 n/a 0 0 00:06:34.413 tests 1 1 1 0 0 00:06:34.413 asserts 15 15 15 0 n/a 00:06:34.413 00:06:34.413 Elapsed time = 0.088 seconds 00:06:34.413 00:06:34.413 real 0m0.305s 00:06:34.413 user 0m0.160s 00:06:34.413 sys 0m0.143s 00:06:34.413 10:48:41 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.413 10:48:41 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:34.413 ************************************ 00:06:34.413 END TEST env_mem_callbacks 00:06:34.413 ************************************ 00:06:34.413 00:06:34.413 real 0m18.729s 00:06:34.413 user 0m15.486s 00:06:34.413 sys 0m2.261s 00:06:34.413 10:48:41 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.413 10:48:41 env -- common/autotest_common.sh@10 -- # set +x 00:06:34.413 ************************************ 00:06:34.413 END TEST env 00:06:34.413 ************************************ 00:06:34.413 10:48:41 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/rpc.sh 00:06:34.413 10:48:41 
-- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.413 10:48:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.413 10:48:41 -- common/autotest_common.sh@10 -- # set +x 00:06:34.413 ************************************ 00:06:34.413 START TEST rpc 00:06:34.413 ************************************ 00:06:34.413 10:48:41 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/rpc.sh 00:06:34.672 * Looking for test storage... 00:06:34.672 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:06:34.672 10:48:41 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3466128 00:06:34.672 10:48:41 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:34.672 10:48:41 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:34.672 10:48:41 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3466128 00:06:34.672 10:48:41 rpc -- common/autotest_common.sh@831 -- # '[' -z 3466128 ']' 00:06:34.672 10:48:41 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.672 10:48:41 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.672 10:48:41 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.672 10:48:41 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.672 10:48:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.672 [2024-07-25 10:48:41.712965] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:34.672 [2024-07-25 10:48:41.713083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3466128 ] 00:06:34.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:34.930 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:34.930 [... the same "qat_pci_device_allocate(): Reached maximum number of QAT devices" / "EAL: Requested device ... cannot be used" message pair repeats for devices 0000:3d:01.1 through 0000:3f:02.7 ...] [2024-07-25 10:48:41.940848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.188 [2024-07-25 10:48:42.217610] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:35.188 [2024-07-25 10:48:42.217670] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3466128' to capture a snapshot of events at runtime. 00:06:35.188 [2024-07-25 10:48:42.217687] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:35.188 [2024-07-25 10:48:42.217705] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:35.188 [2024-07-25 10:48:42.217719] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3466128 for offline analysis/debug. 
00:06:35.188 [2024-07-25 10:48:42.217772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.563 10:48:43 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:36.563 10:48:43 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:36.563 10:48:43 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:06:36.563 10:48:43 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:06:36.563 10:48:43 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:36.563 10:48:43 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:36.563 10:48:43 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:36.563 10:48:43 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.563 10:48:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.563 ************************************ 00:06:36.563 START TEST rpc_integrity 00:06:36.563 ************************************ 00:06:36.563 10:48:43 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:36.563 10:48:43 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:36.563 10:48:43 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.563 10:48:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.563 10:48:43 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.563 10:48:43 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:06:36.563 10:48:43 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:36.563 10:48:43 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:36.563 10:48:43 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:36.563 10:48:43 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.563 10:48:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.563 10:48:43 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.563 10:48:43 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:36.563 10:48:43 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:36.563 10:48:43 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.563 10:48:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.563 10:48:43 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.563 10:48:43 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:36.563 { 00:06:36.563 "name": "Malloc0", 00:06:36.563 "aliases": [ 00:06:36.563 "e9961fe7-ff76-44c5-8bd2-c599a60314bf" 00:06:36.563 ], 00:06:36.563 "product_name": "Malloc disk", 00:06:36.563 "block_size": 512, 00:06:36.563 "num_blocks": 16384, 00:06:36.564 "uuid": "e9961fe7-ff76-44c5-8bd2-c599a60314bf", 00:06:36.564 "assigned_rate_limits": { 00:06:36.564 "rw_ios_per_sec": 0, 00:06:36.564 "rw_mbytes_per_sec": 0, 00:06:36.564 "r_mbytes_per_sec": 0, 00:06:36.564 "w_mbytes_per_sec": 0 00:06:36.564 }, 00:06:36.564 "claimed": false, 00:06:36.564 "zoned": false, 00:06:36.564 "supported_io_types": { 00:06:36.564 "read": true, 00:06:36.564 "write": true, 00:06:36.564 "unmap": true, 00:06:36.564 "flush": true, 00:06:36.564 "reset": true, 00:06:36.564 "nvme_admin": false, 00:06:36.564 "nvme_io": false, 00:06:36.564 "nvme_io_md": false, 00:06:36.564 "write_zeroes": true, 00:06:36.564 "zcopy": true, 00:06:36.564 "get_zone_info": false, 00:06:36.564 "zone_management": 
false, 00:06:36.564 "zone_append": false, 00:06:36.564 "compare": false, 00:06:36.564 "compare_and_write": false, 00:06:36.564 "abort": true, 00:06:36.564 "seek_hole": false, 00:06:36.564 "seek_data": false, 00:06:36.564 "copy": true, 00:06:36.564 "nvme_iov_md": false 00:06:36.564 }, 00:06:36.564 "memory_domains": [ 00:06:36.564 { 00:06:36.564 "dma_device_id": "system", 00:06:36.564 "dma_device_type": 1 00:06:36.564 }, 00:06:36.564 { 00:06:36.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:36.564 "dma_device_type": 2 00:06:36.564 } 00:06:36.564 ], 00:06:36.564 "driver_specific": {} 00:06:36.564 } 00:06:36.564 ]' 00:06:36.564 10:48:43 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:36.564 10:48:43 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:36.564 10:48:43 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:36.564 10:48:43 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.564 10:48:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.564 [2024-07-25 10:48:43.615991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:36.564 [2024-07-25 10:48:43.616066] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:36.564 [2024-07-25 10:48:43.616097] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003fc80 00:06:36.564 [2024-07-25 10:48:43.616116] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:36.564 [2024-07-25 10:48:43.618882] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:36.564 [2024-07-25 10:48:43.618934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:36.564 Passthru0 00:06:36.564 10:48:43 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.564 10:48:43 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 
00:06:36.564 10:48:43 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.564 10:48:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.564 10:48:43 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.564 10:48:43 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:36.564 { 00:06:36.564 "name": "Malloc0", 00:06:36.564 "aliases": [ 00:06:36.564 "e9961fe7-ff76-44c5-8bd2-c599a60314bf" 00:06:36.564 ], 00:06:36.564 "product_name": "Malloc disk", 00:06:36.564 "block_size": 512, 00:06:36.564 "num_blocks": 16384, 00:06:36.564 "uuid": "e9961fe7-ff76-44c5-8bd2-c599a60314bf", 00:06:36.564 "assigned_rate_limits": { 00:06:36.564 "rw_ios_per_sec": 0, 00:06:36.564 "rw_mbytes_per_sec": 0, 00:06:36.564 "r_mbytes_per_sec": 0, 00:06:36.564 "w_mbytes_per_sec": 0 00:06:36.564 }, 00:06:36.564 "claimed": true, 00:06:36.564 "claim_type": "exclusive_write", 00:06:36.564 "zoned": false, 00:06:36.564 "supported_io_types": { 00:06:36.564 "read": true, 00:06:36.564 "write": true, 00:06:36.564 "unmap": true, 00:06:36.564 "flush": true, 00:06:36.564 "reset": true, 00:06:36.564 "nvme_admin": false, 00:06:36.564 "nvme_io": false, 00:06:36.564 "nvme_io_md": false, 00:06:36.564 "write_zeroes": true, 00:06:36.564 "zcopy": true, 00:06:36.564 "get_zone_info": false, 00:06:36.564 "zone_management": false, 00:06:36.564 "zone_append": false, 00:06:36.564 "compare": false, 00:06:36.564 "compare_and_write": false, 00:06:36.564 "abort": true, 00:06:36.564 "seek_hole": false, 00:06:36.564 "seek_data": false, 00:06:36.564 "copy": true, 00:06:36.564 "nvme_iov_md": false 00:06:36.564 }, 00:06:36.564 "memory_domains": [ 00:06:36.564 { 00:06:36.564 "dma_device_id": "system", 00:06:36.564 "dma_device_type": 1 00:06:36.564 }, 00:06:36.564 { 00:06:36.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:36.564 "dma_device_type": 2 00:06:36.564 } 00:06:36.564 ], 00:06:36.564 "driver_specific": {} 00:06:36.564 }, 00:06:36.564 { 00:06:36.564 
"name": "Passthru0", 00:06:36.564 "aliases": [ 00:06:36.564 "288d48d3-5ca5-54bc-a995-dce1d7d61bfd" 00:06:36.564 ], 00:06:36.564 "product_name": "passthru", 00:06:36.564 "block_size": 512, 00:06:36.564 "num_blocks": 16384, 00:06:36.564 "uuid": "288d48d3-5ca5-54bc-a995-dce1d7d61bfd", 00:06:36.564 "assigned_rate_limits": { 00:06:36.564 "rw_ios_per_sec": 0, 00:06:36.564 "rw_mbytes_per_sec": 0, 00:06:36.564 "r_mbytes_per_sec": 0, 00:06:36.564 "w_mbytes_per_sec": 0 00:06:36.564 }, 00:06:36.564 "claimed": false, 00:06:36.564 "zoned": false, 00:06:36.564 "supported_io_types": { 00:06:36.564 "read": true, 00:06:36.564 "write": true, 00:06:36.564 "unmap": true, 00:06:36.564 "flush": true, 00:06:36.564 "reset": true, 00:06:36.564 "nvme_admin": false, 00:06:36.564 "nvme_io": false, 00:06:36.564 "nvme_io_md": false, 00:06:36.564 "write_zeroes": true, 00:06:36.564 "zcopy": true, 00:06:36.564 "get_zone_info": false, 00:06:36.564 "zone_management": false, 00:06:36.564 "zone_append": false, 00:06:36.564 "compare": false, 00:06:36.564 "compare_and_write": false, 00:06:36.564 "abort": true, 00:06:36.564 "seek_hole": false, 00:06:36.564 "seek_data": false, 00:06:36.564 "copy": true, 00:06:36.564 "nvme_iov_md": false 00:06:36.564 }, 00:06:36.564 "memory_domains": [ 00:06:36.564 { 00:06:36.564 "dma_device_id": "system", 00:06:36.564 "dma_device_type": 1 00:06:36.564 }, 00:06:36.564 { 00:06:36.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:36.564 "dma_device_type": 2 00:06:36.564 } 00:06:36.564 ], 00:06:36.564 "driver_specific": { 00:06:36.564 "passthru": { 00:06:36.564 "name": "Passthru0", 00:06:36.564 "base_bdev_name": "Malloc0" 00:06:36.564 } 00:06:36.564 } 00:06:36.564 } 00:06:36.564 ]' 00:06:36.564 10:48:43 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:36.822 10:48:43 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:36.822 10:48:43 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:36.822 10:48:43 rpc.rpc_integrity -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.822 10:48:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.822 10:48:43 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.822 10:48:43 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:36.822 10:48:43 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.822 10:48:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.822 10:48:43 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.822 10:48:43 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:36.822 10:48:43 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.822 10:48:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.822 10:48:43 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.822 10:48:43 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:36.822 10:48:43 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:36.822 10:48:43 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:36.822 00:06:36.822 real 0m0.292s 00:06:36.822 user 0m0.169s 00:06:36.822 sys 0m0.041s 00:06:36.822 10:48:43 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.822 10:48:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:36.822 ************************************ 00:06:36.822 END TEST rpc_integrity 00:06:36.822 ************************************ 00:06:36.822 10:48:43 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:36.822 10:48:43 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:36.822 10:48:43 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.822 10:48:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.822 ************************************ 00:06:36.822 START TEST rpc_plugins 00:06:36.822 
************************************ 00:06:36.822 10:48:43 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:36.822 10:48:43 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:36.822 10:48:43 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.822 10:48:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:36.822 10:48:43 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.822 10:48:43 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:36.822 10:48:43 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:36.822 10:48:43 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.822 10:48:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:36.822 10:48:43 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.822 10:48:43 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:36.822 { 00:06:36.822 "name": "Malloc1", 00:06:36.822 "aliases": [ 00:06:36.822 "28ae011a-7adc-4485-bef8-f1f3ce1d1c8d" 00:06:36.822 ], 00:06:36.822 "product_name": "Malloc disk", 00:06:36.822 "block_size": 4096, 00:06:36.822 "num_blocks": 256, 00:06:36.822 "uuid": "28ae011a-7adc-4485-bef8-f1f3ce1d1c8d", 00:06:36.822 "assigned_rate_limits": { 00:06:36.822 "rw_ios_per_sec": 0, 00:06:36.822 "rw_mbytes_per_sec": 0, 00:06:36.822 "r_mbytes_per_sec": 0, 00:06:36.822 "w_mbytes_per_sec": 0 00:06:36.822 }, 00:06:36.822 "claimed": false, 00:06:36.822 "zoned": false, 00:06:36.822 "supported_io_types": { 00:06:36.822 "read": true, 00:06:36.822 "write": true, 00:06:36.822 "unmap": true, 00:06:36.822 "flush": true, 00:06:36.822 "reset": true, 00:06:36.822 "nvme_admin": false, 00:06:36.822 "nvme_io": false, 00:06:36.822 "nvme_io_md": false, 00:06:36.822 "write_zeroes": true, 00:06:36.822 "zcopy": true, 00:06:36.822 "get_zone_info": false, 00:06:36.822 "zone_management": false, 00:06:36.822 "zone_append": false, 
00:06:36.822 "compare": false, 00:06:36.822 "compare_and_write": false, 00:06:36.822 "abort": true, 00:06:36.822 "seek_hole": false, 00:06:36.822 "seek_data": false, 00:06:36.822 "copy": true, 00:06:36.822 "nvme_iov_md": false 00:06:36.822 }, 00:06:36.822 "memory_domains": [ 00:06:36.822 { 00:06:36.822 "dma_device_id": "system", 00:06:36.822 "dma_device_type": 1 00:06:36.822 }, 00:06:36.822 { 00:06:36.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:36.823 "dma_device_type": 2 00:06:36.823 } 00:06:36.823 ], 00:06:36.823 "driver_specific": {} 00:06:36.823 } 00:06:36.823 ]' 00:06:36.823 10:48:43 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:36.823 10:48:43 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:36.823 10:48:43 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:36.823 10:48:43 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.823 10:48:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:36.823 10:48:43 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.823 10:48:43 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:36.823 10:48:43 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.823 10:48:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:36.823 10:48:43 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.823 10:48:43 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:36.823 10:48:43 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:37.080 10:48:43 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:37.080 00:06:37.080 real 0m0.137s 00:06:37.080 user 0m0.084s 00:06:37.080 sys 0m0.018s 00:06:37.080 10:48:43 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.080 10:48:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:37.080 ************************************ 00:06:37.080 END TEST 
rpc_plugins 00:06:37.080 ************************************ 00:06:37.080 10:48:44 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:37.080 10:48:44 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.080 10:48:44 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.080 10:48:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.080 ************************************ 00:06:37.080 START TEST rpc_trace_cmd_test 00:06:37.080 ************************************ 00:06:37.080 10:48:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:37.080 10:48:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:37.080 10:48:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:37.080 10:48:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.080 10:48:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.080 10:48:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.080 10:48:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:37.080 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3466128", 00:06:37.080 "tpoint_group_mask": "0x8", 00:06:37.080 "iscsi_conn": { 00:06:37.080 "mask": "0x2", 00:06:37.080 "tpoint_mask": "0x0" 00:06:37.080 }, 00:06:37.080 "scsi": { 00:06:37.080 "mask": "0x4", 00:06:37.080 "tpoint_mask": "0x0" 00:06:37.080 }, 00:06:37.080 "bdev": { 00:06:37.080 "mask": "0x8", 00:06:37.080 "tpoint_mask": "0xffffffffffffffff" 00:06:37.080 }, 00:06:37.080 "nvmf_rdma": { 00:06:37.080 "mask": "0x10", 00:06:37.080 "tpoint_mask": "0x0" 00:06:37.080 }, 00:06:37.080 "nvmf_tcp": { 00:06:37.080 "mask": "0x20", 00:06:37.080 "tpoint_mask": "0x0" 00:06:37.080 }, 00:06:37.080 "ftl": { 00:06:37.080 "mask": "0x40", 00:06:37.080 "tpoint_mask": "0x0" 00:06:37.080 }, 00:06:37.080 "blobfs": { 00:06:37.080 "mask": "0x80", 00:06:37.080 "tpoint_mask": "0x0" 
00:06:37.080 }, 00:06:37.080 "dsa": { 00:06:37.080 "mask": "0x200", 00:06:37.080 "tpoint_mask": "0x0" 00:06:37.080 }, 00:06:37.080 "thread": { 00:06:37.080 "mask": "0x400", 00:06:37.080 "tpoint_mask": "0x0" 00:06:37.080 }, 00:06:37.080 "nvme_pcie": { 00:06:37.080 "mask": "0x800", 00:06:37.080 "tpoint_mask": "0x0" 00:06:37.080 }, 00:06:37.080 "iaa": { 00:06:37.080 "mask": "0x1000", 00:06:37.080 "tpoint_mask": "0x0" 00:06:37.080 }, 00:06:37.080 "nvme_tcp": { 00:06:37.080 "mask": "0x2000", 00:06:37.080 "tpoint_mask": "0x0" 00:06:37.080 }, 00:06:37.080 "bdev_nvme": { 00:06:37.080 "mask": "0x4000", 00:06:37.080 "tpoint_mask": "0x0" 00:06:37.080 }, 00:06:37.080 "sock": { 00:06:37.080 "mask": "0x8000", 00:06:37.080 "tpoint_mask": "0x0" 00:06:37.080 } 00:06:37.080 }' 00:06:37.080 10:48:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:37.080 10:48:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:37.080 10:48:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:37.080 10:48:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:37.080 10:48:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:37.337 10:48:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:37.337 10:48:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:37.337 10:48:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:37.337 10:48:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:37.337 10:48:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:37.337 00:06:37.337 real 0m0.235s 00:06:37.337 user 0m0.195s 00:06:37.337 sys 0m0.030s 00:06:37.337 10:48:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.337 10:48:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:37.337 ************************************ 
00:06:37.337 END TEST rpc_trace_cmd_test 00:06:37.337 ************************************ 00:06:37.337 10:48:44 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:37.337 10:48:44 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:37.337 10:48:44 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:37.337 10:48:44 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.337 10:48:44 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.337 10:48:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.337 ************************************ 00:06:37.337 START TEST rpc_daemon_integrity 00:06:37.337 ************************************ 00:06:37.337 10:48:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:37.337 10:48:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:37.337 10:48:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.337 10:48:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.337 10:48:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.337 10:48:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:37.337 10:48:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:37.337 10:48:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:37.337 10:48:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:37.337 10:48:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.337 10:48:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.337 10:48:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.337 10:48:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:37.337 10:48:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:37.337 10:48:44 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.337 10:48:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.337 10:48:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.337 10:48:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:37.337 { 00:06:37.337 "name": "Malloc2", 00:06:37.337 "aliases": [ 00:06:37.337 "fcafba8e-5e34-4b81-b5e5-571448a74a70" 00:06:37.337 ], 00:06:37.337 "product_name": "Malloc disk", 00:06:37.337 "block_size": 512, 00:06:37.337 "num_blocks": 16384, 00:06:37.337 "uuid": "fcafba8e-5e34-4b81-b5e5-571448a74a70", 00:06:37.337 "assigned_rate_limits": { 00:06:37.337 "rw_ios_per_sec": 0, 00:06:37.337 "rw_mbytes_per_sec": 0, 00:06:37.337 "r_mbytes_per_sec": 0, 00:06:37.337 "w_mbytes_per_sec": 0 00:06:37.337 }, 00:06:37.337 "claimed": false, 00:06:37.337 "zoned": false, 00:06:37.337 "supported_io_types": { 00:06:37.337 "read": true, 00:06:37.337 "write": true, 00:06:37.337 "unmap": true, 00:06:37.337 "flush": true, 00:06:37.337 "reset": true, 00:06:37.337 "nvme_admin": false, 00:06:37.337 "nvme_io": false, 00:06:37.337 "nvme_io_md": false, 00:06:37.337 "write_zeroes": true, 00:06:37.337 "zcopy": true, 00:06:37.337 "get_zone_info": false, 00:06:37.337 "zone_management": false, 00:06:37.337 "zone_append": false, 00:06:37.337 "compare": false, 00:06:37.337 "compare_and_write": false, 00:06:37.337 "abort": true, 00:06:37.337 "seek_hole": false, 00:06:37.337 "seek_data": false, 00:06:37.337 "copy": true, 00:06:37.337 "nvme_iov_md": false 00:06:37.337 }, 00:06:37.337 "memory_domains": [ 00:06:37.337 { 00:06:37.337 "dma_device_id": "system", 00:06:37.337 "dma_device_type": 1 00:06:37.337 }, 00:06:37.337 { 00:06:37.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:37.337 "dma_device_type": 2 00:06:37.337 } 00:06:37.337 ], 00:06:37.337 "driver_specific": {} 00:06:37.337 } 00:06:37.337 ]' 00:06:37.337 10:48:44 rpc.rpc_daemon_integrity -- 
rpc/rpc.sh@17 -- # jq length 00:06:37.595 10:48:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:37.595 10:48:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:37.595 10:48:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.595 10:48:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.595 [2024-07-25 10:48:44.497547] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:37.595 [2024-07-25 10:48:44.497608] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:37.595 [2024-07-25 10:48:44.497634] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:06:37.595 [2024-07-25 10:48:44.497653] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:37.595 [2024-07-25 10:48:44.500388] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:37.595 [2024-07-25 10:48:44.500424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:37.595 Passthru0 00:06:37.595 10:48:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.595 10:48:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:37.595 10:48:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.595 10:48:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.595 10:48:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.595 10:48:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:37.595 { 00:06:37.595 "name": "Malloc2", 00:06:37.595 "aliases": [ 00:06:37.595 "fcafba8e-5e34-4b81-b5e5-571448a74a70" 00:06:37.595 ], 00:06:37.595 "product_name": "Malloc disk", 00:06:37.595 "block_size": 512, 00:06:37.595 "num_blocks": 16384, 00:06:37.595 
"uuid": "fcafba8e-5e34-4b81-b5e5-571448a74a70", 00:06:37.595 "assigned_rate_limits": { 00:06:37.595 "rw_ios_per_sec": 0, 00:06:37.595 "rw_mbytes_per_sec": 0, 00:06:37.595 "r_mbytes_per_sec": 0, 00:06:37.595 "w_mbytes_per_sec": 0 00:06:37.595 }, 00:06:37.595 "claimed": true, 00:06:37.595 "claim_type": "exclusive_write", 00:06:37.595 "zoned": false, 00:06:37.595 "supported_io_types": { 00:06:37.595 "read": true, 00:06:37.595 "write": true, 00:06:37.595 "unmap": true, 00:06:37.595 "flush": true, 00:06:37.595 "reset": true, 00:06:37.595 "nvme_admin": false, 00:06:37.595 "nvme_io": false, 00:06:37.595 "nvme_io_md": false, 00:06:37.595 "write_zeroes": true, 00:06:37.595 "zcopy": true, 00:06:37.595 "get_zone_info": false, 00:06:37.595 "zone_management": false, 00:06:37.595 "zone_append": false, 00:06:37.595 "compare": false, 00:06:37.595 "compare_and_write": false, 00:06:37.595 "abort": true, 00:06:37.595 "seek_hole": false, 00:06:37.595 "seek_data": false, 00:06:37.595 "copy": true, 00:06:37.595 "nvme_iov_md": false 00:06:37.595 }, 00:06:37.595 "memory_domains": [ 00:06:37.595 { 00:06:37.595 "dma_device_id": "system", 00:06:37.595 "dma_device_type": 1 00:06:37.595 }, 00:06:37.595 { 00:06:37.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:37.595 "dma_device_type": 2 00:06:37.595 } 00:06:37.595 ], 00:06:37.595 "driver_specific": {} 00:06:37.595 }, 00:06:37.595 { 00:06:37.595 "name": "Passthru0", 00:06:37.595 "aliases": [ 00:06:37.595 "01557f19-8767-561a-86b0-3d4c8d7740e7" 00:06:37.595 ], 00:06:37.595 "product_name": "passthru", 00:06:37.595 "block_size": 512, 00:06:37.595 "num_blocks": 16384, 00:06:37.595 "uuid": "01557f19-8767-561a-86b0-3d4c8d7740e7", 00:06:37.595 "assigned_rate_limits": { 00:06:37.595 "rw_ios_per_sec": 0, 00:06:37.595 "rw_mbytes_per_sec": 0, 00:06:37.595 "r_mbytes_per_sec": 0, 00:06:37.595 "w_mbytes_per_sec": 0 00:06:37.595 }, 00:06:37.595 "claimed": false, 00:06:37.595 "zoned": false, 00:06:37.595 "supported_io_types": { 00:06:37.595 "read": true, 
00:06:37.595 "write": true, 00:06:37.595 "unmap": true, 00:06:37.595 "flush": true, 00:06:37.595 "reset": true, 00:06:37.595 "nvme_admin": false, 00:06:37.595 "nvme_io": false, 00:06:37.595 "nvme_io_md": false, 00:06:37.595 "write_zeroes": true, 00:06:37.595 "zcopy": true, 00:06:37.595 "get_zone_info": false, 00:06:37.595 "zone_management": false, 00:06:37.595 "zone_append": false, 00:06:37.595 "compare": false, 00:06:37.595 "compare_and_write": false, 00:06:37.595 "abort": true, 00:06:37.595 "seek_hole": false, 00:06:37.595 "seek_data": false, 00:06:37.595 "copy": true, 00:06:37.595 "nvme_iov_md": false 00:06:37.595 }, 00:06:37.595 "memory_domains": [ 00:06:37.595 { 00:06:37.595 "dma_device_id": "system", 00:06:37.595 "dma_device_type": 1 00:06:37.595 }, 00:06:37.595 { 00:06:37.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:37.595 "dma_device_type": 2 00:06:37.595 } 00:06:37.595 ], 00:06:37.595 "driver_specific": { 00:06:37.595 "passthru": { 00:06:37.595 "name": "Passthru0", 00:06:37.595 "base_bdev_name": "Malloc2" 00:06:37.595 } 00:06:37.595 } 00:06:37.595 } 00:06:37.595 ]' 00:06:37.595 10:48:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:37.595 10:48:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:37.595 10:48:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:37.595 10:48:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.595 10:48:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.595 10:48:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.595 10:48:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:37.595 10:48:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.595 10:48:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.595 10:48:44 rpc.rpc_daemon_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.595 10:48:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:37.595 10:48:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.595 10:48:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.595 10:48:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.595 10:48:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:37.595 10:48:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:37.595 10:48:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:37.595 00:06:37.595 real 0m0.300s 00:06:37.595 user 0m0.176s 00:06:37.595 sys 0m0.047s 00:06:37.595 10:48:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.595 10:48:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:37.595 ************************************ 00:06:37.595 END TEST rpc_daemon_integrity 00:06:37.595 ************************************ 00:06:37.595 10:48:44 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:37.595 10:48:44 rpc -- rpc/rpc.sh@84 -- # killprocess 3466128 00:06:37.595 10:48:44 rpc -- common/autotest_common.sh@950 -- # '[' -z 3466128 ']' 00:06:37.595 10:48:44 rpc -- common/autotest_common.sh@954 -- # kill -0 3466128 00:06:37.595 10:48:44 rpc -- common/autotest_common.sh@955 -- # uname 00:06:37.595 10:48:44 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:37.595 10:48:44 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3466128 00:06:37.853 10:48:44 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:37.853 10:48:44 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:37.853 10:48:44 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3466128' 00:06:37.853 killing process with pid 3466128 
00:06:37.853 10:48:44 rpc -- common/autotest_common.sh@969 -- # kill 3466128 00:06:37.853 10:48:44 rpc -- common/autotest_common.sh@974 -- # wait 3466128 00:06:41.134 00:06:41.134 real 0m6.521s 00:06:41.134 user 0m7.002s 00:06:41.134 sys 0m1.073s 00:06:41.134 10:48:48 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.134 10:48:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.134 ************************************ 00:06:41.134 END TEST rpc 00:06:41.134 ************************************ 00:06:41.134 10:48:48 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:41.134 10:48:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.134 10:48:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.134 10:48:48 -- common/autotest_common.sh@10 -- # set +x 00:06:41.134 ************************************ 00:06:41.134 START TEST skip_rpc 00:06:41.134 ************************************ 00:06:41.134 10:48:48 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:41.134 * Looking for test storage... 
00:06:41.134 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:06:41.134 10:48:48 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:06:41.134 10:48:48 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:06:41.134 10:48:48 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:41.134 10:48:48 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.134 10:48:48 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.134 10:48:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.134 ************************************ 00:06:41.134 START TEST skip_rpc 00:06:41.134 ************************************ 00:06:41.134 10:48:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:41.134 10:48:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3467366 00:06:41.134 10:48:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:41.134 10:48:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:41.134 10:48:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:41.393 [2024-07-25 10:48:48.357930] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:41.393 [2024-07-25 10:48:48.358049] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3467366 ] 00:06:41.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.393 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:41.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.393 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:41.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.393 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:41.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.393 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:41.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.393 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:41.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.393 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:41.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.393 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:41.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.393 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:41.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.393 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:41.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.393 EAL: Requested device 0000:3d:02.1 cannot be used 00:06:41.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.393 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:41.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.393 EAL: Requested device 0000:3d:02.3 cannot be used 
00:06:41.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.393 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:41.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.393 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:41.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.393 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:41.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.393 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:41.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.393 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:41.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.393 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:41.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.393 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:41.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.393 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:41.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.393 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:41.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.393 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:41.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.393 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:41.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.393 EAL: Requested device 0000:3f:01.7 cannot be used 00:06:41.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.393 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:41.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.393 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:41.393 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.393 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:41.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.393 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:41.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.393 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:41.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.393 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:41.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.393 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:41.393 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:41.394 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:41.652 [2024-07-25 10:48:48.583765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.911 [2024-07-25 10:48:48.860292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.140 10:48:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:46.140 10:48:53 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:46.140 10:48:53 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:46.140 10:48:53 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:46.140 10:48:53 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:46.140 10:48:53 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:46.140 10:48:53 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:46.140 10:48:53 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:46.140 10:48:53 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.140 10:48:53 skip_rpc.skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:06:46.140 10:48:53 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:46.140 10:48:53 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:46.140 10:48:53 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:46.140 10:48:53 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:46.140 10:48:53 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:46.140 10:48:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:46.140 10:48:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3467366 00:06:46.140 10:48:53 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 3467366 ']' 00:06:46.140 10:48:53 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 3467366 00:06:46.140 10:48:53 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:46.140 10:48:53 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:46.399 10:48:53 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3467366 00:06:46.399 10:48:53 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:46.399 10:48:53 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:46.399 10:48:53 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3467366' 00:06:46.399 killing process with pid 3467366 00:06:46.399 10:48:53 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 3467366 00:06:46.399 10:48:53 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 3467366 00:06:49.687 00:06:49.687 real 0m8.351s 00:06:49.687 user 0m7.815s 00:06:49.687 sys 0m0.543s 00:06:49.687 10:48:56 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.687 10:48:56 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.687 
************************************ 00:06:49.687 END TEST skip_rpc 00:06:49.687 ************************************ 00:06:49.687 10:48:56 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:49.687 10:48:56 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:49.687 10:48:56 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.687 10:48:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.687 ************************************ 00:06:49.687 START TEST skip_rpc_with_json 00:06:49.687 ************************************ 00:06:49.687 10:48:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:49.687 10:48:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:49.687 10:48:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3468833 00:06:49.687 10:48:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:49.687 10:48:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:49.687 10:48:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3468833 00:06:49.687 10:48:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 3468833 ']' 00:06:49.687 10:48:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.687 10:48:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.687 10:48:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:49.687 10:48:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.687 10:48:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:49.687 [2024-07-25 10:48:56.790048] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:49.687 [2024-07-25 10:48:56.790178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3468833 ] 00:06:49.946 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:49.946 EAL: Requested device 0000:3d:01.0 cannot be used 00:06:49.946 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:49.946 EAL: Requested device 0000:3d:01.1 cannot be used 00:06:49.946 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:49.946 EAL: Requested device 0000:3d:01.2 cannot be used 00:06:49.946 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:49.946 EAL: Requested device 0000:3d:01.3 cannot be used 00:06:49.946 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:49.946 EAL: Requested device 0000:3d:01.4 cannot be used 00:06:49.946 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:49.946 EAL: Requested device 0000:3d:01.5 cannot be used 00:06:49.946 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:49.946 EAL: Requested device 0000:3d:01.6 cannot be used 00:06:49.946 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:49.946 EAL: Requested device 0000:3d:01.7 cannot be used 00:06:49.946 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:49.946 EAL: Requested device 0000:3d:02.0 cannot be used 00:06:49.946 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:49.946 EAL: Requested device 
0000:3d:02.1 cannot be used 00:06:49.946 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:49.946 EAL: Requested device 0000:3d:02.2 cannot be used 00:06:49.946 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:49.946 EAL: Requested device 0000:3d:02.3 cannot be used 00:06:49.946 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:49.946 EAL: Requested device 0000:3d:02.4 cannot be used 00:06:49.946 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:49.946 EAL: Requested device 0000:3d:02.5 cannot be used 00:06:49.946 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:49.946 EAL: Requested device 0000:3d:02.6 cannot be used 00:06:49.946 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:49.946 EAL: Requested device 0000:3d:02.7 cannot be used 00:06:49.946 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:49.946 EAL: Requested device 0000:3f:01.0 cannot be used 00:06:49.946 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:49.946 EAL: Requested device 0000:3f:01.1 cannot be used 00:06:49.946 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:49.946 EAL: Requested device 0000:3f:01.2 cannot be used 00:06:49.946 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:49.946 EAL: Requested device 0000:3f:01.3 cannot be used 00:06:49.946 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:49.946 EAL: Requested device 0000:3f:01.4 cannot be used 00:06:49.946 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:49.946 EAL: Requested device 0000:3f:01.5 cannot be used 00:06:49.946 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:49.946 EAL: Requested device 0000:3f:01.6 cannot be used 00:06:49.946 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:49.946 EAL: Requested device 0000:3f:01.7 cannot be 
used 00:06:49.946 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:49.946 EAL: Requested device 0000:3f:02.0 cannot be used 00:06:49.946 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:49.946 EAL: Requested device 0000:3f:02.1 cannot be used 00:06:49.946 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:49.946 EAL: Requested device 0000:3f:02.2 cannot be used 00:06:49.946 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:49.946 EAL: Requested device 0000:3f:02.3 cannot be used 00:06:49.946 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:49.946 EAL: Requested device 0000:3f:02.4 cannot be used 00:06:49.946 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:49.946 EAL: Requested device 0000:3f:02.5 cannot be used 00:06:49.946 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:49.946 EAL: Requested device 0000:3f:02.6 cannot be used 00:06:49.946 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:06:49.946 EAL: Requested device 0000:3f:02.7 cannot be used 00:06:49.946 [2024-07-25 10:48:57.018261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.205 [2024-07-25 10:48:57.307617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.617 10:48:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.618 10:48:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:51.618 10:48:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:51.618 10:48:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.618 10:48:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:51.618 [2024-07-25 10:48:58.534472] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:51.618 
request: 00:06:51.618 { 00:06:51.618 "trtype": "tcp", 00:06:51.618 "method": "nvmf_get_transports", 00:06:51.618 "req_id": 1 00:06:51.618 } 00:06:51.618 Got JSON-RPC error response 00:06:51.618 response: 00:06:51.618 { 00:06:51.618 "code": -19, 00:06:51.618 "message": "No such device" 00:06:51.618 } 00:06:51.618 10:48:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:51.618 10:48:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:51.618 10:48:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.618 10:48:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:51.618 [2024-07-25 10:48:58.546607] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:51.618 10:48:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.618 10:48:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:51.618 10:48:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.618 10:48:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:51.618 10:48:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.618 10:48:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:06:51.618 { 00:06:51.618 "subsystems": [ 00:06:51.618 { 00:06:51.618 "subsystem": "keyring", 00:06:51.618 "config": [] 00:06:51.618 }, 00:06:51.618 { 00:06:51.618 "subsystem": "iobuf", 00:06:51.618 "config": [ 00:06:51.618 { 00:06:51.618 "method": "iobuf_set_options", 00:06:51.618 "params": { 00:06:51.618 "small_pool_count": 8192, 00:06:51.618 "large_pool_count": 1024, 00:06:51.618 "small_bufsize": 8192, 00:06:51.618 "large_bufsize": 135168 00:06:51.618 } 00:06:51.618 } 00:06:51.618 ] 00:06:51.618 }, 00:06:51.618 { 
00:06:51.618 "subsystem": "sock", 00:06:51.618 "config": [ 00:06:51.618 { 00:06:51.618 "method": "sock_set_default_impl", 00:06:51.618 "params": { 00:06:51.618 "impl_name": "posix" 00:06:51.618 } 00:06:51.618 }, 00:06:51.618 { 00:06:51.618 "method": "sock_impl_set_options", 00:06:51.618 "params": { 00:06:51.618 "impl_name": "ssl", 00:06:51.618 "recv_buf_size": 4096, 00:06:51.618 "send_buf_size": 4096, 00:06:51.618 "enable_recv_pipe": true, 00:06:51.618 "enable_quickack": false, 00:06:51.618 "enable_placement_id": 0, 00:06:51.618 "enable_zerocopy_send_server": true, 00:06:51.618 "enable_zerocopy_send_client": false, 00:06:51.618 "zerocopy_threshold": 0, 00:06:51.618 "tls_version": 0, 00:06:51.618 "enable_ktls": false 00:06:51.618 } 00:06:51.618 }, 00:06:51.618 { 00:06:51.618 "method": "sock_impl_set_options", 00:06:51.618 "params": { 00:06:51.618 "impl_name": "posix", 00:06:51.618 "recv_buf_size": 2097152, 00:06:51.618 "send_buf_size": 2097152, 00:06:51.618 "enable_recv_pipe": true, 00:06:51.618 "enable_quickack": false, 00:06:51.618 "enable_placement_id": 0, 00:06:51.618 "enable_zerocopy_send_server": true, 00:06:51.618 "enable_zerocopy_send_client": false, 00:06:51.618 "zerocopy_threshold": 0, 00:06:51.618 "tls_version": 0, 00:06:51.618 "enable_ktls": false 00:06:51.618 } 00:06:51.618 } 00:06:51.618 ] 00:06:51.618 }, 00:06:51.618 { 00:06:51.618 "subsystem": "vmd", 00:06:51.618 "config": [] 00:06:51.618 }, 00:06:51.618 { 00:06:51.618 "subsystem": "accel", 00:06:51.618 "config": [ 00:06:51.618 { 00:06:51.618 "method": "accel_set_options", 00:06:51.618 "params": { 00:06:51.618 "small_cache_size": 128, 00:06:51.618 "large_cache_size": 16, 00:06:51.618 "task_count": 2048, 00:06:51.618 "sequence_count": 2048, 00:06:51.618 "buf_count": 2048 00:06:51.618 } 00:06:51.618 } 00:06:51.618 ] 00:06:51.618 }, 00:06:51.618 { 00:06:51.618 "subsystem": "bdev", 00:06:51.618 "config": [ 00:06:51.618 { 00:06:51.618 "method": "bdev_set_options", 00:06:51.618 "params": { 00:06:51.618 
"bdev_io_pool_size": 65535, 00:06:51.618 "bdev_io_cache_size": 256, 00:06:51.618 "bdev_auto_examine": true, 00:06:51.618 "iobuf_small_cache_size": 128, 00:06:51.618 "iobuf_large_cache_size": 16 00:06:51.618 } 00:06:51.618 }, 00:06:51.618 { 00:06:51.618 "method": "bdev_raid_set_options", 00:06:51.618 "params": { 00:06:51.618 "process_window_size_kb": 1024, 00:06:51.618 "process_max_bandwidth_mb_sec": 0 00:06:51.618 } 00:06:51.618 }, 00:06:51.618 { 00:06:51.618 "method": "bdev_iscsi_set_options", 00:06:51.618 "params": { 00:06:51.618 "timeout_sec": 30 00:06:51.618 } 00:06:51.618 }, 00:06:51.618 { 00:06:51.618 "method": "bdev_nvme_set_options", 00:06:51.618 "params": { 00:06:51.618 "action_on_timeout": "none", 00:06:51.618 "timeout_us": 0, 00:06:51.618 "timeout_admin_us": 0, 00:06:51.618 "keep_alive_timeout_ms": 10000, 00:06:51.618 "arbitration_burst": 0, 00:06:51.618 "low_priority_weight": 0, 00:06:51.618 "medium_priority_weight": 0, 00:06:51.618 "high_priority_weight": 0, 00:06:51.618 "nvme_adminq_poll_period_us": 10000, 00:06:51.618 "nvme_ioq_poll_period_us": 0, 00:06:51.618 "io_queue_requests": 0, 00:06:51.618 "delay_cmd_submit": true, 00:06:51.618 "transport_retry_count": 4, 00:06:51.618 "bdev_retry_count": 3, 00:06:51.618 "transport_ack_timeout": 0, 00:06:51.618 "ctrlr_loss_timeout_sec": 0, 00:06:51.618 "reconnect_delay_sec": 0, 00:06:51.618 "fast_io_fail_timeout_sec": 0, 00:06:51.618 "disable_auto_failback": false, 00:06:51.618 "generate_uuids": false, 00:06:51.618 "transport_tos": 0, 00:06:51.618 "nvme_error_stat": false, 00:06:51.618 "rdma_srq_size": 0, 00:06:51.618 "io_path_stat": false, 00:06:51.618 "allow_accel_sequence": false, 00:06:51.618 "rdma_max_cq_size": 0, 00:06:51.618 "rdma_cm_event_timeout_ms": 0, 00:06:51.618 "dhchap_digests": [ 00:06:51.618 "sha256", 00:06:51.618 "sha384", 00:06:51.618 "sha512" 00:06:51.618 ], 00:06:51.618 "dhchap_dhgroups": [ 00:06:51.618 "null", 00:06:51.618 "ffdhe2048", 00:06:51.618 "ffdhe3072", 00:06:51.618 "ffdhe4096", 
00:06:51.618 "ffdhe6144", 00:06:51.618 "ffdhe8192" 00:06:51.618 ] 00:06:51.618 } 00:06:51.618 }, 00:06:51.618 { 00:06:51.618 "method": "bdev_nvme_set_hotplug", 00:06:51.618 "params": { 00:06:51.618 "period_us": 100000, 00:06:51.618 "enable": false 00:06:51.618 } 00:06:51.618 }, 00:06:51.618 { 00:06:51.618 "method": "bdev_wait_for_examine" 00:06:51.618 } 00:06:51.618 ] 00:06:51.618 }, 00:06:51.618 { 00:06:51.618 "subsystem": "scsi", 00:06:51.618 "config": null 00:06:51.618 }, 00:06:51.618 { 00:06:51.618 "subsystem": "scheduler", 00:06:51.618 "config": [ 00:06:51.618 { 00:06:51.618 "method": "framework_set_scheduler", 00:06:51.618 "params": { 00:06:51.618 "name": "static" 00:06:51.618 } 00:06:51.618 } 00:06:51.618 ] 00:06:51.618 }, 00:06:51.618 { 00:06:51.618 "subsystem": "vhost_scsi", 00:06:51.618 "config": [] 00:06:51.618 }, 00:06:51.618 { 00:06:51.618 "subsystem": "vhost_blk", 00:06:51.618 "config": [] 00:06:51.618 }, 00:06:51.618 { 00:06:51.618 "subsystem": "ublk", 00:06:51.618 "config": [] 00:06:51.618 }, 00:06:51.618 { 00:06:51.618 "subsystem": "nbd", 00:06:51.618 "config": [] 00:06:51.618 }, 00:06:51.618 { 00:06:51.618 "subsystem": "nvmf", 00:06:51.618 "config": [ 00:06:51.618 { 00:06:51.618 "method": "nvmf_set_config", 00:06:51.618 "params": { 00:06:51.618 "discovery_filter": "match_any", 00:06:51.618 "admin_cmd_passthru": { 00:06:51.618 "identify_ctrlr": false 00:06:51.618 } 00:06:51.618 } 00:06:51.618 }, 00:06:51.618 { 00:06:51.618 "method": "nvmf_set_max_subsystems", 00:06:51.618 "params": { 00:06:51.618 "max_subsystems": 1024 00:06:51.618 } 00:06:51.618 }, 00:06:51.618 { 00:06:51.618 "method": "nvmf_set_crdt", 00:06:51.618 "params": { 00:06:51.618 "crdt1": 0, 00:06:51.618 "crdt2": 0, 00:06:51.618 "crdt3": 0 00:06:51.618 } 00:06:51.618 }, 00:06:51.618 { 00:06:51.618 "method": "nvmf_create_transport", 00:06:51.618 "params": { 00:06:51.618 "trtype": "TCP", 00:06:51.618 "max_queue_depth": 128, 00:06:51.618 "max_io_qpairs_per_ctrlr": 127, 00:06:51.618 
"in_capsule_data_size": 4096, 00:06:51.618 "max_io_size": 131072, 00:06:51.618 "io_unit_size": 131072, 00:06:51.619 "max_aq_depth": 128, 00:06:51.619 "num_shared_buffers": 511, 00:06:51.619 "buf_cache_size": 4294967295, 00:06:51.619 "dif_insert_or_strip": false, 00:06:51.619 "zcopy": false, 00:06:51.619 "c2h_success": true, 00:06:51.619 "sock_priority": 0, 00:06:51.619 "abort_timeout_sec": 1, 00:06:51.619 "ack_timeout": 0, 00:06:51.619 "data_wr_pool_size": 0 00:06:51.619 } 00:06:51.619 } 00:06:51.619 ] 00:06:51.619 }, 00:06:51.619 { 00:06:51.619 "subsystem": "iscsi", 00:06:51.619 "config": [ 00:06:51.619 { 00:06:51.619 "method": "iscsi_set_options", 00:06:51.619 "params": { 00:06:51.619 "node_base": "iqn.2016-06.io.spdk", 00:06:51.619 "max_sessions": 128, 00:06:51.619 "max_connections_per_session": 2, 00:06:51.619 "max_queue_depth": 64, 00:06:51.619 "default_time2wait": 2, 00:06:51.619 "default_time2retain": 20, 00:06:51.619 "first_burst_length": 8192, 00:06:51.619 "immediate_data": true, 00:06:51.619 "allow_duplicated_isid": false, 00:06:51.619 "error_recovery_level": 0, 00:06:51.619 "nop_timeout": 60, 00:06:51.619 "nop_in_interval": 30, 00:06:51.619 "disable_chap": false, 00:06:51.619 "require_chap": false, 00:06:51.619 "mutual_chap": false, 00:06:51.619 "chap_group": 0, 00:06:51.619 "max_large_datain_per_connection": 64, 00:06:51.619 "max_r2t_per_connection": 4, 00:06:51.619 "pdu_pool_size": 36864, 00:06:51.619 "immediate_data_pool_size": 16384, 00:06:51.619 "data_out_pool_size": 2048 00:06:51.619 } 00:06:51.619 } 00:06:51.619 ] 00:06:51.619 } 00:06:51.619 ] 00:06:51.619 } 00:06:51.619 10:48:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:51.619 10:48:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3468833 00:06:51.619 10:48:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 3468833 ']' 00:06:51.619 10:48:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # 
kill -0 3468833 00:06:51.619 10:48:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:51.619 10:48:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:51.619 10:48:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3468833 00:06:51.877 10:48:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:51.877 10:48:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:51.877 10:48:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3468833' 00:06:51.877 killing process with pid 3468833 00:06:51.877 10:48:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 3468833 00:06:51.877 10:48:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 3468833 00:06:55.160 10:49:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3469775 00:06:55.160 10:49:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:55.160 10:49:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:07:00.422 10:49:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3469775 00:07:00.422 10:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 3469775 ']' 00:07:00.422 10:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 3469775 00:07:00.422 10:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:07:00.422 10:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:00.422 10:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
3469775 00:07:00.422 10:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:00.422 10:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:00.422 10:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3469775' 00:07:00.422 killing process with pid 3469775 00:07:00.422 10:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 3469775 00:07:00.423 10:49:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 3469775 00:07:03.709 10:49:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:07:03.709 10:49:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/log.txt 00:07:03.709 00:07:03.709 real 0m13.816s 00:07:03.709 user 0m13.135s 00:07:03.709 sys 0m1.189s 00:07:03.709 10:49:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.709 10:49:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:03.709 ************************************ 00:07:03.709 END TEST skip_rpc_with_json 00:07:03.709 ************************************ 00:07:03.709 10:49:10 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:03.709 10:49:10 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.709 10:49:10 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.709 10:49:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.709 ************************************ 00:07:03.709 START TEST skip_rpc_with_delay 00:07:03.709 ************************************ 00:07:03.709 10:49:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:07:03.709 10:49:10 
skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:03.709 10:49:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:07:03.709 10:49:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:03.709 10:49:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:07:03.709 10:49:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.709 10:49:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:07:03.709 10:49:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.709 10:49:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:07:03.709 10:49:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.709 10:49:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:07:03.709 10:49:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:03.709 10:49:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:03.709 [2024-07-25 10:49:10.688904] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:07:03.709 [2024-07-25 10:49:10.689019] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:07:03.709 10:49:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:07:03.709 10:49:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:03.709 10:49:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:03.709 10:49:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:03.709 00:07:03.709 real 0m0.197s 00:07:03.709 user 0m0.102s 00:07:03.709 sys 0m0.093s 00:07:03.709 10:49:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.709 10:49:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:03.709 ************************************ 00:07:03.709 END TEST skip_rpc_with_delay 00:07:03.709 ************************************ 00:07:03.709 10:49:10 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:03.709 10:49:10 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:03.709 10:49:10 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:03.709 10:49:10 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.709 10:49:10 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.709 10:49:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.967 ************************************ 00:07:03.967 START TEST exit_on_failed_rpc_init 00:07:03.967 ************************************ 00:07:03.967 10:49:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:07:03.967 10:49:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3471402 00:07:03.967 10:49:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3471402 00:07:03.967 10:49:10 
skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:03.967 10:49:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 3471402 ']' 00:07:03.967 10:49:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.967 10:49:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:03.967 10:49:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.967 10:49:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:03.967 10:49:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:03.967 [2024-07-25 10:49:10.947869] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:03.967 [2024-07-25 10:49:10.947958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3471402 ] 00:07:03.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.967 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:03.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.967 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:03.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.967 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:03.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.967 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:03.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.967 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:03.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.967 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:03.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.967 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:03.967 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.968 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:03.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.968 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:03.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.968 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:03.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.968 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:03.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.968 EAL: Requested device 0000:3d:02.3 cannot be used 
00:07:03.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.968 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:03.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.968 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:03.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.968 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:03.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.968 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:03.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.968 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:03.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.968 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:03.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.968 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:03.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.968 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:03.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.968 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:03.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.968 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:03.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.968 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:03.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.968 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:03.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.968 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:03.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.968 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:03.968 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.968 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:03.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.968 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:03.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.968 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:03.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.968 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:03.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.968 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:03.968 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:03.968 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:04.226 [2024-07-25 10:49:11.147656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.485 [2024-07-25 10:49:11.431310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.860 10:49:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.860 10:49:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:07:05.860 10:49:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:05.860 10:49:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:05.860 10:49:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:07:05.860 10:49:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:05.860 10:49:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:07:05.860 10:49:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.860 10:49:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:07:05.860 10:49:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.860 10:49:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:07:05.860 10:49:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.860 10:49:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:07:05.860 10:49:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:05.860 10:49:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:05.860 [2024-07-25 10:49:12.764863] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:05.860 [2024-07-25 10:49:12.764950] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3471681 ] 00:07:05.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.860 EAL: Requested device 0000:3d:01.0 cannot be used 00:07:05.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.860 EAL: Requested device 0000:3d:01.1 cannot be used 00:07:05.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.860 EAL: Requested device 0000:3d:01.2 cannot be used 00:07:05.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.860 EAL: Requested device 0000:3d:01.3 cannot be used 00:07:05.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.860 EAL: Requested device 0000:3d:01.4 cannot be used 00:07:05.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.860 EAL: Requested device 0000:3d:01.5 cannot be used 00:07:05.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.860 EAL: Requested device 0000:3d:01.6 cannot be used 00:07:05.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.860 EAL: Requested device 0000:3d:01.7 cannot be used 00:07:05.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.860 EAL: Requested device 0000:3d:02.0 cannot be used 00:07:05.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.860 EAL: Requested device 0000:3d:02.1 cannot be used 00:07:05.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.860 EAL: Requested device 0000:3d:02.2 cannot be used 00:07:05.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.860 EAL: Requested device 0000:3d:02.3 cannot be used 
00:07:05.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.860 EAL: Requested device 0000:3d:02.4 cannot be used 00:07:05.860 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.861 EAL: Requested device 0000:3d:02.5 cannot be used 00:07:05.861 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.861 EAL: Requested device 0000:3d:02.6 cannot be used 00:07:05.861 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.861 EAL: Requested device 0000:3d:02.7 cannot be used 00:07:05.861 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.861 EAL: Requested device 0000:3f:01.0 cannot be used 00:07:05.861 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.861 EAL: Requested device 0000:3f:01.1 cannot be used 00:07:05.861 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.861 EAL: Requested device 0000:3f:01.2 cannot be used 00:07:05.861 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.861 EAL: Requested device 0000:3f:01.3 cannot be used 00:07:05.861 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.861 EAL: Requested device 0000:3f:01.4 cannot be used 00:07:05.861 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.861 EAL: Requested device 0000:3f:01.5 cannot be used 00:07:05.861 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.861 EAL: Requested device 0000:3f:01.6 cannot be used 00:07:05.861 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.861 EAL: Requested device 0000:3f:01.7 cannot be used 00:07:05.861 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.861 EAL: Requested device 0000:3f:02.0 cannot be used 00:07:05.861 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.861 EAL: Requested device 0000:3f:02.1 cannot be used 00:07:05.861 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.861 EAL: Requested device 0000:3f:02.2 cannot be used 00:07:05.861 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.861 EAL: Requested device 0000:3f:02.3 cannot be used 00:07:05.861 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.861 EAL: Requested device 0000:3f:02.4 cannot be used 00:07:05.861 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.861 EAL: Requested device 0000:3f:02.5 cannot be used 00:07:05.861 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.861 EAL: Requested device 0000:3f:02.6 cannot be used 00:07:05.861 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:07:05.861 EAL: Requested device 0000:3f:02.7 cannot be used 00:07:05.861 [2024-07-25 10:49:12.947931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.118 [2024-07-25 10:49:13.217897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.118 [2024-07-25 10:49:13.218013] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:07:06.118 [2024-07-25 10:49:13.218035] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:06.118 [2024-07-25 10:49:13.218053] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:06.693 10:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:07:06.693 10:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:06.693 10:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:07:06.693 10:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:07:06.693 10:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:07:06.693 10:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:06.693 10:49:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:06.693 10:49:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3471402 00:07:06.693 10:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 3471402 ']' 00:07:06.693 10:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 3471402 00:07:06.693 10:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:07:06.693 10:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:06.693 10:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3471402 00:07:06.951 10:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:06.951 10:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:06.951 10:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3471402' 
00:07:06.951 killing process with pid 3471402 00:07:06.951 10:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 3471402 00:07:06.951 10:49:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 3471402 00:07:10.236 00:07:10.236 real 0m6.295s 00:07:10.236 user 0m6.974s 00:07:10.236 sys 0m0.813s 00:07:10.236 10:49:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.236 10:49:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:10.236 ************************************ 00:07:10.236 END TEST exit_on_failed_rpc_init 00:07:10.236 ************************************ 00:07:10.236 10:49:17 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/config.json 00:07:10.236 00:07:10.236 real 0m29.093s 00:07:10.236 user 0m28.163s 00:07:10.236 sys 0m2.970s 00:07:10.236 10:49:17 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.236 10:49:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.236 ************************************ 00:07:10.236 END TEST skip_rpc 00:07:10.236 ************************************ 00:07:10.236 10:49:17 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:10.236 10:49:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.236 10:49:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.236 10:49:17 -- common/autotest_common.sh@10 -- # set +x 00:07:10.236 ************************************ 00:07:10.236 START TEST rpc_client 00:07:10.236 ************************************ 00:07:10.236 10:49:17 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:10.494 * Looking for test storage... 
00:07:10.494 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client 00:07:10.494 10:49:17 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:07:10.494 OK 00:07:10.494 10:49:17 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:10.494 00:07:10.494 real 0m0.187s 00:07:10.494 user 0m0.077s 00:07:10.494 sys 0m0.119s 00:07:10.494 10:49:17 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.494 10:49:17 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:10.494 ************************************ 00:07:10.494 END TEST rpc_client 00:07:10.494 ************************************ 00:07:10.494 10:49:17 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config.sh 00:07:10.494 10:49:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.494 10:49:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.494 10:49:17 -- common/autotest_common.sh@10 -- # set +x 00:07:10.494 ************************************ 00:07:10.494 START TEST json_config 00:07:10.494 ************************************ 00:07:10.494 10:49:17 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config.sh 00:07:10.752 10:49:17 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:07:10.752 10:49:17 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:10.752 10:49:17 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:10.752 10:49:17 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:10.752 10:49:17 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:10.752 10:49:17 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:10.752 10:49:17 json_config -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:07:10.752 10:49:17 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:10.752 10:49:17 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:10.752 10:49:17 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:10.752 10:49:17 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:10.752 10:49:17 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:10.752 10:49:17 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:07:10.752 10:49:17 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:07:10.753 10:49:17 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:10.753 10:49:17 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:10.753 10:49:17 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:10.753 10:49:17 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:10.753 10:49:17 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:07:10.753 10:49:17 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:10.753 10:49:17 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:10.753 10:49:17 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:10.753 10:49:17 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.753 10:49:17 
json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.753 10:49:17 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.753 10:49:17 json_config -- paths/export.sh@5 -- # export PATH 00:07:10.753 10:49:17 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.753 10:49:17 json_config -- nvmf/common.sh@47 -- # : 0 00:07:10.753 10:49:17 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:10.753 10:49:17 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:10.753 10:49:17 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:10.753 10:49:17 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:10.753 10:49:17 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:10.753 10:49:17 json_config -- nvmf/common.sh@33 -- # 
'[' -n '' ']' 00:07:10.753 10:49:17 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:10.753 10:49:17 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:10.753 10:49:17 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/common.sh 00:07:10.753 10:49:17 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:10.753 10:49:17 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:10.753 10:49:17 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:10.753 10:49:17 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:10.753 10:49:17 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:07:10.753 10:49:17 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:07:10.753 10:49:17 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:10.753 10:49:17 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:07:10.753 10:49:17 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:10.753 10:49:17 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:07:10.753 10:49:17 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_initiator_config.json') 00:07:10.753 10:49:17 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:07:10.753 10:49:17 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:07:10.753 10:49:17 json_config -- json_config/json_config.sh@359 -- # 
trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:10.753 10:49:17 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:07:10.753 INFO: JSON configuration test init 00:07:10.753 10:49:17 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:07:10.753 10:49:17 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:07:10.753 10:49:17 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:10.753 10:49:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:10.753 10:49:17 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:07:10.753 10:49:17 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:10.753 10:49:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:10.753 10:49:17 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:07:10.753 10:49:17 json_config -- json_config/common.sh@9 -- # local app=target 00:07:10.753 10:49:17 json_config -- json_config/common.sh@10 -- # shift 00:07:10.753 10:49:17 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:10.753 10:49:17 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:10.753 10:49:17 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:10.753 10:49:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:10.753 10:49:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:10.753 10:49:17 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3472583 00:07:10.753 10:49:17 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:10.753 Waiting for target to run... 
00:07:10.753 10:49:17 json_config -- json_config/common.sh@25 -- # waitforlisten 3472583 /var/tmp/spdk_tgt.sock
00:07:10.753 10:49:17 json_config -- common/autotest_common.sh@831 -- # '[' -z 3472583 ']'
00:07:10.753 10:49:17 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc
00:07:10.753 10:49:17 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:07:10.753 10:49:17 json_config -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:10.753 10:49:17 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:07:10.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:07:10.753 10:49:17 json_config -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:10.753 10:49:17 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:10.753 [2024-07-25 10:49:17.786052] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:07:10.753 [2024-07-25 10:49:17.786179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3472583 ]
00:07:11.320 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:11.320 EAL: Requested device 0000:3d:01.0 cannot be used
00:07:11.320 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:11.320 EAL: Requested device 0000:3d:01.1 cannot be used
00:07:11.320 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:11.320 EAL: Requested device 0000:3d:01.2 cannot be used
00:07:11.320 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:11.320 EAL: Requested device 0000:3d:01.3 cannot be used
00:07:11.320 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:11.320 EAL: Requested device 0000:3d:01.4 cannot be used
00:07:11.320 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:11.320 EAL: Requested device 0000:3d:01.5 cannot be used
00:07:11.320 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:11.320 EAL: Requested device 0000:3d:01.6 cannot be used
00:07:11.320 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:11.320 EAL: Requested device 0000:3d:01.7 cannot be used
00:07:11.320 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:11.320 EAL: Requested device 0000:3d:02.0 cannot be used
00:07:11.320 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:11.320 EAL: Requested device 0000:3d:02.1 cannot be used
00:07:11.320 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:11.320 EAL: Requested device 0000:3d:02.2 cannot be used
00:07:11.320 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:11.320 EAL: Requested device 0000:3d:02.3 cannot be used
00:07:11.320 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:11.320 EAL: Requested device 0000:3d:02.4 cannot be used
00:07:11.320 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:11.320 EAL: Requested device 0000:3d:02.5 cannot be used
00:07:11.320 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:11.320 EAL: Requested device 0000:3d:02.6 cannot be used
00:07:11.320 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:11.320 EAL: Requested device 0000:3d:02.7 cannot be used
00:07:11.320 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:11.320 EAL: Requested device 0000:3f:01.0 cannot be used
00:07:11.320 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:11.320 EAL: Requested device 0000:3f:01.1 cannot be used
00:07:11.320 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:11.320 EAL: Requested device 0000:3f:01.2 cannot be used
00:07:11.320 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:11.320 EAL: Requested device 0000:3f:01.3 cannot be used
00:07:11.320 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:11.320 EAL: Requested device 0000:3f:01.4 cannot be used
00:07:11.320 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:11.320 EAL: Requested device 0000:3f:01.5 cannot be used
00:07:11.320 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:11.320 EAL: Requested device 0000:3f:01.6 cannot be used
00:07:11.320 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:11.320 EAL: Requested device 0000:3f:01.7 cannot be used
00:07:11.320 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:11.320 EAL: Requested device 0000:3f:02.0 cannot be used
00:07:11.320 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:11.320 EAL: Requested device 0000:3f:02.1 cannot be used
00:07:11.320 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:11.320 EAL: Requested device 0000:3f:02.2 cannot be used
00:07:11.320 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:11.320 EAL: Requested device 0000:3f:02.3 cannot be used
00:07:11.320 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:11.320 EAL: Requested device 0000:3f:02.4 cannot be used
00:07:11.320 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:11.320 EAL: Requested device 0000:3f:02.5 cannot be used
00:07:11.320 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:11.320 EAL: Requested device 0000:3f:02.6 cannot be used
00:07:11.320 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:11.320 EAL: Requested device 0000:3f:02.7 cannot be used
00:07:11.320 [2024-07-25 10:49:18.391240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:11.578 [2024-07-25 10:49:18.664513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:12.573 10:49:19 json_config -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:12.573 10:49:19 json_config -- common/autotest_common.sh@864 -- # return 0
00:07:12.573 10:49:19 json_config -- json_config/common.sh@26 -- # echo ''
00:07:12.573 
00:07:12.573 10:49:19 json_config -- json_config/json_config.sh@273 -- # create_accel_config
00:07:12.573 10:49:19 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config
00:07:12.573 10:49:19 json_config -- common/autotest_common.sh@724 -- # xtrace_disable
00:07:12.573 10:49:19 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:12.573 10:49:19 json_config -- json_config/json_config.sh@99 -- # [[ 1 -eq 1 ]]
00:07:12.573 10:49:19 json_config -- json_config/json_config.sh@100 -- # tgt_rpc dpdk_cryptodev_scan_accel_module
00:07:12.573 10:49:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s
/var/tmp/spdk_tgt.sock dpdk_cryptodev_scan_accel_module 00:07:12.573 10:49:19 json_config -- json_config/json_config.sh@101 -- # tgt_rpc accel_assign_opc -o encrypt -m dpdk_cryptodev 00:07:12.573 10:49:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock accel_assign_opc -o encrypt -m dpdk_cryptodev 00:07:12.832 [2024-07-25 10:49:19.788241] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:07:12.832 10:49:19 json_config -- json_config/json_config.sh@102 -- # tgt_rpc accel_assign_opc -o decrypt -m dpdk_cryptodev 00:07:12.832 10:49:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock accel_assign_opc -o decrypt -m dpdk_cryptodev 00:07:13.090 [2024-07-25 10:49:20.020881] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:07:13.090 10:49:20 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:07:13.090 10:49:20 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:13.090 10:49:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:13.090 10:49:20 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:13.090 10:49:20 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:07:13.090 10:49:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:13.658 [2024-07-25 10:49:20.581072] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:07:20.221 10:49:26 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:07:20.221 10:49:26 json_config -- 
json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:07:20.221 10:49:26 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:20.221 10:49:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:20.221 10:49:26 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:07:20.221 10:49:26 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:20.221 10:49:26 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:07:20.221 10:49:26 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:07:20.221 10:49:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:20.221 10:49:26 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:07:20.221 10:49:27 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:07:20.221 10:49:27 json_config -- json_config/json_config.sh@48 -- # local get_types 00:07:20.221 10:49:27 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:07:20.221 10:49:27 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:07:20.221 10:49:27 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:07:20.221 10:49:27 json_config -- json_config/json_config.sh@51 -- # sort 00:07:20.221 10:49:27 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:07:20.221 10:49:27 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:07:20.222 10:49:27 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:07:20.222 10:49:27 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:07:20.222 10:49:27 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:20.222 10:49:27 json_config -- common/autotest_common.sh@10 
-- # set +x 00:07:20.222 10:49:27 json_config -- json_config/json_config.sh@59 -- # return 0 00:07:20.222 10:49:27 json_config -- json_config/json_config.sh@282 -- # [[ 1 -eq 1 ]] 00:07:20.222 10:49:27 json_config -- json_config/json_config.sh@283 -- # create_bdev_subsystem_config 00:07:20.222 10:49:27 json_config -- json_config/json_config.sh@109 -- # timing_enter create_bdev_subsystem_config 00:07:20.222 10:49:27 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:20.222 10:49:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:20.222 10:49:27 json_config -- json_config/json_config.sh@111 -- # expected_notifications=() 00:07:20.222 10:49:27 json_config -- json_config/json_config.sh@111 -- # local expected_notifications 00:07:20.222 10:49:27 json_config -- json_config/json_config.sh@115 -- # expected_notifications+=($(get_notifications)) 00:07:20.222 10:49:27 json_config -- json_config/json_config.sh@115 -- # get_notifications 00:07:20.222 10:49:27 json_config -- json_config/json_config.sh@63 -- # local ev_type ev_ctx event_id 00:07:20.222 10:49:27 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:20.222 10:49:27 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:20.222 10:49:27 json_config -- json_config/json_config.sh@62 -- # tgt_rpc notify_get_notifications -i 0 00:07:20.222 10:49:27 json_config -- json_config/json_config.sh@62 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:07:20.222 10:49:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:07:20.505 10:49:27 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1 00:07:20.505 10:49:27 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:20.505 10:49:27 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:20.505 10:49:27 json_config -- 
json_config/json_config.sh@117 -- # [[ 1 -eq 1 ]] 00:07:20.505 10:49:27 json_config -- json_config/json_config.sh@118 -- # local lvol_store_base_bdev=Nvme0n1 00:07:20.505 10:49:27 json_config -- json_config/json_config.sh@120 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:07:20.505 10:49:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:07:20.763 Nvme0n1p0 Nvme0n1p1 00:07:20.763 10:49:27 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_split_create Malloc0 3 00:07:20.763 10:49:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:07:21.021 [2024-07-25 10:49:27.884207] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:21.021 [2024-07-25 10:49:27.884275] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:21.021 00:07:21.021 10:49:27 json_config -- json_config/json_config.sh@122 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:07:21.021 10:49:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:07:21.021 Malloc3 00:07:21.280 10:49:28 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:07:21.280 10:49:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:07:21.280 [2024-07-25 10:49:28.348159] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:21.280 [2024-07-25 10:49:28.348228] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:21.280 [2024-07-25 
10:49:28.348265] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043e80 00:07:21.280 [2024-07-25 10:49:28.348281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:21.280 [2024-07-25 10:49:28.351092] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:21.280 [2024-07-25 10:49:28.351131] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:07:21.280 PTBdevFromMalloc3 00:07:21.280 10:49:28 json_config -- json_config/json_config.sh@125 -- # tgt_rpc bdev_null_create Null0 32 512 00:07:21.280 10:49:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:07:21.538 Null0 00:07:21.538 10:49:28 json_config -- json_config/json_config.sh@127 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:07:21.538 10:49:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:07:21.796 Malloc0 00:07:21.796 10:49:28 json_config -- json_config/json_config.sh@128 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:07:21.796 10:49:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:07:22.054 Malloc1 00:07:22.054 10:49:29 json_config -- json_config/json_config.sh@141 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:07:22.054 10:49:29 json_config -- json_config/json_config.sh@144 -- # dd if=/dev/zero of=/sample_aio bs=1024 
count=102400 00:07:22.312 102400+0 records in 00:07:22.312 102400+0 records out 00:07:22.312 104857600 bytes (105 MB, 100 MiB) copied, 0.27959 s, 375 MB/s 00:07:22.313 10:49:29 json_config -- json_config/json_config.sh@145 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:07:22.313 10:49:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:07:22.571 aio_disk 00:07:22.571 10:49:29 json_config -- json_config/json_config.sh@146 -- # expected_notifications+=(bdev_register:aio_disk) 00:07:22.571 10:49:29 json_config -- json_config/json_config.sh@151 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:07:22.571 10:49:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:07:26.760 7593ea99-42de-465e-9eb6-81fa276c2f36 00:07:26.760 10:49:33 json_config -- json_config/json_config.sh@158 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:07:26.760 10:49:33 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:07:26.760 10:49:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:07:27.019 10:49:34 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:07:27.019 10:49:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l 
lvs_test -t lvol1 32 00:07:27.279 10:49:34 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:07:27.279 10:49:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:07:27.538 10:49:34 json_config -- json_config/json_config.sh@158 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:07:27.538 10:49:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:07:27.797 10:49:34 json_config -- json_config/json_config.sh@161 -- # [[ 1 -eq 1 ]] 00:07:27.797 10:49:34 json_config -- json_config/json_config.sh@162 -- # tgt_rpc bdev_malloc_create 8 1024 --name MallocForCryptoBdev 00:07:27.797 10:49:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 1024 --name MallocForCryptoBdev 00:07:28.057 MallocForCryptoBdev 00:07:28.057 10:49:34 json_config -- json_config/json_config.sh@163 -- # lspci -d:37c8 00:07:28.057 10:49:34 json_config -- json_config/json_config.sh@163 -- # wc -l 00:07:28.057 10:49:34 json_config -- json_config/json_config.sh@163 -- # [[ 5 -eq 0 ]] 00:07:28.057 10:49:34 json_config -- json_config/json_config.sh@166 -- # local crypto_driver=crypto_qat 00:07:28.057 10:49:34 json_config -- json_config/json_config.sh@169 -- # tgt_rpc bdev_crypto_create MallocForCryptoBdev CryptoMallocBdev -p crypto_qat -k 01234567891234560123456789123456 00:07:28.057 10:49:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_crypto_create MallocForCryptoBdev CryptoMallocBdev -p crypto_qat -k 01234567891234560123456789123456 00:07:28.316 [2024-07-25 10:49:35.210260] vbdev_crypto_rpc.c: 
136:rpc_bdev_crypto_create: *WARNING*: "crypto_pmd" parameters is obsolete and ignored 00:07:28.316 CryptoMallocBdev 00:07:28.316 10:49:35 json_config -- json_config/json_config.sh@173 -- # expected_notifications+=(bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev) 00:07:28.316 10:49:35 json_config -- json_config/json_config.sh@176 -- # [[ 0 -eq 1 ]] 00:07:28.316 10:49:35 json_config -- json_config/json_config.sh@182 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:5849b6e5-d346-4349-9097-57f36c181adb bdev_register:cf2b05c8-0dbf-43b4-9399-f7cdb12cb474 bdev_register:9065932f-1573-4032-a8f4-12d8e0293b7b bdev_register:f4134799-6606-4b47-9e8e-35f305da96fb bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev 00:07:28.316 10:49:35 json_config -- json_config/json_config.sh@71 -- # local events_to_check 00:07:28.316 10:49:35 json_config -- json_config/json_config.sh@72 -- # local recorded_events 00:07:28.316 10:49:35 json_config -- json_config/json_config.sh@75 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:07:28.316 10:49:35 json_config -- json_config/json_config.sh@75 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:5849b6e5-d346-4349-9097-57f36c181adb bdev_register:cf2b05c8-0dbf-43b4-9399-f7cdb12cb474 bdev_register:9065932f-1573-4032-a8f4-12d8e0293b7b bdev_register:f4134799-6606-4b47-9e8e-35f305da96fb bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev 00:07:28.316 10:49:35 json_config -- 
json_config/json_config.sh@75 -- # sort 00:07:28.316 10:49:35 json_config -- json_config/json_config.sh@76 -- # recorded_events=($(get_notifications | sort)) 00:07:28.316 10:49:35 json_config -- json_config/json_config.sh@76 -- # get_notifications 00:07:28.316 10:49:35 json_config -- json_config/json_config.sh@76 -- # sort 00:07:28.316 10:49:35 json_config -- json_config/json_config.sh@63 -- # local ev_type ev_ctx event_id 00:07:28.316 10:49:35 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:28.316 10:49:35 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:28.316 10:49:35 json_config -- json_config/json_config.sh@62 -- # tgt_rpc notify_get_notifications -i 0 00:07:28.316 10:49:35 json_config -- json_config/json_config.sh@62 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:07:28.316 10:49:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:07:28.576 10:49:35 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1 00:07:28.576 10:49:35 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:28.576 10:49:35 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:28.576 10:49:35 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1p1 00:07:28.576 10:49:35 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:28.576 10:49:35 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:28.576 10:49:35 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Nvme0n1p0 00:07:28.576 10:49:35 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:28.576 10:49:35 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:28.576 10:49:35 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc3 00:07:28.576 10:49:35 json_config -- 
json_config/json_config.sh@65 -- # IFS=: 00:07:28.576 10:49:35 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:28.576 10:49:35 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:PTBdevFromMalloc3 00:07:28.576 10:49:35 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:28.576 10:49:35 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:28.576 10:49:35 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Null0 00:07:28.576 10:49:35 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:28.576 10:49:35 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:28.576 10:49:35 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0 00:07:28.576 10:49:35 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:28.576 10:49:35 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:28.576 10:49:35 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p2 00:07:28.576 10:49:35 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:28.576 10:49:35 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:28.576 10:49:35 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p1 00:07:28.576 10:49:35 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:28.576 10:49:35 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:28.576 10:49:35 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc0p0 00:07:28.576 10:49:35 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:28.576 10:49:35 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:28.577 10:49:35 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:Malloc1 00:07:28.577 10:49:35 json_config -- 
json_config/json_config.sh@65 -- # IFS=: 00:07:28.577 10:49:35 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:28.577 10:49:35 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:aio_disk 00:07:28.577 10:49:35 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:28.577 10:49:35 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:28.577 10:49:35 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:5849b6e5-d346-4349-9097-57f36c181adb 00:07:28.577 10:49:35 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:28.577 10:49:35 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:28.577 10:49:35 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:cf2b05c8-0dbf-43b4-9399-f7cdb12cb474 00:07:28.577 10:49:35 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:28.577 10:49:35 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:28.577 10:49:35 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:9065932f-1573-4032-a8f4-12d8e0293b7b 00:07:28.577 10:49:35 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:28.577 10:49:35 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:28.577 10:49:35 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:f4134799-6606-4b47-9e8e-35f305da96fb 00:07:28.577 10:49:35 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:28.577 10:49:35 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:28.577 10:49:35 json_config -- json_config/json_config.sh@66 -- # echo bdev_register:MallocForCryptoBdev 00:07:28.577 10:49:35 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:28.577 10:49:35 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:28.577 10:49:35 json_config -- 
json_config/json_config.sh@66 -- # echo bdev_register:CryptoMallocBdev 00:07:28.577 10:49:35 json_config -- json_config/json_config.sh@65 -- # IFS=: 00:07:28.577 10:49:35 json_config -- json_config/json_config.sh@65 -- # read -r ev_type ev_ctx event_id 00:07:28.577 10:49:35 json_config -- json_config/json_config.sh@78 -- # [[ bdev_register:5849b6e5-d346-4349-9097-57f36c181adb bdev_register:9065932f-1573-4032-a8f4-12d8e0293b7b bdev_register:aio_disk bdev_register:cf2b05c8-0dbf-43b4-9399-f7cdb12cb474 bdev_register:CryptoMallocBdev bdev_register:f4134799-6606-4b47-9e8e-35f305da96fb bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:MallocForCryptoBdev bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\5\8\4\9\b\6\e\5\-\d\3\4\6\-\4\3\4\9\-\9\0\9\7\-\5\7\f\3\6\c\1\8\1\a\d\b\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\9\0\6\5\9\3\2\f\-\1\5\7\3\-\4\0\3\2\-\a\8\f\4\-\1\2\d\8\e\0\2\9\3\b\7\b\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\c\f\2\b\0\5\c\8\-\0\d\b\f\-\4\3\b\4\-\9\3\9\9\-\f\7\c\d\b\1\2\c\b\4\7\4\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\C\r\y\p\t\o\M\a\l\l\o\c\B\d\e\v\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\f\4\1\3\4\7\9\9\-\6\6\0\6\-\4\b\4\7\-\9\e\8\e\-\3\5\f\3\0\5\d\a\9\6\f\b\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\F\o\r\C\r\y\p\t\o\B\d\e\v\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3 ]] 
00:07:28.577 10:49:35 json_config -- json_config/json_config.sh@90 -- # cat
00:07:28.577 10:49:35 json_config -- json_config/json_config.sh@90 -- # printf ' %s\n' bdev_register:5849b6e5-d346-4349-9097-57f36c181adb bdev_register:9065932f-1573-4032-a8f4-12d8e0293b7b bdev_register:aio_disk bdev_register:cf2b05c8-0dbf-43b4-9399-f7cdb12cb474 bdev_register:CryptoMallocBdev bdev_register:f4134799-6606-4b47-9e8e-35f305da96fb bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:MallocForCryptoBdev bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3
00:07:28.577 Expected events matched:
00:07:28.577 bdev_register:5849b6e5-d346-4349-9097-57f36c181adb
00:07:28.577 bdev_register:9065932f-1573-4032-a8f4-12d8e0293b7b
00:07:28.577 bdev_register:aio_disk
00:07:28.577 bdev_register:cf2b05c8-0dbf-43b4-9399-f7cdb12cb474
00:07:28.577 bdev_register:CryptoMallocBdev
00:07:28.577 bdev_register:f4134799-6606-4b47-9e8e-35f305da96fb
00:07:28.577 bdev_register:Malloc0
00:07:28.577 bdev_register:Malloc0p0
00:07:28.577 bdev_register:Malloc0p1
00:07:28.577 bdev_register:Malloc0p2
00:07:28.577 bdev_register:Malloc1
00:07:28.577 bdev_register:Malloc3
00:07:28.577 bdev_register:MallocForCryptoBdev
00:07:28.577 bdev_register:Null0
00:07:28.577 bdev_register:Nvme0n1
00:07:28.577 bdev_register:Nvme0n1p0
00:07:28.577 bdev_register:Nvme0n1p1
00:07:28.577 bdev_register:PTBdevFromMalloc3
00:07:28.577 10:49:35 json_config -- json_config/json_config.sh@184 -- # timing_exit create_bdev_subsystem_config
00:07:28.577 10:49:35 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:28.577 10:49:35 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:28.577 10:49:35 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]]
00:07:28.577 10:49:35 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]]
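Editor's note: the event check logged above sorts the recorded notifications and the expected list, then compares them as single strings (the large `[[ ... != ... ]]` test). A minimal standalone sketch of that pattern, with a small illustrative subset of the log's 18 `bdev_register` events:

```shell
#!/usr/bin/env bash
# Sketch of the json_config.sh event comparison: sort both event
# lists, then compare them as single space-joined strings.
expected_events=(bdev_register:Nvme0n1 bdev_register:Malloc0 bdev_register:aio_disk)
recorded_events=(bdev_register:aio_disk bdev_register:Nvme0n1 bdev_register:Malloc0)

# Sort each list one element per line through sort(1), mirroring
# the `get_notifications | sort` step in the log.
mapfile -t expected_sorted < <(printf '%s\n' "${expected_events[@]}" | sort)
mapfile -t recorded_sorted < <(printf '%s\n' "${recorded_events[@]}" | sort)

if [[ "${recorded_sorted[*]}" == "${expected_sorted[*]}" ]]; then
    echo 'Expected events matched'
else
    echo 'event mismatch' >&2
    exit 1
fi
```

Because both lists pass through the same `sort`, ordering differences in how events were recorded do not cause a mismatch; only missing or extra events do.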
00:07:28.577 10:49:35 json_config -- json_config/json_config.sh@294 -- # [[ 0 -eq 1 ]]
00:07:28.577 10:49:35 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target
00:07:28.577 10:49:35 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:28.577 10:49:35 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:28.577 10:49:35 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]]
00:07:28.577 10:49:35 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:07:28.577 10:49:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:07:28.836 MallocBdevForConfigChangeCheck
00:07:28.836 10:49:35 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init
00:07:28.836 10:49:35 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:28.836 10:49:35 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:28.836 10:49:35 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config
00:07:28.836 10:49:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:07:29.095 10:49:36 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...'
00:07:29.095 INFO: shutting down applications...
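Editor's note: every `tgt_rpc` entry above expands to the same `rpc.py -s /var/tmp/spdk_tgt.sock ...` invocation, i.e. one wrapper pins all RPCs to the target's UNIX socket. A hedged sketch of that wrapper (the echo stub stands in for `rpc.py` so the sketch runs outside the CI workspace):

```shell
#!/usr/bin/env bash
# Sketch of the tgt_rpc pattern from json_config/common.sh: route
# every RPC through one function bound to the target's socket.
rootdir=${rootdir:-/var/jenkins/workspace/crypto-phy-autotest/spdk}
rpc_py=$rootdir/scripts/rpc.py

tgt_rpc() {
    if [ -x "$rpc_py" ]; then
        # Real environment: forward all arguments to rpc.py.
        "$rpc_py" -s /var/tmp/spdk_tgt.sock "$@"
    else
        # Illustrative stub: show what would have been invoked.
        echo "rpc.py -s /var/tmp/spdk_tgt.sock $*"
    fi
}

tgt_rpc save_config
```

Centralizing the socket path means tests never accidentally address the wrong SPDK instance when an initiator and a target run side by side.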
00:07:29.095 10:49:36 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:07:29.095 10:49:36 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:07:29.095 10:49:36 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:07:29.095 10:49:36 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:29.354 [2024-07-25 10:49:36.409765] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:07:32.668 Calling clear_iscsi_subsystem 00:07:32.668 Calling clear_nvmf_subsystem 00:07:32.668 Calling clear_nbd_subsystem 00:07:32.668 Calling clear_ublk_subsystem 00:07:32.668 Calling clear_vhost_blk_subsystem 00:07:32.668 Calling clear_vhost_scsi_subsystem 00:07:32.668 Calling clear_bdev_subsystem 00:07:32.668 10:49:39 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py 00:07:32.668 10:49:39 json_config -- json_config/json_config.sh@347 -- # count=100 00:07:32.668 10:49:39 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:07:32.668 10:49:39 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:07:32.668 10:49:39 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:32.668 10:49:39 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:32.668 10:49:39 json_config -- json_config/json_config.sh@349 -- # break 00:07:32.668 10:49:39 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:07:32.668 10:49:39 
json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:07:32.668 10:49:39 json_config -- json_config/common.sh@31 -- # local app=target 00:07:32.668 10:49:39 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:32.668 10:49:39 json_config -- json_config/common.sh@35 -- # [[ -n 3472583 ]] 00:07:32.668 10:49:39 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3472583 00:07:32.668 10:49:39 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:32.668 10:49:39 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:32.668 10:49:39 json_config -- json_config/common.sh@41 -- # kill -0 3472583 00:07:32.668 10:49:39 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:33.236 10:49:40 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:33.236 10:49:40 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:33.236 10:49:40 json_config -- json_config/common.sh@41 -- # kill -0 3472583 00:07:33.236 10:49:40 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:33.495 10:49:40 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:33.495 10:49:40 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:33.495 10:49:40 json_config -- json_config/common.sh@41 -- # kill -0 3472583 00:07:33.495 10:49:40 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:34.062 10:49:41 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:34.062 10:49:41 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:34.062 10:49:41 json_config -- json_config/common.sh@41 -- # kill -0 3472583 00:07:34.062 10:49:41 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:34.630 10:49:41 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:34.630 10:49:41 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:34.630 10:49:41 json_config -- json_config/common.sh@41 -- # kill -0 3472583 00:07:34.630 10:49:41 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:07:35.197 10:49:42 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:35.197 10:49:42 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:35.197 10:49:42 json_config -- json_config/common.sh@41 -- # kill -0 3472583 00:07:35.197 10:49:42 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:35.197 10:49:42 json_config -- json_config/common.sh@43 -- # break 00:07:35.197 10:49:42 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:35.197 10:49:42 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:35.197 SPDK target shutdown done 00:07:35.197 10:49:42 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:07:35.197 INFO: relaunching applications... 00:07:35.197 10:49:42 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:07:35.197 10:49:42 json_config -- json_config/common.sh@9 -- # local app=target 00:07:35.197 10:49:42 json_config -- json_config/common.sh@10 -- # shift 00:07:35.197 10:49:42 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:35.197 10:49:42 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:35.197 10:49:42 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:35.197 10:49:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:35.197 10:49:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:35.197 10:49:42 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3476964 00:07:35.197 10:49:42 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:35.197 Waiting for target to run... 
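Editor's note: the shutdown sequence above sends SIGINT and then polls with `kill -0` up to 30 times, sleeping 0.5 s between checks, before declaring "SPDK target shutdown done". A self-contained sketch of that wait pattern (the function name and stand-in process are illustrative; the real script signals `spdk_tgt`):

```shell
#!/usr/bin/env bash
# Sketch of the json_config/common.sh shutdown wait: send a signal,
# then poll liveness until the process exits or retries run out.
wait_for_exit() {
    local pid=$1 sig=${2:-SIGINT}
    kill -s "$sig" "$pid" 2>/dev/null
    for (( i = 0; i < 30; i++ )); do
        # `kill -0` delivers no signal; it only tests whether the
        # PID is still alive.
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5
    done
    echo 'target still alive after 15 s' >&2
    return 1
}

# Demo uses SIGTERM: background jobs in non-interactive shells
# ignore SIGINT, so SIGINT would not stop this stand-in process.
sleep 30 &
wait_for_exit $! SIGTERM
```

Polling with `kill -0` rather than `wait` lets the caller bound the wait (30 × 0.5 s here) and escalate if the target hangs.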
00:07:35.197 10:49:42 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:07:35.197 10:49:42 json_config -- json_config/common.sh@25 -- # waitforlisten 3476964 /var/tmp/spdk_tgt.sock 00:07:35.197 10:49:42 json_config -- common/autotest_common.sh@831 -- # '[' -z 3476964 ']' 00:07:35.197 10:49:42 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:35.197 10:49:42 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:35.197 10:49:42 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:35.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:35.197 10:49:42 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:35.197 10:49:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:35.197 [2024-07-25 10:49:42.229765] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:35.197 [2024-07-25 10:49:42.229881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3476964 ]
00:07:35.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:35.765 EAL: Requested device 0000:3d:01.0 cannot be used
00:07:35.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:35.765 EAL: Requested device 0000:3d:01.1 cannot be used
00:07:35.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:35.765 EAL: Requested device 0000:3d:01.2 cannot be used
00:07:35.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:35.765 EAL: Requested device 0000:3d:01.3 cannot be used
00:07:35.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:35.765 EAL: Requested device 0000:3d:01.4 cannot be used
00:07:35.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:35.765 EAL: Requested device 0000:3d:01.5 cannot be used
00:07:35.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:35.765 EAL: Requested device 0000:3d:01.6 cannot be used
00:07:35.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:35.765 EAL: Requested device 0000:3d:01.7 cannot be used
00:07:35.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:35.765 EAL: Requested device 0000:3d:02.0 cannot be used
00:07:35.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:35.765 EAL: Requested device 0000:3d:02.1 cannot be used
00:07:35.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:35.765 EAL: Requested device 0000:3d:02.2 cannot be used
00:07:35.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:35.765 EAL: Requested device 0000:3d:02.3 cannot be used
00:07:35.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:35.765 EAL: Requested device 0000:3d:02.4 cannot be used
00:07:35.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:35.765 EAL: Requested device 0000:3d:02.5 cannot be used
00:07:35.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:35.765 EAL: Requested device 0000:3d:02.6 cannot be used
00:07:35.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:35.765 EAL: Requested device 0000:3d:02.7 cannot be used
00:07:35.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:35.765 EAL: Requested device 0000:3f:01.0 cannot be used
00:07:35.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:35.765 EAL: Requested device 0000:3f:01.1 cannot be used
00:07:35.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:35.765 EAL: Requested device 0000:3f:01.2 cannot be used
00:07:35.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:35.765 EAL: Requested device 0000:3f:01.3 cannot be used
00:07:35.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:35.765 EAL: Requested device 0000:3f:01.4 cannot be used
00:07:35.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:35.765 EAL: Requested device 0000:3f:01.5 cannot be used
00:07:35.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:35.765 EAL: Requested device 0000:3f:01.6 cannot be used
00:07:35.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:35.765 EAL: Requested device 0000:3f:01.7 cannot be used
00:07:35.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:35.765 EAL: Requested device 0000:3f:02.0 cannot be used
00:07:35.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:35.765 EAL: Requested device 0000:3f:02.1 cannot be used
00:07:35.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:35.765 EAL: Requested device 0000:3f:02.2 cannot be used
00:07:35.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:35.765 EAL: Requested device 0000:3f:02.3 cannot be used
00:07:35.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:35.765 EAL: Requested device 0000:3f:02.4 cannot be used
00:07:35.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:35.765 EAL: Requested device 0000:3f:02.5 cannot be used
00:07:35.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:35.765 EAL: Requested device 0000:3f:02.6 cannot be used
00:07:35.765 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:35.765 EAL: Requested device 0000:3f:02.7 cannot be used
00:07:35.765 [2024-07-25 10:49:42.832550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:36.024 [2024-07-25 10:49:43.105288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:36.284 [2024-07-25 10:49:43.159986] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb
00:07:36.284 [2024-07-25 10:49:43.168028] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:07:36.284 [2024-07-25 10:49:43.176041] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:07:36.542 [2024-07-25 10:49:43.537757] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97
00:07:40.730 [2024-07-25 10:49:46.970923] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:07:40.730 [2024-07-25 10:49:46.970992] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:07:40.730 [2024-07-25 10:49:46.971012] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending
base bdev arrival 00:07:40.730 [2024-07-25 10:49:46.978945] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:07:40.730 [2024-07-25 10:49:46.978993] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:07:40.730 [2024-07-25 10:49:46.986951] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:40.730 [2024-07-25 10:49:46.986992] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:40.730 [2024-07-25 10:49:46.994993] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "CryptoMallocBdev_AES_CBC" 00:07:40.730 [2024-07-25 10:49:46.995056] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: MallocForCryptoBdev 00:07:40.730 [2024-07-25 10:49:46.995078] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:43.262 [2024-07-25 10:49:49.980740] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:43.262 [2024-07-25 10:49:49.980812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:43.262 [2024-07-25 10:49:49.980835] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80 00:07:43.262 [2024-07-25 10:49:49.980850] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:43.262 [2024-07-25 10:49:49.981430] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:43.262 [2024-07-25 10:49:49.981459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:07:43.829 10:49:50 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:43.829 10:49:50 json_config -- common/autotest_common.sh@864 -- # return 0 00:07:43.829 10:49:50 json_config -- json_config/common.sh@26 -- # echo '' 00:07:43.829 00:07:43.829 10:49:50 json_config -- 
json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:07:43.829 10:49:50 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:43.829 INFO: Checking if target configuration is the same... 00:07:43.829 10:49:50 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:07:43.829 10:49:50 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:07:43.829 10:49:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:43.829 + '[' 2 -ne 2 ']' 00:07:43.829 +++ dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:43.829 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/../.. 00:07:43.829 + rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:07:43.829 +++ basename /dev/fd/62 00:07:43.829 ++ mktemp /tmp/62.XXX 00:07:43.829 + tmp_file_1=/tmp/62.kcs 00:07:43.829 +++ basename /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:07:43.829 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:43.829 + tmp_file_2=/tmp/spdk_tgt_config.json.Kxe 00:07:43.829 + ret=0 00:07:43.829 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:44.089 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:44.089 + diff -u /tmp/62.kcs /tmp/spdk_tgt_config.json.Kxe 00:07:44.089 + echo 'INFO: JSON config files are the same' 00:07:44.089 INFO: JSON config files are the same 00:07:44.089 + rm /tmp/62.kcs /tmp/spdk_tgt_config.json.Kxe 00:07:44.089 + exit 0 00:07:44.089 10:49:51 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:07:44.089 10:49:51 json_config -- 
json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:44.089 INFO: changing configuration and checking if this can be detected... 00:07:44.089 10:49:51 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:44.089 10:49:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:44.349 10:49:51 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:07:44.349 10:49:51 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:07:44.349 10:49:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:44.349 + '[' 2 -ne 2 ']' 00:07:44.349 +++ dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:44.349 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/../.. 
00:07:44.349 + rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk
00:07:44.349 +++ basename /dev/fd/62
00:07:44.349 ++ mktemp /tmp/62.XXX
00:07:44.349 + tmp_file_1=/tmp/62.8kv
00:07:44.349 +++ basename /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json
00:07:44.349 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:07:44.349 + tmp_file_2=/tmp/spdk_tgt_config.json.SQr
00:07:44.349 + ret=0
00:07:44.349 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:07:44.607 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:07:44.866 + diff -u /tmp/62.8kv /tmp/spdk_tgt_config.json.SQr
00:07:44.866 + ret=1
00:07:44.866 + echo '=== Start of file: /tmp/62.8kv ==='
00:07:44.866 + cat /tmp/62.8kv
00:07:44.866 + echo '=== End of file: /tmp/62.8kv ==='
00:07:44.866 + echo ''
00:07:44.866 + echo '=== Start of file: /tmp/spdk_tgt_config.json.SQr ==='
00:07:44.866 + cat /tmp/spdk_tgt_config.json.SQr
00:07:44.866 + echo '=== End of file: /tmp/spdk_tgt_config.json.SQr ==='
00:07:44.866 + echo ''
00:07:44.866 + rm /tmp/62.8kv /tmp/spdk_tgt_config.json.SQr
00:07:44.866 + exit 1
00:07:44.866 10:49:51 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.'
00:07:44.866 INFO: configuration change detected.
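Editor's note: the `json_diff.sh` runs above normalize both JSON configs (`config_filter.py -method sort`) before `diff`, so key ordering never counts as a configuration change. A minimal sketch of the same idea using `python3` for canonicalization (file contents and names are illustrative, not the SPDK filter):

```shell
#!/usr/bin/env bash
# Sketch of the json_diff.sh idea: canonicalize two JSON documents
# (sorted keys, fixed indentation) before comparing, so only real
# configuration changes survive the diff.
set -e
a=$(mktemp /tmp/cfg_a.XXX)
b=$(mktemp /tmp/cfg_b.XXX)
printf '{"b": 1, "a": 2}' > "$a"
printf '{"a": 2, "b": 1}' > "$b"

normalize() {
    python3 -c 'import json, sys; print(json.dumps(json.load(sys.stdin), sort_keys=True, indent=2))'
}

if diff -u <(normalize < "$a") <(normalize < "$b") > /dev/null; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi
rm -f "$a" "$b"
```

In the log, the first comparison (saved config vs. reloaded config) diffs clean, while deleting `MallocBdevForConfigChangeCheck` makes the second diff return 1, which the test reports as a detected change.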
00:07:44.866 10:49:51 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini
00:07:44.866 10:49:51 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini
00:07:44.866 10:49:51 json_config -- common/autotest_common.sh@724 -- # xtrace_disable
00:07:44.866 10:49:51 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:44.866 10:49:51 json_config -- json_config/json_config.sh@311 -- # local ret=0
00:07:44.866 10:49:51 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]]
00:07:44.866 10:49:51 json_config -- json_config/json_config.sh@321 -- # [[ -n 3476964 ]]
00:07:44.866 10:49:51 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config
00:07:44.866 10:49:51 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config
00:07:44.866 10:49:51 json_config -- common/autotest_common.sh@724 -- # xtrace_disable
00:07:44.866 10:49:51 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:44.866 10:49:51 json_config -- json_config/json_config.sh@190 -- # [[ 1 -eq 1 ]]
00:07:44.866 10:49:51 json_config -- json_config/json_config.sh@191 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0
00:07:44.866 10:49:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0
00:07:45.125 10:49:52 json_config -- json_config/json_config.sh@192 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0
00:07:45.125 10:49:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0
00:07:45.384 10:49:52 json_config -- json_config/json_config.sh@193 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0
00:07:45.384 10:49:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0
00:07:45.384 10:49:52 json_config -- json_config/json_config.sh@194 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test
00:07:45.384 10:49:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test
00:07:45.643 10:49:52 json_config -- json_config/json_config.sh@197 -- # uname -s
00:07:45.643 10:49:52 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]]
00:07:45.643 10:49:52 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio
00:07:45.643 10:49:52 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]]
00:07:45.643 10:49:52 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config
00:07:45.643 10:49:52 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:45.643 10:49:52 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:45.643 10:49:52 json_config -- json_config/json_config.sh@327 -- # killprocess 3476964
00:07:45.643 10:49:52 json_config -- common/autotest_common.sh@950 -- # '[' -z 3476964 ']'
00:07:45.643 10:49:52 json_config -- common/autotest_common.sh@954 -- # kill -0 3476964
00:07:45.902 10:49:52 json_config -- common/autotest_common.sh@955 -- # uname
00:07:45.902 10:49:52 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:45.902 10:49:52 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3476964
00:07:45.902 10:49:52 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:45.902 10:49:52 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:45.902 10:49:52 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3476964'
killing process with pid 3476964
00:07:45.902 10:49:52 json_config -- common/autotest_common.sh@969 -- # kill 3476964
00:07:45.902 10:49:52 json_config -- common/autotest_common.sh@974 -- # wait 3476964
00:07:51.174 10:49:57 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json
00:07:51.174 10:49:57 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini
00:07:51.174 10:49:57 json_config -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:51.174 10:49:57 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:51.174 10:49:57 json_config -- json_config/json_config.sh@332 -- # return 0
00:07:51.175 10:49:57 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success'
00:07:51.175 INFO: Success
00:07:51.175
00:07:51.175 real 0m40.122s
00:07:51.175 user 0m44.425s
00:07:51.175 sys 0m4.492s
00:07:51.175 10:49:57 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:51.175 10:49:57 json_config -- common/autotest_common.sh@10 -- # set +x
00:07:51.175 ************************************
00:07:51.175 END TEST json_config
00:07:51.175 ************************************
00:07:51.175 10:49:57 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:07:51.175 10:49:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:51.175 10:49:57 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:51.175 10:49:57 -- common/autotest_common.sh@10 -- # set +x
00:07:51.175 ************************************
00:07:51.175 START TEST json_config_extra_key
00:07:51.175 ************************************
00:07:51.175 10:49:57 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:07:51.175 10:49:57 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh
00:07:51.175 10:49:57 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:07:51.175 10:49:57 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:51.175 10:49:57 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:51.175 10:49:57 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:51.175 10:49:57 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:51.175 10:49:57 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:51.175 10:49:57 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:51.175 10:49:57 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:51.175 10:49:57 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:51.175 10:49:57 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:51.175 10:49:57 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:51.175 10:49:57 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e
00:07:51.175 10:49:57 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e
00:07:51.175 10:49:57 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:51.175 10:49:57 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:51.175 10:49:57 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:07:51.175 10:49:57 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:51.175 10:49:57 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh
00:07:51.175 10:49:57 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:51.175 10:49:57 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:51.175 10:49:57 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:51.175 10:49:57 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:51.175 10:49:57 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:51.175 10:49:57 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:51.175 10:49:57 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:07:51.175 10:49:57 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:51.175 10:49:57 json_config_extra_key -- nvmf/common.sh@47 -- # : 0
00:07:51.175 10:49:57 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:07:51.175 10:49:57 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:07:51.175 10:49:57 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:51.175 10:49:57 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:51.175 10:49:57 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:51.175 10:49:57 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:07:51.175 10:49:57 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:07:51.175 10:49:57 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0
00:07:51.175 10:49:57 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/common.sh
00:07:51.175 10:49:57 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:07:51.175 10:49:57 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:07:51.175 10:49:57 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:07:51.175 10:49:57 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:07:51.175 10:49:57 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:07:51.175 10:49:57 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:07:51.175 10:49:57 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json')
00:07:51.175 10:49:57 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:07:51.175 10:49:57 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:07:51.175 10:49:57 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
INFO: launching applications...
00:07:51.175 10:49:57 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json
00:07:51.175 10:49:57 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:07:51.175 10:49:57 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:07:51.175 10:49:57 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:07:51.175 10:49:57 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:07:51.175 10:49:57 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:07:51.175 10:49:57 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:07:51.175 10:49:57 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:07:51.175 10:49:57 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3479731
00:07:51.175 10:49:57 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
Waiting for target to run...
00:07:51.175 10:49:57 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3479731 /var/tmp/spdk_tgt.sock
00:07:51.175 10:49:57 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 3479731 ']'
00:07:51.175 10:49:57 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json
00:07:51.175 10:49:57 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:07:51.175 10:49:57 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:51.175 10:49:57 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:07:51.175 10:49:57 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:51.175 10:49:57 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:07:51.175 [2024-07-25 10:49:57.976456] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:07:51.175 [2024-07-25 10:49:57.976580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3479731 ]
00:07:51.472 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:51.472 EAL: Requested device 0000:3d:01.0 cannot be used
00:07:51.472 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:51.472 EAL: Requested device 0000:3d:01.1 cannot be used
00:07:51.472 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:51.472 EAL: Requested device 0000:3d:01.2 cannot be used
00:07:51.472 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:51.472 EAL: Requested device 0000:3d:01.3 cannot be used
00:07:51.472 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:51.472 EAL: Requested device 0000:3d:01.4 cannot be used
00:07:51.472 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:51.472 EAL: Requested device 0000:3d:01.5 cannot be used
00:07:51.472 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:51.472 EAL: Requested device 0000:3d:01.6 cannot be used
00:07:51.472 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:51.472 EAL: Requested device 0000:3d:01.7 cannot be used
00:07:51.472 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:51.472 EAL: Requested device 0000:3d:02.0 cannot be used
00:07:51.472 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:51.472 EAL: Requested device 0000:3d:02.1 cannot be used
00:07:51.472 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:51.472 EAL: Requested device 0000:3d:02.2 cannot be used
00:07:51.472 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:51.472 EAL: Requested device 0000:3d:02.3 cannot be used
00:07:51.472 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:51.472 EAL: Requested device 0000:3d:02.4 cannot be used
00:07:51.472 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:51.472 EAL: Requested device 0000:3d:02.5 cannot be used
00:07:51.472 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:51.472 EAL: Requested device 0000:3d:02.6 cannot be used
00:07:51.472 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:51.472 EAL: Requested device 0000:3d:02.7 cannot be used
00:07:51.472 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:51.472 EAL: Requested device 0000:3f:01.0 cannot be used
00:07:51.472 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:51.472 EAL: Requested device 0000:3f:01.1 cannot be used
00:07:51.472 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:51.472 EAL: Requested device 0000:3f:01.2 cannot be used
00:07:51.472 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:51.472 EAL: Requested device 0000:3f:01.3 cannot be used
00:07:51.472 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:51.473 EAL: Requested device 0000:3f:01.4 cannot be used
00:07:51.473 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:51.473 EAL: Requested device 0000:3f:01.5 cannot be used
00:07:51.473 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:51.473 EAL: Requested device 0000:3f:01.6 cannot be used
00:07:51.473 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:51.473 EAL: Requested device 0000:3f:01.7 cannot be used
00:07:51.473 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:51.473 EAL: Requested device 0000:3f:02.0 cannot be used
00:07:51.473 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:51.473 EAL: Requested device 0000:3f:02.1 cannot be used
00:07:51.473 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:51.473 EAL: Requested device 0000:3f:02.2 cannot be used
00:07:51.473 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:51.473 EAL: Requested device 0000:3f:02.3 cannot be used
00:07:51.473 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:51.473 EAL: Requested device 0000:3f:02.4 cannot be used
00:07:51.473 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:51.473 EAL: Requested device 0000:3f:02.5 cannot be used
00:07:51.473 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:51.473 EAL: Requested device 0000:3f:02.6 cannot be used
00:07:51.473 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:51.473 EAL: Requested device 0000:3f:02.7 cannot be used
00:07:51.473 [2024-07-25 10:49:58.449102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:51.746 [2024-07-25 10:49:58.714155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:52.684 10:49:59 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:52.684 10:49:59 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0
00:07:52.684 10:49:59 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:07:52.684
00:07:52.684 10:49:59 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
INFO: shutting down applications...
00:07:52.684 10:49:59 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:07:52.684 10:49:59 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:07:52.684 10:49:59 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:07:52.684 10:49:59 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3479731 ]]
00:07:52.684 10:49:59 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3479731
00:07:52.684 10:49:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:07:52.684 10:49:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:07:52.684 10:49:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3479731
00:07:52.684 10:49:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:07:53.253 10:50:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:07:53.253 10:50:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:07:53.253 10:50:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3479731
00:07:53.253 10:50:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:07:53.821 10:50:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:07:53.821 10:50:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:07:53.821 10:50:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3479731
00:07:53.821 10:50:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:07:54.390 10:50:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:07:54.390 10:50:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:07:54.390 10:50:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3479731
00:07:54.390 10:50:01 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:07:54.390 10:50:01 json_config_extra_key -- json_config/common.sh@43 -- # break
00:07:54.390 10:50:01 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:07:54.390 10:50:01 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
SPDK target shutdown done
00:07:54.390 10:50:01 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
Success
00:07:54.390
00:07:54.390 real 0m3.475s
00:07:54.390 user 0m2.890s
00:07:54.390 sys 0m0.717s
00:07:54.390 10:50:01 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:54.390 10:50:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:07:54.390 ************************************
00:07:54.390 END TEST json_config_extra_key
00:07:54.390 ************************************
00:07:54.390 10:50:01 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:07:54.390 10:50:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:54.390 10:50:01 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:54.390 10:50:01 -- common/autotest_common.sh@10 -- # set +x
00:07:54.390 ************************************
00:07:54.390 START TEST alias_rpc
00:07:54.390 ************************************
00:07:54.390 10:50:01 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:07:54.390 * Looking for test storage...
00:07:54.390 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc
00:07:54.390 10:50:01 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:07:54.390 10:50:01 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3480417
00:07:54.390 10:50:01 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3480417
00:07:54.390 10:50:01 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt
00:07:54.390 10:50:01 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 3480417 ']'
00:07:54.390 10:50:01 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:54.390 10:50:01 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:54.390 10:50:01 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:54.390 10:50:01 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:54.390 10:50:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:54.649 [2024-07-25 10:50:01.549745] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:07:54.649 [2024-07-25 10:50:01.549865] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3480417 ]
00:07:54.649 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:54.649 EAL: Requested device 0000:3d:01.0 cannot be used
00:07:54.649 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:54.649 EAL: Requested device 0000:3d:01.1 cannot be used
00:07:54.649 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:54.649 EAL: Requested device 0000:3d:01.2 cannot be used
00:07:54.649 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:54.649 EAL: Requested device 0000:3d:01.3 cannot be used
00:07:54.649 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:54.649 EAL: Requested device 0000:3d:01.4 cannot be used
00:07:54.649 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:54.649 EAL: Requested device 0000:3d:01.5 cannot be used
00:07:54.649 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:54.649 EAL: Requested device 0000:3d:01.6 cannot be used
00:07:54.649 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:54.649 EAL: Requested device 0000:3d:01.7 cannot be used
00:07:54.649 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:54.649 EAL: Requested device 0000:3d:02.0 cannot be used
00:07:54.649 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:54.649 EAL: Requested device 0000:3d:02.1 cannot be used
00:07:54.649 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:54.649 EAL: Requested device 0000:3d:02.2 cannot be used
00:07:54.649 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:54.649 EAL: Requested device 0000:3d:02.3 cannot be used
00:07:54.649 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:54.649 EAL: Requested device 0000:3d:02.4 cannot be used
00:07:54.649 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:54.649 EAL: Requested device 0000:3d:02.5 cannot be used
00:07:54.649 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:54.649 EAL: Requested device 0000:3d:02.6 cannot be used
00:07:54.649 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:54.649 EAL: Requested device 0000:3d:02.7 cannot be used
00:07:54.649 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:54.649 EAL: Requested device 0000:3f:01.0 cannot be used
00:07:54.649 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:54.649 EAL: Requested device 0000:3f:01.1 cannot be used
00:07:54.649 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:54.649 EAL: Requested device 0000:3f:01.2 cannot be used
00:07:54.649 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:54.649 EAL: Requested device 0000:3f:01.3 cannot be used
00:07:54.649 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:54.649 EAL: Requested device 0000:3f:01.4 cannot be used
00:07:54.649 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:54.649 EAL: Requested device 0000:3f:01.5 cannot be used
00:07:54.649 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:54.649 EAL: Requested device 0000:3f:01.6 cannot be used
00:07:54.649 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:54.650 EAL: Requested device 0000:3f:01.7 cannot be used
00:07:54.650 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:54.650 EAL: Requested device 0000:3f:02.0 cannot be used
00:07:54.650 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:54.650 EAL: Requested device 0000:3f:02.1 cannot be used
00:07:54.650 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:54.650 EAL: Requested device 0000:3f:02.2 cannot be used
00:07:54.650 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:54.650 EAL: Requested device 0000:3f:02.3 cannot be used
00:07:54.650 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:54.650 EAL: Requested device 0000:3f:02.4 cannot be used
00:07:54.650 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:54.650 EAL: Requested device 0000:3f:02.5 cannot be used
00:07:54.650 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:54.650 EAL: Requested device 0000:3f:02.6 cannot be used
00:07:54.650 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:07:54.650 EAL: Requested device 0000:3f:02.7 cannot be used
00:07:54.908 [2024-07-25 10:50:01.775381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:55.167 [2024-07-25 10:50:02.035638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:56.105 10:50:03 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:56.105 10:50:03 alias_rpc -- common/autotest_common.sh@864 -- # return 0
00:07:56.105 10:50:03 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_config -i
00:07:56.369 10:50:03 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3480417
00:07:56.369 10:50:03 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 3480417 ']'
00:07:56.369 10:50:03 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 3480417
00:07:56.369 10:50:03 alias_rpc -- common/autotest_common.sh@955 -- # uname
00:07:56.369 10:50:03 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:56.369 10:50:03 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3480417
00:07:56.628 10:50:03 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:56.628 10:50:03 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:56.628 10:50:03 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3480417'
killing process with pid 3480417
00:07:56.628 10:50:03 alias_rpc -- common/autotest_common.sh@969 -- # kill 3480417
00:07:56.628 10:50:03 alias_rpc -- common/autotest_common.sh@974 -- # wait 3480417
00:07:59.916
00:07:59.916 real 0m5.435s
00:07:59.916 user 0m5.411s
00:07:59.916 sys 0m0.732s
00:07:59.916 10:50:06 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:59.916 10:50:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:59.916 ************************************
00:07:59.916 END TEST alias_rpc
00:07:59.916 ************************************
00:07:59.917 10:50:06 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]]
00:07:59.917 10:50:06 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/tcp.sh
00:07:59.917 10:50:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:07:59.917 10:50:06 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:59.917 10:50:06 -- common/autotest_common.sh@10 -- # set +x
00:07:59.917 ************************************
00:07:59.917 START TEST spdkcli_tcp
00:07:59.917 ************************************
00:07:59.917 10:50:06 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/tcp.sh
00:07:59.917 * Looking for test storage...
00:07:59.917 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli
00:07:59.917 10:50:06 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/common.sh
00:07:59.917 10:50:06 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:07:59.917 10:50:06 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/clear_config.py
00:07:59.917 10:50:06 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:07:59.917 10:50:06 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998
00:07:59.917 10:50:06 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:07:59.917 10:50:06 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:07:59.917 10:50:06 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable
00:07:59.917 10:50:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:07:59.917 10:50:06 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3481421
00:07:59.917 10:50:06 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3481421
00:07:59.917 10:50:06 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 3481421 ']'
00:07:59.917 10:50:06 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:59.917 10:50:06 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:59.917 10:50:06 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:59.917 10:50:06 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:59.917 10:50:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:07:59.917 10:50:06 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:08:00.176 [2024-07-25 10:50:07.060053] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:08:00.176 [2024-07-25 10:50:07.060179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3481421 ]
00:08:00.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:00.176 EAL: Requested device 0000:3d:01.0 cannot be used
00:08:00.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:00.176 EAL: Requested device 0000:3d:01.1 cannot be used
00:08:00.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:00.176 EAL: Requested device 0000:3d:01.2 cannot be used
00:08:00.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:00.176 EAL: Requested device 0000:3d:01.3 cannot be used
00:08:00.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:00.176 EAL: Requested device 0000:3d:01.4 cannot be used
00:08:00.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:00.176 EAL: Requested device 0000:3d:01.5 cannot be used
00:08:00.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:00.176 EAL: Requested device 0000:3d:01.6 cannot be used
00:08:00.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:00.176 EAL: Requested device 0000:3d:01.7 cannot be used
00:08:00.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:00.176 EAL: Requested device 0000:3d:02.0 cannot be used
00:08:00.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:00.176 EAL: Requested device 0000:3d:02.1 cannot be used
00:08:00.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:00.176 EAL: Requested device 0000:3d:02.2 cannot be used
00:08:00.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:00.176 EAL: Requested device 0000:3d:02.3 cannot be used
00:08:00.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:00.176 EAL: Requested device 0000:3d:02.4 cannot be used
00:08:00.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:00.176 EAL: Requested device 0000:3d:02.5 cannot be used
00:08:00.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:00.176 EAL: Requested device 0000:3d:02.6 cannot be used
00:08:00.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:00.176 EAL: Requested device 0000:3d:02.7 cannot be used
00:08:00.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:00.176 EAL: Requested device 0000:3f:01.0 cannot be used
00:08:00.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:00.176 EAL: Requested device 0000:3f:01.1 cannot be used
00:08:00.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:00.176 EAL: Requested device 0000:3f:01.2 cannot be used
00:08:00.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:00.176 EAL: Requested device 0000:3f:01.3 cannot be used
00:08:00.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:00.176 EAL: Requested device 0000:3f:01.4 cannot be used
00:08:00.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:00.176 EAL: Requested device 0000:3f:01.5 cannot be used
00:08:00.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:00.176 EAL: Requested device 0000:3f:01.6 cannot be used
00:08:00.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:00.176 EAL: Requested device 0000:3f:01.7 cannot be used
00:08:00.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:00.176 EAL: Requested device 0000:3f:02.0 cannot be used
00:08:00.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:00.176 EAL: Requested device 0000:3f:02.1 cannot be used
00:08:00.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:00.176 EAL: Requested device 0000:3f:02.2 cannot be used
00:08:00.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:00.176 EAL: Requested device 0000:3f:02.3 cannot be used
00:08:00.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:00.176 EAL: Requested device 0000:3f:02.4 cannot be used
00:08:00.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:00.176 EAL: Requested device 0000:3f:02.5 cannot be used
00:08:00.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:00.176 EAL: Requested device 0000:3f:02.6 cannot be used
00:08:00.176 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:00.176 EAL: Requested device 0000:3f:02.7 cannot be used
00:08:00.176 [2024-07-25 10:50:07.284026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:00.744 [2024-07-25 10:50:07.563728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:00.744 [2024-07-25 10:50:07.563735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:08:01.681 10:50:08 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:01.681 10:50:08 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0
00:08:01.681 10:50:08 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3481693
00:08:01.681 10:50:08 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:08:01.681 10:50:08 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:01.940 [ 00:08:01.940 "bdev_malloc_delete", 00:08:01.940 "bdev_malloc_create", 00:08:01.940 "bdev_null_resize", 00:08:01.940 "bdev_null_delete", 00:08:01.940 "bdev_null_create", 00:08:01.940 "bdev_nvme_cuse_unregister", 00:08:01.940 "bdev_nvme_cuse_register", 00:08:01.940 "bdev_opal_new_user", 00:08:01.940 "bdev_opal_set_lock_state", 00:08:01.940 "bdev_opal_delete", 00:08:01.940 "bdev_opal_get_info", 00:08:01.940 "bdev_opal_create", 00:08:01.940 "bdev_nvme_opal_revert", 00:08:01.940 "bdev_nvme_opal_init", 00:08:01.940 "bdev_nvme_send_cmd", 00:08:01.940 "bdev_nvme_get_path_iostat", 00:08:01.940 "bdev_nvme_get_mdns_discovery_info", 00:08:01.940 "bdev_nvme_stop_mdns_discovery", 00:08:01.940 "bdev_nvme_start_mdns_discovery", 00:08:01.940 "bdev_nvme_set_multipath_policy", 00:08:01.940 "bdev_nvme_set_preferred_path", 00:08:01.940 "bdev_nvme_get_io_paths", 00:08:01.940 "bdev_nvme_remove_error_injection", 00:08:01.940 "bdev_nvme_add_error_injection", 00:08:01.940 "bdev_nvme_get_discovery_info", 00:08:01.940 "bdev_nvme_stop_discovery", 00:08:01.940 "bdev_nvme_start_discovery", 00:08:01.940 "bdev_nvme_get_controller_health_info", 00:08:01.940 "bdev_nvme_disable_controller", 00:08:01.940 "bdev_nvme_enable_controller", 00:08:01.940 "bdev_nvme_reset_controller", 00:08:01.940 "bdev_nvme_get_transport_statistics", 00:08:01.940 "bdev_nvme_apply_firmware", 00:08:01.940 "bdev_nvme_detach_controller", 00:08:01.940 "bdev_nvme_get_controllers", 00:08:01.941 "bdev_nvme_attach_controller", 00:08:01.941 "bdev_nvme_set_hotplug", 00:08:01.941 "bdev_nvme_set_options", 00:08:01.941 "bdev_passthru_delete", 00:08:01.941 "bdev_passthru_create", 00:08:01.941 "bdev_lvol_set_parent_bdev", 00:08:01.941 "bdev_lvol_set_parent", 00:08:01.941 "bdev_lvol_check_shallow_copy", 00:08:01.941 "bdev_lvol_start_shallow_copy", 00:08:01.941 "bdev_lvol_grow_lvstore", 00:08:01.941 
"bdev_lvol_get_lvols", 00:08:01.941 "bdev_lvol_get_lvstores", 00:08:01.941 "bdev_lvol_delete", 00:08:01.941 "bdev_lvol_set_read_only", 00:08:01.941 "bdev_lvol_resize", 00:08:01.941 "bdev_lvol_decouple_parent", 00:08:01.941 "bdev_lvol_inflate", 00:08:01.941 "bdev_lvol_rename", 00:08:01.941 "bdev_lvol_clone_bdev", 00:08:01.941 "bdev_lvol_clone", 00:08:01.941 "bdev_lvol_snapshot", 00:08:01.941 "bdev_lvol_create", 00:08:01.941 "bdev_lvol_delete_lvstore", 00:08:01.941 "bdev_lvol_rename_lvstore", 00:08:01.941 "bdev_lvol_create_lvstore", 00:08:01.941 "bdev_raid_set_options", 00:08:01.941 "bdev_raid_remove_base_bdev", 00:08:01.941 "bdev_raid_add_base_bdev", 00:08:01.941 "bdev_raid_delete", 00:08:01.941 "bdev_raid_create", 00:08:01.941 "bdev_raid_get_bdevs", 00:08:01.941 "bdev_error_inject_error", 00:08:01.941 "bdev_error_delete", 00:08:01.941 "bdev_error_create", 00:08:01.941 "bdev_split_delete", 00:08:01.941 "bdev_split_create", 00:08:01.941 "bdev_delay_delete", 00:08:01.941 "bdev_delay_create", 00:08:01.941 "bdev_delay_update_latency", 00:08:01.941 "bdev_zone_block_delete", 00:08:01.941 "bdev_zone_block_create", 00:08:01.941 "blobfs_create", 00:08:01.941 "blobfs_detect", 00:08:01.941 "blobfs_set_cache_size", 00:08:01.941 "bdev_crypto_delete", 00:08:01.941 "bdev_crypto_create", 00:08:01.941 "bdev_compress_delete", 00:08:01.941 "bdev_compress_create", 00:08:01.941 "bdev_compress_get_orphans", 00:08:01.941 "bdev_aio_delete", 00:08:01.941 "bdev_aio_rescan", 00:08:01.941 "bdev_aio_create", 00:08:01.941 "bdev_ftl_set_property", 00:08:01.941 "bdev_ftl_get_properties", 00:08:01.941 "bdev_ftl_get_stats", 00:08:01.941 "bdev_ftl_unmap", 00:08:01.941 "bdev_ftl_unload", 00:08:01.941 "bdev_ftl_delete", 00:08:01.941 "bdev_ftl_load", 00:08:01.941 "bdev_ftl_create", 00:08:01.941 "bdev_virtio_attach_controller", 00:08:01.941 "bdev_virtio_scsi_get_devices", 00:08:01.941 "bdev_virtio_detach_controller", 00:08:01.941 "bdev_virtio_blk_set_hotplug", 00:08:01.941 "bdev_iscsi_delete", 
00:08:01.941 "bdev_iscsi_create", 00:08:01.941 "bdev_iscsi_set_options", 00:08:01.941 "accel_error_inject_error", 00:08:01.941 "ioat_scan_accel_module", 00:08:01.941 "dsa_scan_accel_module", 00:08:01.941 "iaa_scan_accel_module", 00:08:01.941 "dpdk_cryptodev_get_driver", 00:08:01.941 "dpdk_cryptodev_set_driver", 00:08:01.941 "dpdk_cryptodev_scan_accel_module", 00:08:01.941 "compressdev_scan_accel_module", 00:08:01.941 "keyring_file_remove_key", 00:08:01.941 "keyring_file_add_key", 00:08:01.941 "keyring_linux_set_options", 00:08:01.941 "iscsi_get_histogram", 00:08:01.941 "iscsi_enable_histogram", 00:08:01.941 "iscsi_set_options", 00:08:01.941 "iscsi_get_auth_groups", 00:08:01.941 "iscsi_auth_group_remove_secret", 00:08:01.941 "iscsi_auth_group_add_secret", 00:08:01.941 "iscsi_delete_auth_group", 00:08:01.941 "iscsi_create_auth_group", 00:08:01.941 "iscsi_set_discovery_auth", 00:08:01.941 "iscsi_get_options", 00:08:01.941 "iscsi_target_node_request_logout", 00:08:01.941 "iscsi_target_node_set_redirect", 00:08:01.941 "iscsi_target_node_set_auth", 00:08:01.941 "iscsi_target_node_add_lun", 00:08:01.941 "iscsi_get_stats", 00:08:01.941 "iscsi_get_connections", 00:08:01.941 "iscsi_portal_group_set_auth", 00:08:01.941 "iscsi_start_portal_group", 00:08:01.941 "iscsi_delete_portal_group", 00:08:01.941 "iscsi_create_portal_group", 00:08:01.941 "iscsi_get_portal_groups", 00:08:01.941 "iscsi_delete_target_node", 00:08:01.941 "iscsi_target_node_remove_pg_ig_maps", 00:08:01.941 "iscsi_target_node_add_pg_ig_maps", 00:08:01.941 "iscsi_create_target_node", 00:08:01.941 "iscsi_get_target_nodes", 00:08:01.941 "iscsi_delete_initiator_group", 00:08:01.941 "iscsi_initiator_group_remove_initiators", 00:08:01.941 "iscsi_initiator_group_add_initiators", 00:08:01.941 "iscsi_create_initiator_group", 00:08:01.941 "iscsi_get_initiator_groups", 00:08:01.941 "nvmf_set_crdt", 00:08:01.941 "nvmf_set_config", 00:08:01.941 "nvmf_set_max_subsystems", 00:08:01.941 "nvmf_stop_mdns_prr", 00:08:01.941 
"nvmf_publish_mdns_prr", 00:08:01.941 "nvmf_subsystem_get_listeners", 00:08:01.941 "nvmf_subsystem_get_qpairs", 00:08:01.941 "nvmf_subsystem_get_controllers", 00:08:01.941 "nvmf_get_stats", 00:08:01.941 "nvmf_get_transports", 00:08:01.941 "nvmf_create_transport", 00:08:01.941 "nvmf_get_targets", 00:08:01.941 "nvmf_delete_target", 00:08:01.941 "nvmf_create_target", 00:08:01.941 "nvmf_subsystem_allow_any_host", 00:08:01.941 "nvmf_subsystem_remove_host", 00:08:01.941 "nvmf_subsystem_add_host", 00:08:01.941 "nvmf_ns_remove_host", 00:08:01.941 "nvmf_ns_add_host", 00:08:01.941 "nvmf_subsystem_remove_ns", 00:08:01.941 "nvmf_subsystem_add_ns", 00:08:01.941 "nvmf_subsystem_listener_set_ana_state", 00:08:01.941 "nvmf_discovery_get_referrals", 00:08:01.941 "nvmf_discovery_remove_referral", 00:08:01.941 "nvmf_discovery_add_referral", 00:08:01.941 "nvmf_subsystem_remove_listener", 00:08:01.941 "nvmf_subsystem_add_listener", 00:08:01.941 "nvmf_delete_subsystem", 00:08:01.941 "nvmf_create_subsystem", 00:08:01.941 "nvmf_get_subsystems", 00:08:01.941 "env_dpdk_get_mem_stats", 00:08:01.941 "nbd_get_disks", 00:08:01.941 "nbd_stop_disk", 00:08:01.941 "nbd_start_disk", 00:08:01.941 "ublk_recover_disk", 00:08:01.941 "ublk_get_disks", 00:08:01.941 "ublk_stop_disk", 00:08:01.941 "ublk_start_disk", 00:08:01.941 "ublk_destroy_target", 00:08:01.941 "ublk_create_target", 00:08:01.941 "virtio_blk_create_transport", 00:08:01.941 "virtio_blk_get_transports", 00:08:01.941 "vhost_controller_set_coalescing", 00:08:01.941 "vhost_get_controllers", 00:08:01.941 "vhost_delete_controller", 00:08:01.941 "vhost_create_blk_controller", 00:08:01.941 "vhost_scsi_controller_remove_target", 00:08:01.941 "vhost_scsi_controller_add_target", 00:08:01.941 "vhost_start_scsi_controller", 00:08:01.941 "vhost_create_scsi_controller", 00:08:01.941 "thread_set_cpumask", 00:08:01.941 "framework_get_governor", 00:08:01.941 "framework_get_scheduler", 00:08:01.941 "framework_set_scheduler", 00:08:01.941 
"framework_get_reactors", 00:08:01.941 "thread_get_io_channels", 00:08:01.941 "thread_get_pollers", 00:08:01.941 "thread_get_stats", 00:08:01.941 "framework_monitor_context_switch", 00:08:01.941 "spdk_kill_instance", 00:08:01.941 "log_enable_timestamps", 00:08:01.941 "log_get_flags", 00:08:01.941 "log_clear_flag", 00:08:01.941 "log_set_flag", 00:08:01.941 "log_get_level", 00:08:01.941 "log_set_level", 00:08:01.941 "log_get_print_level", 00:08:01.941 "log_set_print_level", 00:08:01.941 "framework_enable_cpumask_locks", 00:08:01.941 "framework_disable_cpumask_locks", 00:08:01.941 "framework_wait_init", 00:08:01.941 "framework_start_init", 00:08:01.941 "scsi_get_devices", 00:08:01.942 "bdev_get_histogram", 00:08:01.942 "bdev_enable_histogram", 00:08:01.942 "bdev_set_qos_limit", 00:08:01.942 "bdev_set_qd_sampling_period", 00:08:01.942 "bdev_get_bdevs", 00:08:01.942 "bdev_reset_iostat", 00:08:01.942 "bdev_get_iostat", 00:08:01.942 "bdev_examine", 00:08:01.942 "bdev_wait_for_examine", 00:08:01.942 "bdev_set_options", 00:08:01.942 "notify_get_notifications", 00:08:01.942 "notify_get_types", 00:08:01.942 "accel_get_stats", 00:08:01.942 "accel_set_options", 00:08:01.942 "accel_set_driver", 00:08:01.942 "accel_crypto_key_destroy", 00:08:01.942 "accel_crypto_keys_get", 00:08:01.942 "accel_crypto_key_create", 00:08:01.942 "accel_assign_opc", 00:08:01.942 "accel_get_module_info", 00:08:01.942 "accel_get_opc_assignments", 00:08:01.942 "vmd_rescan", 00:08:01.942 "vmd_remove_device", 00:08:01.942 "vmd_enable", 00:08:01.942 "sock_get_default_impl", 00:08:01.942 "sock_set_default_impl", 00:08:01.942 "sock_impl_set_options", 00:08:01.942 "sock_impl_get_options", 00:08:01.942 "iobuf_get_stats", 00:08:01.942 "iobuf_set_options", 00:08:01.942 "framework_get_pci_devices", 00:08:01.942 "framework_get_config", 00:08:01.942 "framework_get_subsystems", 00:08:01.942 "trace_get_info", 00:08:01.942 "trace_get_tpoint_group_mask", 00:08:01.942 "trace_disable_tpoint_group", 00:08:01.942 
"trace_enable_tpoint_group", 00:08:01.942 "trace_clear_tpoint_mask", 00:08:01.942 "trace_set_tpoint_mask", 00:08:01.942 "keyring_get_keys", 00:08:01.942 "spdk_get_version", 00:08:01.942 "rpc_get_methods" 00:08:01.942 ] 00:08:01.942 10:50:09 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:01.942 10:50:09 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:01.942 10:50:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:01.942 10:50:09 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:01.942 10:50:09 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3481421 00:08:01.942 10:50:09 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 3481421 ']' 00:08:01.942 10:50:09 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 3481421 00:08:01.942 10:50:09 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:08:01.942 10:50:09 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:01.942 10:50:09 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3481421 00:08:02.201 10:50:09 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:02.201 10:50:09 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:02.201 10:50:09 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3481421' 00:08:02.201 killing process with pid 3481421 00:08:02.201 10:50:09 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 3481421 00:08:02.201 10:50:09 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 3481421 00:08:05.490 00:08:05.490 real 0m5.577s 00:08:05.490 user 0m9.784s 00:08:05.490 sys 0m0.741s 00:08:05.490 10:50:12 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:05.490 10:50:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:05.490 ************************************ 00:08:05.490 END TEST spdkcli_tcp 00:08:05.490 
************************************ 00:08:05.490 10:50:12 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:05.490 10:50:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:05.490 10:50:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:05.490 10:50:12 -- common/autotest_common.sh@10 -- # set +x 00:08:05.490 ************************************ 00:08:05.490 START TEST dpdk_mem_utility 00:08:05.490 ************************************ 00:08:05.490 10:50:12 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:05.490 * Looking for test storage... 00:08:05.490 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility 00:08:05.490 10:50:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:08:05.490 10:50:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3482529 00:08:05.490 10:50:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3482529 00:08:05.490 10:50:12 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 3482529 ']' 00:08:05.490 10:50:12 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.490 10:50:12 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:05.490 10:50:12 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:05.490 10:50:12 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:05.490 10:50:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:05.490 10:50:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:08:05.750 [2024-07-25 10:50:12.710273] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:05.750 [2024-07-25 10:50:12.710396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3482529 ] 00:08:06.009 [2024-07-25 10:50:12.935567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.267 [2024-07-25 10:50:13.201748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.649 10:50:14 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:07.649 10:50:14 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:08:07.649 10:50:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:07.649 10:50:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:07.649 10:50:14 dpdk_mem_utility -- common/autotest_common.sh@561 -- #
xtrace_disable 00:08:07.649 10:50:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:07.649 { 00:08:07.649 "filename": "/tmp/spdk_mem_dump.txt" 00:08:07.649 } 00:08:07.649 10:50:14 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.649 10:50:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:08:07.649 DPDK memory size 820.000000 MiB in 1 heap(s) 00:08:07.649 1 heaps totaling size 820.000000 MiB 00:08:07.649 size: 820.000000 MiB heap id: 0 00:08:07.649 end heaps---------- 00:08:07.649 8 mempools totaling size 598.116089 MiB 00:08:07.649 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:07.649 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:07.649 size: 84.521057 MiB name: bdev_io_3482529 00:08:07.649 size: 51.011292 MiB name: evtpool_3482529 00:08:07.649 size: 50.003479 MiB name: msgpool_3482529 00:08:07.649 size: 21.763794 MiB name: PDU_Pool 00:08:07.649 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:07.649 size: 0.026123 MiB name: Session_Pool 00:08:07.649 end mempools------- 00:08:07.649 201 memzones totaling size 4.176453 MiB 00:08:07.649 size: 1.000366 MiB name: RG_ring_0_3482529 00:08:07.649 size: 1.000366 MiB name: RG_ring_1_3482529 00:08:07.649 size: 1.000366 MiB name: RG_ring_4_3482529 00:08:07.649 size: 1.000366 MiB name: RG_ring_5_3482529 00:08:07.649 size: 0.125366 MiB name: RG_ring_2_3482529 00:08:07.649 size: 0.015991 MiB name: RG_ring_3_3482529 00:08:07.649 size: 0.001160 MiB name: QAT_SYM_CAPA_GEN_1 00:08:07.649 size: 0.000305 MiB name: 0000:1a:01.0_qat 00:08:07.649 size: 0.000305 MiB name: 0000:1a:01.1_qat 00:08:07.649 size: 0.000305 MiB name: 0000:1a:01.2_qat 00:08:07.649 size: 0.000305 MiB name: 0000:1a:01.3_qat 00:08:07.649 size: 0.000305 MiB name: 0000:1a:01.4_qat 00:08:07.649 size: 0.000305 MiB name: 0000:1a:01.5_qat 00:08:07.649 size: 0.000305 MiB name: 0000:1a:01.6_qat 00:08:07.649 size: 
0.000305 MiB name: 0000:1a:01.7_qat 00:08:07.649 size: 0.000305 MiB name: 0000:1a:02.0_qat 00:08:07.650 size: 0.000305 MiB name: 0000:1a:02.1_qat 00:08:07.650 size: 0.000305 MiB name: 0000:1a:02.2_qat 00:08:07.650 size: 0.000305 MiB name: 0000:1a:02.3_qat 00:08:07.650 size: 0.000305 MiB name: 0000:1a:02.4_qat 00:08:07.650 size: 0.000305 MiB name: 0000:1a:02.5_qat 00:08:07.650 size: 0.000305 MiB name: 0000:1a:02.6_qat 00:08:07.650 size: 0.000305 MiB name: 0000:1a:02.7_qat 00:08:07.650 size: 0.000305 MiB name: 0000:1c:01.0_qat 00:08:07.650 size: 0.000305 MiB name: 0000:1c:01.1_qat 00:08:07.650 size: 0.000305 MiB name: 0000:1c:01.2_qat 00:08:07.650 size: 0.000305 MiB name: 0000:1c:01.3_qat 00:08:07.650 size: 0.000305 MiB name: 0000:1c:01.4_qat 00:08:07.650 size: 0.000305 MiB name: 0000:1c:01.5_qat 00:08:07.650 size: 0.000305 MiB name: 0000:1c:01.6_qat 00:08:07.650 size: 0.000305 MiB name: 0000:1c:01.7_qat 00:08:07.650 size: 0.000305 MiB name: 0000:1c:02.0_qat 00:08:07.650 size: 0.000305 MiB name: 0000:1c:02.1_qat 00:08:07.650 size: 0.000305 MiB name: 0000:1c:02.2_qat 00:08:07.650 size: 0.000305 MiB name: 0000:1c:02.3_qat 00:08:07.650 size: 0.000305 MiB name: 0000:1c:02.4_qat 00:08:07.650 size: 0.000305 MiB name: 0000:1c:02.5_qat 00:08:07.650 size: 0.000305 MiB name: 0000:1c:02.6_qat 00:08:07.650 size: 0.000305 MiB name: 0000:1c:02.7_qat 00:08:07.650 size: 0.000305 MiB name: 0000:1e:01.0_qat 00:08:07.650 size: 0.000305 MiB name: 0000:1e:01.1_qat 00:08:07.650 size: 0.000305 MiB name: 0000:1e:01.2_qat 00:08:07.650 size: 0.000305 MiB name: 0000:1e:01.3_qat 00:08:07.650 size: 0.000305 MiB name: 0000:1e:01.4_qat 00:08:07.650 size: 0.000305 MiB name: 0000:1e:01.5_qat 00:08:07.650 size: 0.000305 MiB name: 0000:1e:01.6_qat 00:08:07.650 size: 0.000305 MiB name: 0000:1e:01.7_qat 00:08:07.650 size: 0.000305 MiB name: 0000:1e:02.0_qat 00:08:07.650 size: 0.000305 MiB name: 0000:1e:02.1_qat 00:08:07.650 size: 0.000305 MiB name: 0000:1e:02.2_qat 00:08:07.650 size: 0.000305 MiB name: 
0000:1e:02.3_qat 00:08:07.650 size: 0.000305 MiB name: 0000:1e:02.4_qat 00:08:07.650 size: 0.000305 MiB name: 0000:1e:02.5_qat 00:08:07.650 size: 0.000305 MiB name: 0000:1e:02.6_qat 00:08:07.650 size: 0.000305 MiB name: 0000:1e:02.7_qat 00:08:07.650 size: 0.000183 MiB name: QAT_ASYM_CAPA_GEN_1 00:08:07.650 size: 0.000122 MiB name: rte_cryptodev_data_0 00:08:07.650 size: 0.000122 MiB name: rte_cryptodev_data_1 00:08:07.650 size: 0.000122 MiB name: rte_compressdev_data_0 00:08:07.650 size: 0.000122 MiB name: rte_cryptodev_data_2 00:08:07.650 size: 0.000122 MiB name: rte_cryptodev_data_3 00:08:07.650 size: 0.000122 MiB name: rte_compressdev_data_1 00:08:07.650 size: 0.000122 MiB name: rte_cryptodev_data_4 00:08:07.650 size: 0.000122 MiB name: rte_cryptodev_data_5 00:08:07.650 size: 0.000122 MiB name: rte_compressdev_data_2 00:08:07.650 size: 0.000122 MiB name: rte_cryptodev_data_6 00:08:07.650 size: 0.000122 MiB name: rte_cryptodev_data_7 00:08:07.650 size: 0.000122 MiB name: rte_compressdev_data_3 00:08:07.650 size: 0.000122 MiB name: rte_cryptodev_data_8 00:08:07.650 size: 0.000122 MiB name: rte_cryptodev_data_9 00:08:07.650 size: 0.000122 MiB name: rte_compressdev_data_4 00:08:07.650 size: 0.000122 MiB name: rte_cryptodev_data_10 00:08:07.650 size: 0.000122 MiB name: rte_cryptodev_data_11 00:08:07.650 size: 0.000122 MiB name: rte_compressdev_data_5 00:08:07.650 size: 0.000122 MiB name: rte_cryptodev_data_12 00:08:07.650 size: 0.000122 MiB name: rte_cryptodev_data_13 00:08:07.650 size: 0.000122 MiB name: rte_compressdev_data_6 00:08:07.650 size: 0.000122 MiB name: rte_cryptodev_data_14 00:08:07.650 size: 0.000122 MiB name: rte_cryptodev_data_15 00:08:07.650 size: 0.000122 MiB name: rte_compressdev_data_7 00:08:07.650 size: 0.000122 MiB name: rte_cryptodev_data_16 00:08:07.650 size: 0.000122 MiB name: rte_cryptodev_data_17 00:08:07.650 size: 0.000122 MiB name: rte_compressdev_data_8 00:08:07.650 size: 0.000122 MiB name: rte_cryptodev_data_18 00:08:07.650 size: 
0.000122 MiB name: rte_cryptodev_data_19 through rte_cryptodev_data_95 (77 memzones, 0.000122 MiB each, logged 00:08:07.650-651)
size: 0.000122 MiB name: rte_compressdev_data_9 through rte_compressdev_data_47 (39 memzones, 0.000122 MiB each)
size: 0.000061 MiB name: QAT_COMP_CAPA_GEN_1
00:08:07.651 end memzones-------
00:08:07.651 10:50:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:08:07.651 heap id: 0 total size: 820.000000 MiB number of busy elements: 623 number of free elements: 17
00:08:07.651 list of free elements.
size: 17.732056 MiB (17 free elements, logged at 00:08:07.651)
element at address: 0x200000400000 with size: 1.999451 MiB
element at address: 0x200000800000 with size: 1.996887 MiB
element at address: 0x200007000000 with size: 1.995972 MiB
element at address: 0x20000b200000 with size: 1.995972 MiB
element at address: 0x200019100040 with size: 0.999939 MiB
element at address: 0x200019500040 with size: 0.999939 MiB
element at address: 0x200019900040 with size: 0.999939 MiB
element at address: 0x200019600000 with size: 0.999329 MiB
element at address: 0x200003e00000 with size: 0.996338 MiB
element at address: 0x200032200000 with size: 0.994324 MiB
element at address: 0x200018e00000 with size: 0.959900 MiB
element at address: 0x20001b000000 with size: 0.583191 MiB
element at address: 0x200019200000 with size: 0.491150 MiB
element at address: 0x200019a00000 with size: 0.485657 MiB
element at address: 0x200013800000 with size: 0.467651 MiB
element at address: 0x200028400000 with size: 0.393616 MiB
element at address: 0x200003a00000 with size: 0.372803 MiB
list of standard malloc elements.
size: 199.934204 MiB (logged 00:08:07.651-654)
element at address: 0x20000b3fef80 with size: 132.000183 MiB
element at address: 0x2000071fef80 with size: 64.000183 MiB
element at address: 0x200018ffff80 with size: 1.000183 MiB
element at address: 0x2000193fff80 with size: 1.000183 MiB
element at address: 0x2000197fff80 with size: 1.000183 MiB
element at address: 0x2000003d9e80 with size: 0.140808 MiB
element at address: 0x200000207380 with size: 0.062683 MiB
element at address: 0x2000003fdf40 with size: 0.007996 MiB
elements at addresses 0x2000003239c0 through 0x2000003d8c40 with size: 0.004456 MiB each
elements at addresses 0x200000321840 through 0x2000003d7b80 with size: 0.004089 MiB each
element at address: 0x20000b1ff040 with size: 0.000427 MiB
element at address: 0x200000207200 with size: 0.000366 MiB
element at address: 0x2000137ff040 with size: 0.000305 MiB
elements at addresses 0x200000200000 through 0x200000207100 (step 0x100) with size: 0.000244 MiB each
elements at addresses 0x200000217440 through 0x200000218240 (step 0x100) with size: 0.000244 MiB each
elements at addresses 0x20000021c580 through 0x20000021d780 (step 0x100) with size: 0.000244 MiB each
elements at addresses 0x20000021da00 through 0x20000021eb00 (step 0x100) with size: 0.000244 MiB each
elements at addresses 0x200000320d80 through 0x200000394380 with size: 0.000244 MiB each (listing truncated in the captured log)
00:08:07.654 element at address: 0x2000003945c0 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003946c0 with size: 0.000244 MiB 00:08:07.654 element at address: 0x200000394900 with size: 0.000244 MiB 00:08:07.654 element at address: 0x200000398100 with size: 0.000244 MiB 00:08:07.654 element at address: 0x200000398340 with size: 0.000244 MiB 00:08:07.654 element at address: 0x200000398440 with size: 0.000244 MiB 00:08:07.654 element at address: 0x200000398680 with size: 0.000244 MiB 00:08:07.654 element at address: 0x20000039be80 with size: 0.000244 MiB 00:08:07.654 element at address: 0x20000039c0c0 with size: 0.000244 MiB 00:08:07.654 element at address: 0x20000039c1c0 with size: 0.000244 MiB 00:08:07.654 element at address: 0x20000039c400 with size: 0.000244 MiB 00:08:07.654 element at address: 0x20000039fc00 with size: 0.000244 MiB 00:08:07.654 element at address: 0x20000039fe40 with size: 0.000244 MiB 00:08:07.654 element at address: 0x20000039ff40 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003a0180 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003a3980 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003a3bc0 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003a3cc0 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003a3f00 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003a7700 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003a7940 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003a7a40 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003a7c80 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003ab480 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003ab6c0 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003ab7c0 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003aba00 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003af200 with 
size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003af440 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003af540 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003af780 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003b2f80 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003b31c0 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003b32c0 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003b3500 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003b6d00 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003b6f40 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003b7040 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003b7280 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003baa80 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003bacc0 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003badc0 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003bb000 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003be800 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003bea40 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003beb40 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003bed80 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003c2580 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003c27c0 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003c28c0 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003c2b00 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003c6300 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003c6540 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003c6640 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003c6880 with size: 0.000244 MiB 00:08:07.654 element at address: 
0x2000003ca080 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003ca2c0 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003ca3c0 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003ca600 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003cde00 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003ce040 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003ce140 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003ce380 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003d1b80 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003d1dc0 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003d1ec0 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003d2100 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003d5a00 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003d61c0 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003d62c0 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000003d6680 with size: 0.000244 MiB 00:08:07.654 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:08:07.654 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:08:07.654 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:08:07.654 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:08:07.654 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:08:07.654 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:08:07.654 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:08:07.654 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:08:07.654 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:08:07.654 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:08:07.654 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:08:07.654 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:08:07.654 
element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:08:07.654 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:08:07.654 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:08:07.655 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:08:07.655 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:08:07.655 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:08:07.655 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:08:07.655 element at address: 0x200013877b80 with size: 0.000244 MiB 00:08:07.655 element at address: 0x200013877c80 with size: 0.000244 MiB 00:08:07.655 element at address: 0x200013877d80 with size: 0.000244 MiB 00:08:07.655 element at address: 0x200013877e80 with size: 0.000244 MiB 00:08:07.655 element at address: 0x200013877f80 with size: 0.000244 MiB 00:08:07.655 element at address: 0x200013878080 with size: 0.000244 MiB 00:08:07.655 element at address: 0x200013878180 with size: 0.000244 MiB 00:08:07.655 element at address: 0x200013878280 with size: 0.000244 MiB 00:08:07.655 element at address: 0x200013878380 with size: 0.000244 MiB 00:08:07.655 element at address: 0x200013878480 with size: 0.000244 MiB 00:08:07.655 element at address: 0x200013878580 with size: 0.000244 MiB 00:08:07.655 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:08:07.655 element at address: 0x200028464c40 with size: 0.000244 
MiB 00:08:07.655 element at address: 0x200028464d40 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846ba00 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846be80 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846c080 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846c180 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846c280 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846c380 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846c480 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846c580 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846c680 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846c780 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846c880 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846c980 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846d080 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846d180 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846d280 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846d380 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846d480 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846d580 
with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846d680 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846d780 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846d880 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846d980 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846da80 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846db80 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846de80 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846df80 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846e080 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846e180 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846e280 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846e380 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846e480 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846e580 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846e680 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846e780 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846e880 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846e980 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846f080 with size: 0.000244 MiB 00:08:07.655 element at 
address: 0x20002846f180 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846f280 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846f380 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846f480 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846f580 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846f680 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846f780 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846f880 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846f980 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:08:07.655 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:08:07.655 list of memzone associated elements. 
00:08:07.655 size: 602.333740 MiB
00:08:07.655 element at address: 0x20001b0954c0 with size: 211.416809 MiB
00:08:07.655 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:08:07.655 element at address: 0x20002846ff80 with size: 157.562622 MiB
00:08:07.655 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:08:07.655 element at address: 0x2000139fab40 with size: 84.020691 MiB
00:08:07.655 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3482529_0
00:08:07.655 element at address: 0x2000009ff340 with size: 48.003113 MiB
00:08:07.655 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3482529_0
00:08:07.655 element at address: 0x200003fff340 with size: 48.003113 MiB
00:08:07.655 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3482529_0
00:08:07.655 element at address: 0x200019bbe900 with size: 20.255615 MiB
00:08:07.655 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:08:07.655 element at address: 0x2000323feb00 with size: 18.005127 MiB
00:08:07.655 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:08:07.655 element at address: 0x2000005ffdc0 with size: 2.000549 MiB
00:08:07.655 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3482529
00:08:07.655 element at address: 0x200003bffdc0 with size: 2.000549 MiB
00:08:07.655 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3482529
00:08:07.655 element at address: 0x20000021ec00 with size: 1.008179 MiB
00:08:07.655 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3482529
00:08:07.655 element at address: 0x2000192fde00 with size: 1.008179 MiB
00:08:07.655 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:08:07.655 element at address: 0x200019abc780 with size: 1.008179 MiB
00:08:07.655 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:08:07.655 element at address: 0x200018efde00 with size: 1.008179 MiB
00:08:07.655 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:08:07.655 element at address: 0x2000138f89c0 with size: 1.008179 MiB
00:08:07.655 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:08:07.655 element at address: 0x200003eff100 with size: 1.000549 MiB
00:08:07.655 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3482529
00:08:07.655 element at address: 0x200003affb80 with size: 1.000549 MiB
00:08:07.655 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3482529
00:08:07.655 element at address: 0x2000196ffd40 with size: 1.000549 MiB
00:08:07.655 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3482529
00:08:07.655 element at address: 0x2000322fe8c0 with size: 1.000549 MiB
00:08:07.655 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3482529
00:08:07.655 element at address: 0x200003a5f700 with size: 0.500549 MiB
00:08:07.655 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3482529
00:08:07.655 element at address: 0x20001927dbc0 with size: 0.500549 MiB
00:08:07.655 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:08:07.655 element at address: 0x200013878680 with size: 0.500549 MiB
00:08:07.655 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:08:07.655 element at address: 0x200019a7c540 with size: 0.250549 MiB
00:08:07.655 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:08:07.655 element at address: 0x200003adf940 with size: 0.125549 MiB
00:08:07.655 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3482529
00:08:07.655 element at address: 0x200018ef5bc0 with size: 0.031799 MiB
00:08:07.655 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:08:07.655 element at address: 0x200028464e40 with size: 0.023804 MiB
00:08:07.655 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:08:07.655 element at address: 0x200000218340 with size: 0.016174 MiB
00:08:07.655 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3482529
00:08:07.655 element at address: 0x20002846afc0 with size: 0.002502 MiB
00:08:07.655 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:08:07.656 element at address: 0x2000003d5c40 with size: 0.001343 MiB
00:08:07.656 associated memzone info: size: 0.001160 MiB name: QAT_SYM_CAPA_GEN_1
00:08:07.656 [48 elements of 0.000488 MiB each, memzone names 0000:1a:01.0_qat through 0000:1e:02.7_qat, elided]
00:08:07.656 element at address: 0x2000003d6500 with size: 0.000366 MiB
00:08:07.656 associated memzone info: size: 0.000183 MiB name: QAT_ASYM_CAPA_GEN_1
00:08:07.656 element at address: 0x20000021d880 with size: 0.000366 MiB
00:08:07.656 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3482529
00:08:07.656 element at address: 0x2000137ffd80 with size: 0.000366 MiB
00:08:07.656 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3482529
00:08:07.656 element at address: 0x20002846bb00 with size: 0.000366 MiB
00:08:07.656 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:08:07.656 [elements of 0.000305 MiB each, memzone names rte_cryptodev_data_0 through rte_cryptodev_data_31 and rte_compressdev_data_0 through rte_compressdev_data_15, elided]
00:08:07.657 element at address: 0x200000398780 with size: 0.000305 MiB
00:08:07.657 associated memzone info: size:
0.000122 MiB name: rte_cryptodev_data_32 00:08:07.657 element at address: 0x200000398540 with size: 0.000305 MiB 00:08:07.657 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_33 00:08:07.657 element at address: 0x200000398200 with size: 0.000305 MiB 00:08:07.657 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_16 00:08:07.657 element at address: 0x200000394a00 with size: 0.000305 MiB 00:08:07.657 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_34 00:08:07.657 element at address: 0x2000003947c0 with size: 0.000305 MiB 00:08:07.657 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_35 00:08:07.657 element at address: 0x200000394480 with size: 0.000305 MiB 00:08:07.657 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_17 00:08:07.657 element at address: 0x200000390c80 with size: 0.000305 MiB 00:08:07.657 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_36 00:08:07.657 element at address: 0x200000390a40 with size: 0.000305 MiB 00:08:07.657 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_37 00:08:07.657 element at address: 0x200000390700 with size: 0.000305 MiB 00:08:07.657 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_18 00:08:07.657 element at address: 0x20000038cf00 with size: 0.000305 MiB 00:08:07.657 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_38 00:08:07.657 element at address: 0x20000038ccc0 with size: 0.000305 MiB 00:08:07.657 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_39 00:08:07.657 element at address: 0x20000038c980 with size: 0.000305 MiB 00:08:07.657 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_19 00:08:07.657 element at address: 0x200000389180 with size: 0.000305 MiB 00:08:07.657 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_40 00:08:07.657 element at address: 0x200000388f40 with size: 
0.000305 MiB 00:08:07.657 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_41 00:08:07.657 element at address: 0x200000388c00 with size: 0.000305 MiB 00:08:07.657 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_20 00:08:07.657 element at address: 0x200000385400 with size: 0.000305 MiB 00:08:07.657 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_42 00:08:07.657 element at address: 0x2000003851c0 with size: 0.000305 MiB 00:08:07.657 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_43 00:08:07.657 element at address: 0x200000384e80 with size: 0.000305 MiB 00:08:07.657 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_21 00:08:07.657 element at address: 0x200000381680 with size: 0.000305 MiB 00:08:07.657 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_44 00:08:07.657 element at address: 0x200000381440 with size: 0.000305 MiB 00:08:07.657 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_45 00:08:07.657 element at address: 0x200000381100 with size: 0.000305 MiB 00:08:07.657 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_22 00:08:07.657 element at address: 0x20000037d900 with size: 0.000305 MiB 00:08:07.657 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_46 00:08:07.657 element at address: 0x20000037d6c0 with size: 0.000305 MiB 00:08:07.657 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_47 00:08:07.657 element at address: 0x20000037d380 with size: 0.000305 MiB 00:08:07.657 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_23 00:08:07.657 element at address: 0x200000379b80 with size: 0.000305 MiB 00:08:07.657 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_48 00:08:07.657 element at address: 0x200000379940 with size: 0.000305 MiB 00:08:07.657 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_49 
00:08:07.657 element at address: 0x200000379600 with size: 0.000305 MiB 00:08:07.657 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_24 00:08:07.657 element at address: 0x200000375e00 with size: 0.000305 MiB 00:08:07.657 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_50 00:08:07.657 element at address: 0x200000375bc0 with size: 0.000305 MiB 00:08:07.657 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_51 00:08:07.657 element at address: 0x200000375880 with size: 0.000305 MiB 00:08:07.657 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_25 00:08:07.657 element at address: 0x200000372080 with size: 0.000305 MiB 00:08:07.657 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_52 00:08:07.657 element at address: 0x200000371e40 with size: 0.000305 MiB 00:08:07.657 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_53 00:08:07.657 element at address: 0x200000371b00 with size: 0.000305 MiB 00:08:07.657 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_26 00:08:07.657 element at address: 0x20000036e300 with size: 0.000305 MiB 00:08:07.657 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_54 00:08:07.657 element at address: 0x20000036e0c0 with size: 0.000305 MiB 00:08:07.657 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_55 00:08:07.657 element at address: 0x20000036dd80 with size: 0.000305 MiB 00:08:07.657 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_27 00:08:07.657 element at address: 0x20000036a580 with size: 0.000305 MiB 00:08:07.657 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_56 00:08:07.657 element at address: 0x20000036a340 with size: 0.000305 MiB 00:08:07.657 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_57 00:08:07.657 element at address: 0x20000036a000 with size: 0.000305 MiB 00:08:07.657 associated memzone 
info: size: 0.000122 MiB name: rte_compressdev_data_28 00:08:07.657 element at address: 0x200000366800 with size: 0.000305 MiB 00:08:07.657 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_58 00:08:07.657 element at address: 0x2000003665c0 with size: 0.000305 MiB 00:08:07.657 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_59 00:08:07.657 element at address: 0x200000366280 with size: 0.000305 MiB 00:08:07.657 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_29 00:08:07.657 element at address: 0x200000362a80 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_60 00:08:07.658 element at address: 0x200000362840 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_61 00:08:07.658 element at address: 0x200000362500 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_30 00:08:07.658 element at address: 0x20000035ed00 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_62 00:08:07.658 element at address: 0x20000035eac0 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_63 00:08:07.658 element at address: 0x20000035e780 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_31 00:08:07.658 element at address: 0x20000035af80 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_64 00:08:07.658 element at address: 0x20000035ad40 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_65 00:08:07.658 element at address: 0x20000035aa00 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_32 00:08:07.658 element at address: 0x200000357200 with 
size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_66 00:08:07.658 element at address: 0x200000356fc0 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_67 00:08:07.658 element at address: 0x200000356c80 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_33 00:08:07.658 element at address: 0x200000353480 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_68 00:08:07.658 element at address: 0x200000353240 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_69 00:08:07.658 element at address: 0x200000352f00 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_34 00:08:07.658 element at address: 0x20000034f700 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_70 00:08:07.658 element at address: 0x20000034f4c0 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_71 00:08:07.658 element at address: 0x20000034f180 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_35 00:08:07.658 element at address: 0x20000034b980 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_72 00:08:07.658 element at address: 0x20000034b740 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_73 00:08:07.658 element at address: 0x20000034b400 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_36 00:08:07.658 element at address: 0x200000347c00 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_74 
00:08:07.658 element at address: 0x2000003479c0 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_75 00:08:07.658 element at address: 0x200000347680 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_37 00:08:07.658 element at address: 0x200000343e80 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_76 00:08:07.658 element at address: 0x200000343c40 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_77 00:08:07.658 element at address: 0x200000343900 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_38 00:08:07.658 element at address: 0x200000340100 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_78 00:08:07.658 element at address: 0x20000033fec0 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_79 00:08:07.658 element at address: 0x20000033fb80 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_39 00:08:07.658 element at address: 0x20000033c380 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_80 00:08:07.658 element at address: 0x20000033c140 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_81 00:08:07.658 element at address: 0x20000033be00 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_40 00:08:07.658 element at address: 0x200000338600 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_82 00:08:07.658 element at address: 0x2000003383c0 with size: 0.000305 MiB 00:08:07.658 associated memzone 
info: size: 0.000122 MiB name: rte_cryptodev_data_83 00:08:07.658 element at address: 0x200000338080 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_41 00:08:07.658 element at address: 0x200000334880 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_84 00:08:07.658 element at address: 0x200000334640 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_85 00:08:07.658 element at address: 0x200000334300 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_42 00:08:07.658 element at address: 0x200000330b00 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_86 00:08:07.658 element at address: 0x2000003308c0 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_87 00:08:07.658 element at address: 0x200000330580 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_43 00:08:07.658 element at address: 0x20000032cd80 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_88 00:08:07.658 element at address: 0x20000032cb40 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_89 00:08:07.658 element at address: 0x20000032c800 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_44 00:08:07.658 element at address: 0x200000329000 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_90 00:08:07.658 element at address: 0x200000328dc0 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_91 00:08:07.658 element at address: 0x200000328a80 with 
size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_45 00:08:07.658 element at address: 0x200000325280 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_92 00:08:07.658 element at address: 0x200000325040 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_93 00:08:07.658 element at address: 0x200000324d00 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_46 00:08:07.658 element at address: 0x200000321500 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_94 00:08:07.658 element at address: 0x2000003212c0 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_95 00:08:07.658 element at address: 0x200000320f80 with size: 0.000305 MiB 00:08:07.658 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_47 00:08:07.658 element at address: 0x2000003d5900 with size: 0.000244 MiB 00:08:07.658 associated memzone info: size: 0.000061 MiB name: QAT_COMP_CAPA_GEN_1 00:08:07.658 10:50:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:07.658 10:50:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3482529 00:08:07.658 10:50:14 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 3482529 ']' 00:08:07.658 10:50:14 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 3482529 00:08:07.658 10:50:14 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:08:07.658 10:50:14 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:07.658 10:50:14 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3482529 00:08:07.658 10:50:14 dpdk_mem_utility -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:08:07.658 10:50:14 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:07.658 10:50:14 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3482529' 00:08:07.658 killing process with pid 3482529 00:08:07.658 10:50:14 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 3482529 00:08:07.658 10:50:14 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 3482529 00:08:10.973 00:08:10.973 real 0m5.369s 00:08:10.973 user 0m5.225s 00:08:10.973 sys 0m0.723s 00:08:10.973 10:50:17 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:10.973 10:50:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:10.973 ************************************ 00:08:10.973 END TEST dpdk_mem_utility 00:08:10.973 ************************************ 00:08:10.973 10:50:17 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event.sh 00:08:10.973 10:50:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:10.973 10:50:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:10.973 10:50:17 -- common/autotest_common.sh@10 -- # set +x 00:08:10.973 ************************************ 00:08:10.973 START TEST event 00:08:10.973 ************************************ 00:08:10.973 10:50:17 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event.sh 00:08:10.973 * Looking for test storage... 
00:08:10.973 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event 00:08:10.973 10:50:18 event -- event/event.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:08:10.973 10:50:18 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:10.973 10:50:18 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:10.973 10:50:18 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:10.973 10:50:18 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:10.973 10:50:18 event -- common/autotest_common.sh@10 -- # set +x 00:08:10.973 ************************************ 00:08:10.974 START TEST event_perf 00:08:10.974 ************************************ 00:08:10.974 10:50:18 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:11.233 Running I/O for 1 seconds...[2024-07-25 10:50:18.134667] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:11.233 [2024-07-25 10:50:18.134774] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3483393 ] 00:08:11.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.233 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:11.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.233 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:11.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.233 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:11.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.233 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:11.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.233 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:11.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.233 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:11.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.233 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:11.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.233 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:11.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.233 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:11.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.233 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:11.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.233 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:11.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.233 EAL: Requested device 0000:3d:02.3 cannot be used 
00:08:11.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.233 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:11.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.233 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:11.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.233 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:11.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.233 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:11.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.233 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:11.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.233 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:11.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.233 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:11.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.233 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:11.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.233 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:11.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.233 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:11.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.233 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:11.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.233 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:11.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.233 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:11.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.233 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:11.233 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.233 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:11.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.233 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:11.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.233 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:11.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.233 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:11.233 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.233 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:11.234 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:11.234 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:11.493 [2024-07-25 10:50:18.362243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:11.752 [2024-07-25 10:50:18.662067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.752 [2024-07-25 10:50:18.662149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:11.752 [2024-07-25 10:50:18.662247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.752 [2024-07-25 10:50:18.662254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:13.130 Running I/O for 1 seconds... 00:08:13.130 lcore 0: 178432 00:08:13.130 lcore 1: 178432 00:08:13.130 lcore 2: 178434 00:08:13.130 lcore 3: 178432 00:08:13.130 done. 
00:08:13.130 00:08:13.130 real 0m2.138s 00:08:13.130 user 0m4.869s 00:08:13.130 sys 0m0.260s 00:08:13.130 10:50:20 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:13.130 10:50:20 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:13.130 ************************************ 00:08:13.130 END TEST event_perf 00:08:13.130 ************************************ 00:08:13.390 10:50:20 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:08:13.390 10:50:20 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:13.390 10:50:20 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:13.390 10:50:20 event -- common/autotest_common.sh@10 -- # set +x 00:08:13.390 ************************************ 00:08:13.390 START TEST event_reactor 00:08:13.390 ************************************ 00:08:13.390 10:50:20 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:08:13.390 [2024-07-25 10:50:20.357347] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:13.390 [2024-07-25 10:50:20.357449] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3483929 ] 00:08:13.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.390 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:13.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.390 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:13.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.390 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:13.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.390 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:13.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.390 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:13.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.390 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:13.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.390 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:13.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.390 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:13.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.390 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:13.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.390 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:13.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.390 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:13.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:13.390 EAL: Requested device 0000:3d:02.3 cannot be used 
00:08:13.649 [2024-07-25 10:50:20.582192] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:13.909 [2024-07-25 10:50:20.862487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:15.287 test_start
00:08:15.287 oneshot
00:08:15.287 tick 100
00:08:15.287 tick 100
00:08:15.287 tick 250
00:08:15.287 tick 100
00:08:15.287 tick 100
00:08:15.287 tick 100
00:08:15.287 tick 250
00:08:15.287 tick 500
00:08:15.287 tick 100
00:08:15.287 tick 100
00:08:15.287 tick 250
00:08:15.287 tick 100
00:08:15.287 tick 100
00:08:15.287 test_end
00:08:15.547
00:08:15.547 real 0m2.109s
00:08:15.547 user 0m1.857s
00:08:15.547 sys 0m0.242s
00:08:15.547 10:50:22 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:15.547 10:50:22 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:08:15.547 ************************************
00:08:15.547 END TEST event_reactor
00:08:15.547 ************************************
00:08:15.547 10:50:22 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:08:15.547 10:50:22 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:08:15.547 10:50:22 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:15.547 10:50:22 event -- common/autotest_common.sh@10 -- # set +x
00:08:15.547 ************************************
00:08:15.547 START TEST event_reactor_perf
00:08:15.547 ************************************
00:08:15.547 10:50:22 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:08:15.547 [2024-07-25 10:50:22.542541] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:08:15.547 [2024-07-25 10:50:22.542649] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3484219 ]
00:08:15.807 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:15.807 EAL: Requested device 0000:3d:01.0 cannot be used
00:08:15.808 [2024-07-25 10:50:22.772035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:16.067 [2024-07-25 10:50:23.033796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:17.454 test_start
00:08:17.454 test_end
00:08:17.454 Performance: 273983 events per second
00:08:17.454
00:08:17.454 real 0m2.076s
00:08:17.454 user 0m1.820s
00:08:17.454 sys 0m0.245s
00:08:17.454 10:50:24 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:17.454
10:50:24 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:08:17.454 ************************************
00:08:17.454 END TEST event_reactor_perf
00:08:17.454 ************************************
00:08:17.714 10:50:24 event -- event/event.sh@49 -- # uname -s
00:08:17.714 10:50:24 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:08:17.714 10:50:24 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler.sh
10:50:24 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
10:50:24 event -- common/autotest_common.sh@1107 -- # xtrace_disable
10:50:24 event -- common/autotest_common.sh@10 -- # set +x
00:08:17.714 ************************************
00:08:17.714 START TEST event_scheduler
00:08:17.714 ************************************
00:08:17.714 10:50:24 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:08:17.714 * Looking for test storage...
00:08:17.714 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler
00:08:17.714 10:50:24 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:08:17.714 10:50:24 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3484688
00:08:17.714 10:50:24 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:08:17.714 10:50:24 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:08:17.714 10:50:24 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3484688
00:08:17.714 10:50:24 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 3484688 ']'
00:08:17.714 10:50:24 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:17.714 10:50:24 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:17.714 10:50:24 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:17.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
10:50:24 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:17.714 10:50:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:08:17.974 [2024-07-25 10:50:24.883500] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:08:17.974 [2024-07-25 10:50:24.883625] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3484688 ]
00:08:17.974 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:17.974 EAL: Requested device 0000:3d:01.0 cannot be used
00:08:17.974 [2024-07-25 10:50:25.069554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:18.233 [2024-07-25 10:50:25.275567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:18.233 [2024-07-25 10:50:25.275609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:08:18.233 [2024-07-25 10:50:25.275664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:08:18.233 [2024-07-25 10:50:25.275672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:08:18.802 10:50:25 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:18.802 10:50:25 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0
00:08:18.802 10:50:25 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:08:18.802 10:50:25 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:18.802 10:50:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:08:18.802 [2024-07-25 10:50:25.749924] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:08:18.802 [2024-07-25 10:50:25.749955] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor
00:08:18.802 [2024-07-25 10:50:25.749972] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:08:18.802 [2024-07-25 10:50:25.749985] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:08:18.802 [2024-07-25 10:50:25.749995] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:08:18.802 10:50:25 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:18.802 10:50:25 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:08:18.802 10:50:25 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:18.802 10:50:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:08:19.061 [2024-07-25 10:50:26.109735] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:08:19.061 10:50:26 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:19.061 10:50:26 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:08:19.061 10:50:26 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:19.062 10:50:26 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:19.062 10:50:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:08:19.062 ************************************
00:08:19.062 START TEST scheduler_create_thread
00:08:19.062 ************************************
00:08:19.062 10:50:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread
00:08:19.062 10:50:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:08:19.062 10:50:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:19.062 10:50:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:19.062 2
00:08:19.062 10:50:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:19.062 10:50:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:08:19.062 10:50:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:19.062 10:50:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:19.062 3
00:08:19.062 10:50:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:19.062 10:50:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
10:50:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:19.321 10:50:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:19.321 4
00:08:19.321 10:50:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:19.321 10:50:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:08:19.321 10:50:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:19.321 10:50:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:19.321 5
00:08:19.321 10:50:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:19.321 10:50:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:08:19.321 10:50:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:19.321 10:50:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:19.321 6
00:08:19.321 10:50:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:19.321 10:50:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:08:19.321 10:50:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:19.321 10:50:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:19.321 7
00:08:19.321 10:50:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:19.321 10:50:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:08:19.321 10:50:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:19.321 10:50:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:19.321 8
00:08:19.321 10:50:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:19.321 10:50:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:08:19.321 10:50:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:19.321 10:50:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:19.321 9
00:08:19.321 10:50:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:19.321 10:50:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:08:19.321 10:50:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:19.321 10:50:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:19.321 10
00:08:19.321 10:50:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:19.321 10:50:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:08:19.321 10:50:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:19.321 10:50:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:19.321 10:50:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:19.321 10:50:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:08:19.321 10:50:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:08:19.321 10:50:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:19.321 10:50:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:19.889 10:50:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:19.889 10:50:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
10:50:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
10:50:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:21.266 10:50:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:21.266 10:50:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:08:21.266 10:50:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:08:21.266 10:50:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:21.266 10:50:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:22.203 10:50:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:22.203
00:08:22.203 real 0m3.107s
00:08:22.203 user 0m0.023s
00:08:22.203 sys 0m0.009s
00:08:22.203 10:50:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:22.203 10:50:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:22.203 ************************************
00:08:22.203 END TEST scheduler_create_thread
00:08:22.203 ************************************
00:08:22.203 10:50:29 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:08:22.203 10:50:29 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3484688
00:08:22.203 10:50:29 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 3484688 ']'
00:08:22.203 10:50:29 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 3484688
00:08:22.203 10:50:29 event.event_scheduler -- common/autotest_common.sh@955 -- # uname
00:08:22.203 10:50:29 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:22.203 10:50:29 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3484688
00:08:22.462 10:50:29 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:08:22.462 10:50:29 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:08:22.462 10:50:29 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3484688'
killing process with pid 3484688
00:08:22.462 10:50:29 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 3484688
00:08:22.462 10:50:29 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 3484688
00:08:22.721 [2024-07-25 10:50:29.639712] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:08:24.138
00:08:24.138 real 0m6.283s
00:08:24.138 user 0m12.263s
00:08:24.138 sys 0m0.664s
00:08:24.138 10:50:30 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:24.138 10:50:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:08:24.138 ************************************
00:08:24.138 END TEST event_scheduler
00:08:24.138 ************************************
00:08:24.138 10:50:30 event -- event/event.sh@51 -- # modprobe -n nbd
00:08:24.138 10:50:30 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:08:24.138 10:50:30 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:24.138 10:50:30 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:24.138 10:50:30 event -- common/autotest_common.sh@10 -- # set +x
00:08:24.138 ************************************
00:08:24.138 START TEST app_repeat
00:08:24.138 ************************************
00:08:24.138 10:50:31 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test
10:50:31 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
10:50:31 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
10:50:31 event.app_repeat -- event/event.sh@13 -- # local nbd_list
10:50:31 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
10:50:31 event.app_repeat -- event/event.sh@14 -- # local bdev_list
10:50:31 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
10:50:31 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
10:50:31 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3485836
10:50:31 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
10:50:31 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
10:50:31 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3485836'
Process app_repeat pid: 3485836
10:50:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
10:50:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
spdk_app_start Round 0
10:50:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3485836 /var/tmp/spdk-nbd.sock
10:50:31 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3485836 ']'
10:50:31 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
10:50:31 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
10:50:31 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
10:50:31 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
10:50:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:08:24.139 [2024-07-25 10:50:31.103704] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:08:24.139 [2024-07-25 10:50:31.103818] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3485836 ]
00:08:24.403 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:24.403 EAL: Requested device 0000:3d:01.0 cannot be used
00:08:24.403 [2024-07-25 10:50:31.330731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:24.662 [2024-07-25 10:50:31.620627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:24.662 [2024-07-25 10:50:31.620635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:08:25.230 10:50:32 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:25.230 10:50:32 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:08:25.230 10:50:32 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:08:25.489 Malloc0
00:08:25.489 10:50:32 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:08:25.748 Malloc1
00:08:25.748 10:50:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:08:25.748 10:50:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
10:50:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:25.748 10:50:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:25.748 10:50:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:25.748 10:50:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:25.748 10:50:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:25.748 10:50:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:25.748 10:50:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:25.748 10:50:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:25.748 10:50:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:25.748 10:50:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:25.748 10:50:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:25.748 10:50:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:25.748 10:50:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:25.748 10:50:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:26.007 /dev/nbd0 00:08:26.007 10:50:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:26.007 10:50:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:26.007 10:50:32 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:26.007 10:50:32 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:26.007 10:50:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:26.007 10:50:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:26.007 
10:50:32 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:26.007 10:50:32 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:26.007 10:50:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:26.007 10:50:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:26.007 10:50:32 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:26.007 1+0 records in 00:08:26.007 1+0 records out 00:08:26.007 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000169559 s, 24.2 MB/s 00:08:26.007 10:50:32 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:26.007 10:50:32 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:26.007 10:50:32 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:26.007 10:50:32 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:26.007 10:50:32 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:26.007 10:50:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:26.007 10:50:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:26.007 10:50:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:26.267 /dev/nbd1 00:08:26.267 10:50:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:26.267 10:50:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:26.267 10:50:33 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:26.267 10:50:33 event.app_repeat -- common/autotest_common.sh@869 -- # local i 
00:08:26.267 10:50:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:26.267 10:50:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:26.267 10:50:33 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:26.267 10:50:33 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:26.267 10:50:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:26.267 10:50:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:26.267 10:50:33 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:26.267 1+0 records in 00:08:26.267 1+0 records out 00:08:26.267 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252441 s, 16.2 MB/s 00:08:26.267 10:50:33 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:26.267 10:50:33 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:26.267 10:50:33 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:26.267 10:50:33 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:26.267 10:50:33 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:26.267 10:50:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:26.267 10:50:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:26.267 10:50:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:26.267 10:50:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:26.267 10:50:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:08:26.526 10:50:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:26.526 { 00:08:26.526 "nbd_device": "/dev/nbd0", 00:08:26.526 "bdev_name": "Malloc0" 00:08:26.526 }, 00:08:26.526 { 00:08:26.526 "nbd_device": "/dev/nbd1", 00:08:26.526 "bdev_name": "Malloc1" 00:08:26.526 } 00:08:26.526 ]' 00:08:26.526 10:50:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:26.526 { 00:08:26.526 "nbd_device": "/dev/nbd0", 00:08:26.526 "bdev_name": "Malloc0" 00:08:26.526 }, 00:08:26.526 { 00:08:26.526 "nbd_device": "/dev/nbd1", 00:08:26.526 "bdev_name": "Malloc1" 00:08:26.526 } 00:08:26.526 ]' 00:08:26.526 10:50:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:26.526 10:50:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:26.526 /dev/nbd1' 00:08:26.526 10:50:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:26.526 /dev/nbd1' 00:08:26.526 10:50:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:26.526 10:50:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:26.526 10:50:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:26.526 10:50:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:26.526 10:50:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:26.526 10:50:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:26.526 10:50:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:26.526 10:50:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:26.526 10:50:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:26.526 10:50:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:08:26.526 10:50:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 
00:08:26.526 10:50:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:26.526 256+0 records in 00:08:26.526 256+0 records out 00:08:26.526 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105346 s, 99.5 MB/s 00:08:26.526 10:50:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:26.526 10:50:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:26.526 256+0 records in 00:08:26.526 256+0 records out 00:08:26.526 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204038 s, 51.4 MB/s 00:08:26.526 10:50:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:26.526 10:50:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:26.526 256+0 records in 00:08:26.526 256+0 records out 00:08:26.526 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242789 s, 43.2 MB/s 00:08:26.526 10:50:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:26.526 10:50:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:26.526 10:50:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:26.526 10:50:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:26.526 10:50:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:08:26.526 10:50:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:26.526 10:50:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:26.526 10:50:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 
00:08:26.526 10:50:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:26.526 10:50:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:26.526 10:50:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:26.526 10:50:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:08:26.526 10:50:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:26.526 10:50:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:26.526 10:50:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:26.526 10:50:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:26.526 10:50:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:26.526 10:50:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:26.526 10:50:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:26.785 10:50:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:26.785 10:50:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:26.785 10:50:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:26.785 10:50:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:26.785 10:50:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:26.785 10:50:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:26.785 10:50:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:26.785 10:50:33 event.app_repeat -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:26.785 10:50:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:26.785 10:50:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:27.044 10:50:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:27.044 10:50:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:27.044 10:50:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:27.044 10:50:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:27.044 10:50:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:27.044 10:50:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:27.044 10:50:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:27.044 10:50:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:27.044 10:50:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:27.044 10:50:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:27.044 10:50:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:27.302 10:50:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:27.302 10:50:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:27.302 10:50:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:27.302 10:50:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:27.302 10:50:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:27.302 10:50:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:27.302 10:50:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:27.303 10:50:34 
event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:27.303 10:50:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:27.303 10:50:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:27.303 10:50:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:27.303 10:50:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:27.303 10:50:34 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:27.871 10:50:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:29.773 [2024-07-25 10:50:36.863591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:30.032 [2024-07-25 10:50:37.134568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.032 [2024-07-25 10:50:37.134570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.600 [2024-07-25 10:50:37.430820] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:30.600 [2024-07-25 10:50:37.430882] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:08:30.858 10:50:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:30.858 10:50:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:30.858 spdk_app_start Round 1 00:08:30.858 10:50:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3485836 /var/tmp/spdk-nbd.sock 00:08:30.858 10:50:37 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3485836 ']' 00:08:30.858 10:50:37 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:30.858 10:50:37 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:30.858 10:50:37 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:30.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:30.858 10:50:37 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:30.858 10:50:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:31.116 10:50:38 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:31.116 10:50:38 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:31.116 10:50:38 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:31.374 Malloc0 00:08:31.374 10:50:38 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:31.942 Malloc1 00:08:31.942 10:50:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:31.942 10:50:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.942 10:50:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # 
bdev_list=('Malloc0' 'Malloc1') 00:08:31.942 10:50:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:31.942 10:50:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:31.942 10:50:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:31.942 10:50:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:31.942 10:50:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.942 10:50:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:31.942 10:50:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:31.942 10:50:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:31.942 10:50:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:31.942 10:50:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:31.942 10:50:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:31.942 10:50:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:31.942 10:50:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:31.942 /dev/nbd0 00:08:31.942 10:50:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:31.942 10:50:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:31.942 10:50:39 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:31.942 10:50:39 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:31.942 10:50:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:31.942 10:50:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:31.942 10:50:39 event.app_repeat -- common/autotest_common.sh@872 -- # 
grep -q -w nbd0 /proc/partitions 00:08:31.942 10:50:39 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:31.942 10:50:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:31.942 10:50:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:31.942 10:50:39 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:31.942 1+0 records in 00:08:31.942 1+0 records out 00:08:31.942 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026482 s, 15.5 MB/s 00:08:31.942 10:50:39 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:31.942 10:50:39 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:31.942 10:50:39 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:31.942 10:50:39 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:31.942 10:50:39 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:31.942 10:50:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:31.942 10:50:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:31.942 10:50:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:32.201 /dev/nbd1 00:08:32.201 10:50:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:32.201 10:50:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:32.201 10:50:39 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:32.201 10:50:39 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:32.201 10:50:39 event.app_repeat -- common/autotest_common.sh@871 -- # 
(( i = 1 )) 00:08:32.201 10:50:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:32.201 10:50:39 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:32.201 10:50:39 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:32.201 10:50:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:32.201 10:50:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:32.201 10:50:39 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:32.201 1+0 records in 00:08:32.201 1+0 records out 00:08:32.201 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268157 s, 15.3 MB/s 00:08:32.201 10:50:39 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:32.201 10:50:39 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:32.201 10:50:39 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:32.201 10:50:39 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:32.201 10:50:39 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:32.201 10:50:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:32.201 10:50:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:32.201 10:50:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:32.201 10:50:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:32.201 10:50:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:32.460 10:50:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:08:32.460 { 00:08:32.460 "nbd_device": "/dev/nbd0", 00:08:32.460 "bdev_name": "Malloc0" 00:08:32.460 }, 00:08:32.460 { 00:08:32.460 "nbd_device": "/dev/nbd1", 00:08:32.460 "bdev_name": "Malloc1" 00:08:32.460 } 00:08:32.460 ]' 00:08:32.460 10:50:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:32.460 { 00:08:32.460 "nbd_device": "/dev/nbd0", 00:08:32.460 "bdev_name": "Malloc0" 00:08:32.460 }, 00:08:32.460 { 00:08:32.460 "nbd_device": "/dev/nbd1", 00:08:32.460 "bdev_name": "Malloc1" 00:08:32.460 } 00:08:32.460 ]' 00:08:32.460 10:50:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:32.460 10:50:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:32.460 /dev/nbd1' 00:08:32.460 10:50:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:32.460 /dev/nbd1' 00:08:32.460 10:50:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:32.460 10:50:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:32.460 10:50:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:32.460 10:50:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:32.460 10:50:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:32.460 10:50:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:32.460 10:50:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:32.460 10:50:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:32.460 10:50:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:32.460 10:50:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:08:32.460 10:50:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:32.460 10:50:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd 
if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:32.460 256+0 records in 00:08:32.460 256+0 records out 00:08:32.460 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106384 s, 98.6 MB/s 00:08:32.460 10:50:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:32.460 10:50:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:32.460 256+0 records in 00:08:32.460 256+0 records out 00:08:32.460 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0194689 s, 53.9 MB/s 00:08:32.719 10:50:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:32.719 10:50:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:32.719 256+0 records in 00:08:32.719 256+0 records out 00:08:32.719 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252374 s, 41.5 MB/s 00:08:32.719 10:50:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:32.719 10:50:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:32.719 10:50:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:32.719 10:50:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:32.719 10:50:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:08:32.719 10:50:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:32.719 10:50:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:32.719 10:50:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:32.719 10:50:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b 
-n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:32.719 10:50:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:32.719 10:50:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:32.719 10:50:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:08:32.719 10:50:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:32.719 10:50:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:32.719 10:50:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:32.719 10:50:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:32.719 10:50:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:32.719 10:50:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:32.719 10:50:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:32.719 10:50:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:32.719 10:50:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:32.719 10:50:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:32.719 10:50:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:32.719 10:50:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:32.719 10:50:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:32.719 10:50:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:32.719 10:50:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:32.719 10:50:39 event.app_repeat -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:32.719 10:50:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:32.978 10:50:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:32.978 10:50:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:32.978 10:50:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:32.978 10:50:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:32.978 10:50:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:32.978 10:50:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:32.978 10:50:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:32.978 10:50:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:32.978 10:50:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:32.978 10:50:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:32.978 10:50:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:33.237 10:50:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:33.237 10:50:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:33.237 10:50:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:33.496 10:50:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:33.496 10:50:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:33.496 10:50:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:33.496 10:50:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:33.496 10:50:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:33.496 10:50:40 
event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:33.496 10:50:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:33.496 10:50:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:33.496 10:50:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:33.497 10:50:40 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:33.756 10:50:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:36.292 [2024-07-25 10:50:42.795065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:36.292 [2024-07-25 10:50:43.066134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.292 [2024-07-25 10:50:43.066149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.292 [2024-07-25 10:50:43.370321] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:36.292 [2024-07-25 10:50:43.370375] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:36.859 10:50:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:36.859 10:50:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:36.859 spdk_app_start Round 2 00:08:36.859 10:50:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3485836 /var/tmp/spdk-nbd.sock 00:08:36.859 10:50:43 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3485836 ']' 00:08:36.859 10:50:43 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:36.859 10:50:43 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:36.859 10:50:43 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:08:36.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:36.859 10:50:43 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:36.859 10:50:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:37.118 10:50:44 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:37.118 10:50:44 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:37.118 10:50:44 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:37.377 Malloc0 00:08:37.377 10:50:44 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:37.636 Malloc1 00:08:37.636 10:50:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:37.636 10:50:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:37.636 10:50:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:37.636 10:50:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:37.636 10:50:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:37.636 10:50:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:37.636 10:50:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:37.636 10:50:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:37.636 10:50:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:37.636 10:50:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:37.636 10:50:44 event.app_repeat -- bdev/nbd_common.sh@11 -- 
# nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:37.636 10:50:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:37.636 10:50:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:37.636 10:50:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:37.636 10:50:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:37.636 10:50:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:37.895 /dev/nbd0 00:08:37.895 10:50:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:37.895 10:50:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:37.895 10:50:44 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:37.895 10:50:44 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:37.895 10:50:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:37.895 10:50:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:37.895 10:50:44 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:37.895 10:50:44 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:37.895 10:50:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:37.895 10:50:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:37.895 10:50:44 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:37.895 1+0 records in 00:08:37.895 1+0 records out 00:08:37.895 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025802 s, 15.9 MB/s 00:08:37.895 10:50:44 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:37.895 10:50:44 event.app_repeat 
-- common/autotest_common.sh@886 -- # size=4096 00:08:37.895 10:50:44 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:37.895 10:50:44 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:37.895 10:50:44 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:37.895 10:50:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:37.895 10:50:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:37.895 10:50:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:38.153 /dev/nbd1 00:08:38.153 10:50:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:38.153 10:50:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:38.153 10:50:45 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:38.153 10:50:45 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:38.153 10:50:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:38.153 10:50:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:38.153 10:50:45 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:38.153 10:50:45 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:38.153 10:50:45 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:38.153 10:50:45 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:38.153 10:50:45 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:38.153 1+0 records in 00:08:38.153 1+0 records out 00:08:38.153 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239485 s, 17.1 MB/s 00:08:38.154 
10:50:45 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:38.154 10:50:45 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:38.154 10:50:45 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:08:38.154 10:50:45 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:38.154 10:50:45 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:38.154 10:50:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:38.154 10:50:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:38.154 10:50:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:38.154 10:50:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:38.154 10:50:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:38.412 10:50:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:38.412 { 00:08:38.412 "nbd_device": "/dev/nbd0", 00:08:38.412 "bdev_name": "Malloc0" 00:08:38.412 }, 00:08:38.412 { 00:08:38.412 "nbd_device": "/dev/nbd1", 00:08:38.412 "bdev_name": "Malloc1" 00:08:38.412 } 00:08:38.412 ]' 00:08:38.412 10:50:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:38.412 { 00:08:38.412 "nbd_device": "/dev/nbd0", 00:08:38.412 "bdev_name": "Malloc0" 00:08:38.412 }, 00:08:38.412 { 00:08:38.412 "nbd_device": "/dev/nbd1", 00:08:38.412 "bdev_name": "Malloc1" 00:08:38.412 } 00:08:38.412 ]' 00:08:38.412 10:50:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:38.412 10:50:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:38.412 /dev/nbd1' 00:08:38.412 10:50:45 event.app_repeat -- 
bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:38.412 /dev/nbd1' 00:08:38.412 10:50:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:38.412 10:50:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:38.412 10:50:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:38.412 10:50:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:38.413 10:50:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:38.413 10:50:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:38.413 10:50:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:38.413 10:50:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:38.413 10:50:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:38.413 10:50:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:08:38.413 10:50:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:38.413 10:50:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:38.413 256+0 records in 00:08:38.413 256+0 records out 00:08:38.413 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109312 s, 95.9 MB/s 00:08:38.413 10:50:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:38.703 10:50:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:38.703 256+0 records in 00:08:38.703 256+0 records out 00:08:38.703 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209163 s, 50.1 MB/s 00:08:38.703 10:50:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:38.703 10:50:45 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:38.703 256+0 records in 00:08:38.703 256+0 records out 00:08:38.703 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023731 s, 44.2 MB/s 00:08:38.703 10:50:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:38.703 10:50:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:38.703 10:50:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:38.703 10:50:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:38.703 10:50:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:08:38.703 10:50:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:38.703 10:50:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:38.703 10:50:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:38.703 10:50:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:38.703 10:50:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:38.703 10:50:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:38.703 10:50:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:08:38.703 10:50:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:38.703 10:50:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:38.703 10:50:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:38.703 10:50:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:38.703 10:50:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:38.703 10:50:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:38.703 10:50:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:38.962 10:50:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:38.962 10:50:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:38.962 10:50:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:38.962 10:50:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:38.962 10:50:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:38.962 10:50:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:38.962 10:50:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:38.962 10:50:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:38.962 10:50:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:38.962 10:50:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:39.220 10:50:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:39.220 10:50:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:39.220 10:50:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:39.220 10:50:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:39.220 10:50:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:39.220 10:50:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:39.220 10:50:46 
event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:39.220 10:50:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:39.220 10:50:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:39.220 10:50:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:39.220 10:50:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:39.220 10:50:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:39.221 10:50:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:39.221 10:50:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:39.221 10:50:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:39.221 10:50:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:39.221 10:50:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:39.221 10:50:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:39.221 10:50:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:39.221 10:50:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:39.221 10:50:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:39.221 10:50:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:39.221 10:50:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:39.221 10:50:46 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:39.788 10:50:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:41.694 [2024-07-25 10:50:48.662369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:41.952 [2024-07-25 10:50:48.933031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.952 [2024-07-25 
10:50:48.933034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.211 [2024-07-25 10:50:49.236329] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:42.211 [2024-07-25 10:50:49.236385] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:42.777 10:50:49 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3485836 /var/tmp/spdk-nbd.sock 00:08:42.777 10:50:49 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3485836 ']' 00:08:42.777 10:50:49 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:42.777 10:50:49 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:42.777 10:50:49 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:42.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:08:42.777 10:50:49 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:42.777 10:50:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:43.036 10:50:50 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:43.036 10:50:50 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:43.036 10:50:50 event.app_repeat -- event/event.sh@39 -- # killprocess 3485836 00:08:43.036 10:50:50 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 3485836 ']' 00:08:43.036 10:50:50 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 3485836 00:08:43.036 10:50:50 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:08:43.036 10:50:50 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:43.036 10:50:50 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3485836 00:08:43.036 10:50:50 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:43.036 10:50:50 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:43.036 10:50:50 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3485836' 00:08:43.036 killing process with pid 3485836 00:08:43.036 10:50:50 event.app_repeat -- common/autotest_common.sh@969 -- # kill 3485836 00:08:43.036 10:50:50 event.app_repeat -- common/autotest_common.sh@974 -- # wait 3485836 00:08:44.940 spdk_app_start is called in Round 0. 00:08:44.940 Shutdown signal received, stop current app iteration 00:08:44.940 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:08:44.940 spdk_app_start is called in Round 1. 00:08:44.940 Shutdown signal received, stop current app iteration 00:08:44.940 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:08:44.940 spdk_app_start is called in Round 2. 
00:08:44.940 Shutdown signal received, stop current app iteration 00:08:44.940 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:08:44.940 spdk_app_start is called in Round 3. 00:08:44.940 Shutdown signal received, stop current app iteration 00:08:44.940 10:50:51 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:44.940 10:50:51 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:44.940 00:08:44.940 real 0m20.655s 00:08:44.940 user 0m41.138s 00:08:44.940 sys 0m3.701s 00:08:44.940 10:50:51 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:44.940 10:50:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:44.940 ************************************ 00:08:44.940 END TEST app_repeat 00:08:44.940 ************************************ 00:08:44.940 10:50:51 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:44.940 00:08:44.940 real 0m33.794s 00:08:44.940 user 1m2.129s 00:08:44.940 sys 0m5.508s 00:08:44.940 10:50:51 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:44.940 10:50:51 event -- common/autotest_common.sh@10 -- # set +x 00:08:44.941 ************************************ 00:08:44.941 END TEST event 00:08:44.941 ************************************ 00:08:44.941 10:50:51 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/thread.sh 00:08:44.941 10:50:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:44.941 10:50:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:44.941 10:50:51 -- common/autotest_common.sh@10 -- # set +x 00:08:44.941 ************************************ 00:08:44.941 START TEST thread 00:08:44.941 ************************************ 00:08:44.941 10:50:51 thread -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/thread.sh 00:08:44.941 * Looking for test storage... 
00:08:44.941 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread 00:08:44.941 10:50:51 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:44.941 10:50:51 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:44.941 10:50:51 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:44.941 10:50:51 thread -- common/autotest_common.sh@10 -- # set +x 00:08:44.941 ************************************ 00:08:44.941 START TEST thread_poller_perf 00:08:44.941 ************************************ 00:08:44.941 10:50:51 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:44.941 [2024-07-25 10:50:51.999200] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:44.941 [2024-07-25 10:50:51.999303] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3489575 ] 00:08:45.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.199 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:45.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.199 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:45.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.199 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:45.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.199 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:45.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.199 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:45.199 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.199 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:45.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.199 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:45.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.199 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:45.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.199 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:45.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.199 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:45.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.199 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:45.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.199 EAL: Requested device 0000:3d:02.3 cannot be used 00:08:45.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.199 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:45.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.199 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:45.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.199 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:45.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.199 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:45.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.199 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:45.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.199 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:45.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.199 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:45.199 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.199 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:45.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.199 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:45.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.199 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:45.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.199 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:45.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.199 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:45.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.199 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:45.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.199 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:45.199 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.200 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:45.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.200 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:45.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.200 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:45.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.200 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:45.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.200 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:45.200 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:45.200 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:45.200 [2024-07-25 10:50:52.224736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.458 [2024-07-25 10:50:52.509279] reactor.c: 941:reactor_run: 
*NOTICE*: Reactor started on core 0 00:08:45.458 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:47.360 ====================================== 00:08:47.360 busy:2512024790 (cyc) 00:08:47.360 total_run_count: 280000 00:08:47.360 tsc_hz: 2500000000 (cyc) 00:08:47.360 ====================================== 00:08:47.360 poller_cost: 8971 (cyc), 3588 (nsec) 00:08:47.360 00:08:47.360 real 0m2.122s 00:08:47.360 user 0m1.884s 00:08:47.360 sys 0m0.228s 00:08:47.360 10:50:54 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:47.360 10:50:54 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:47.360 ************************************ 00:08:47.361 END TEST thread_poller_perf 00:08:47.361 ************************************ 00:08:47.361 10:50:54 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:47.361 10:50:54 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:47.361 10:50:54 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:47.361 10:50:54 thread -- common/autotest_common.sh@10 -- # set +x 00:08:47.361 ************************************ 00:08:47.361 START TEST thread_poller_perf 00:08:47.361 ************************************ 00:08:47.361 10:50:54 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:47.361 [2024-07-25 10:50:54.214363] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:47.361 [2024-07-25 10:50:54.214469] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3490052 ]
00:08:47.361 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:47.361 EAL: Requested device 0000:3d:01.0 cannot be used
[identical "Reached maximum number of QAT devices" / "cannot be used" message pairs repeated for devices 0000:3d:01.1 through 0000:3f:02.7]
00:08:47.361 [2024-07-25 10:50:54.438974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:47.620 [2024-07-25 10:50:54.723555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:47.620 Running 1000 pollers for 1 seconds with 0 microseconds period.
00:08:49.524 ======================================
00:08:49.524 busy:2504078176 (cyc)
00:08:49.524 total_run_count: 3658000
00:08:49.524 tsc_hz: 2500000000 (cyc)
00:08:49.524 ======================================
00:08:49.524 poller_cost: 684 (cyc), 273 (nsec)
00:08:49.524
00:08:49.524 real 0m2.125s
00:08:49.524 user 0m1.873s
00:08:49.524 sys 0m0.242s
00:08:49.524 10:50:56 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:49.524 10:50:56 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:08:49.524 ************************************
00:08:49.524 END TEST thread_poller_perf
00:08:49.524 ************************************
00:08:49.524 10:50:56 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:08:49.524
00:08:49.524 real 0m4.514s
00:08:49.524 user 0m3.861s
00:08:49.524 sys 0m0.656s
00:08:49.524 10:50:56 thread -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:49.524 10:50:56 thread -- common/autotest_common.sh@10 -- # set +x
00:08:49.524 ************************************
00:08:49.524 END TEST thread
00:08:49.524 ************************************
00:08:49.524 10:50:56 -- spdk/autotest.sh@184 -- # [[ 1 -eq 1 ]]
00:08:49.524 10:50:56 -- spdk/autotest.sh@185 -- # run_test accel /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel.sh
00:08:49.524 10:50:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:49.524 10:50:56 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:49.524 10:50:56 -- common/autotest_common.sh@10 -- # set +x
00:08:49.524 ************************************
00:08:49.524 START TEST accel
00:08:49.524 ************************************
00:08:49.524 10:50:56 accel -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel.sh
00:08:49.524 * Looking for test storage...
00:08:49.524 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel
00:08:49.524 10:50:56 accel -- accel/accel.sh@81 -- # declare -A expected_opcs
00:08:49.524 10:50:56 accel -- accel/accel.sh@82 -- # get_expected_opcs
00:08:49.524 10:50:56 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:08:49.524 10:50:56 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3490442
00:08:49.524 10:50:56 accel -- accel/accel.sh@61 -- # build_accel_config
00:08:49.524 10:50:56 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63
00:08:49.524 10:50:56 accel -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:49.524 10:50:56 accel -- accel/accel.sh@63 -- # waitforlisten 3490442
00:08:49.524 10:50:56 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:49.524 10:50:56 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:49.524 10:50:56 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:49.524 10:50:56 accel -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:49.524 10:50:56 accel -- accel/accel.sh@40 -- # local IFS=,
00:08:49.524 10:50:56 accel -- common/autotest_common.sh@831 -- # '[' -z 3490442 ']'
00:08:49.524 10:50:56 accel -- accel/accel.sh@41 -- # jq -r .
00:08:49.524 10:50:56 accel -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:49.524 10:50:56 accel -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:49.524 10:50:56 accel -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:49.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:49.524 10:50:56 accel -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:49.524 10:50:56 accel -- common/autotest_common.sh@10 -- # set +x
00:08:49.524 [2024-07-25 10:50:56.633115] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:08:49.525 [2024-07-25 10:50:56.633245] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3490442 ]
00:08:49.784 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:49.784 EAL: Requested device 0000:3d:01.0 cannot be used
[identical "Reached maximum number of QAT devices" / "cannot be used" message pairs repeated for devices 0000:3d:01.1 through 0000:3f:02.7]
00:08:49.785 [2024-07-25 10:50:56.858226] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:50.044 [2024-07-25 10:50:57.143784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:51.423 10:50:58 accel -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:51.424 10:50:58 accel -- common/autotest_common.sh@864 -- # return 0
00:08:51.424 10:50:58 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]]
00:08:51.424 10:50:58 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]]
00:08:51.424 10:50:58 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]]
00:08:51.424 10:50:58 accel -- accel/accel.sh@68 -- # [[ -n '' ]]
00:08:51.424 10:50:58 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]"))
00:08:51.424 10:50:58 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments
00:08:51.424 10:50:58 accel -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:51.424 10:50:58 accel -- common/autotest_common.sh@10 -- # set +x
00:08:51.424 10:50:58 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
00:08:51.424 10:50:58 accel -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:51.424 10:50:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:08:51.424 10:50:58 accel -- accel/accel.sh@72 -- # IFS==
00:08:51.424 10:50:58 accel -- accel/accel.sh@72 -- # read -r opc module
00:08:51.424 10:50:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
[the same four-line for/IFS==/read/expected_opcs trace repeats for each remaining opcode, all assigned software]
00:08:51.424 10:50:58 accel -- accel/accel.sh@75 -- # killprocess 3490442
00:08:51.424 10:50:58 accel -- common/autotest_common.sh@950 -- # '[' -z 3490442 ']'
00:08:51.424 10:50:58 accel -- common/autotest_common.sh@954 -- # kill -0 3490442
00:08:51.424 10:50:58 accel -- common/autotest_common.sh@955 -- # uname
00:08:51.424 10:50:58 accel -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:51.424 10:50:58 accel -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3490442
00:08:51.424 10:50:58 accel -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:08:51.424 10:50:58 accel -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:08:51.424 10:50:58 accel -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3490442'
00:08:51.424 killing process with pid 3490442
00:08:51.424 10:50:58 accel -- common/autotest_common.sh@969 -- # kill 3490442
00:08:51.424 10:50:58 accel -- common/autotest_common.sh@974 -- # wait 3490442
00:08:54.714 10:51:01 accel -- accel/accel.sh@76 -- # trap - ERR
00:08:54.714 10:51:01 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h
00:08:54.714 10:51:01 accel -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:08:54.714 10:51:01 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:54.714 10:51:01 accel -- common/autotest_common.sh@10 -- # set +x
00:08:54.714 10:51:01 accel.accel_help -- common/autotest_common.sh@1125 -- # accel_perf -h
00:08:54.714 10:51:01 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h
00:08:54.714 10:51:01 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config
00:08:54.714 10:51:01 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:54.714 10:51:01 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:54.714 10:51:01 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:54.714 10:51:01 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:54.714 10:51:01 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:54.714 10:51:01 accel.accel_help -- accel/accel.sh@40 -- # local IFS=,
00:08:54.714 10:51:01 accel.accel_help -- accel/accel.sh@41 -- # jq -r .
00:08:54.973 10:51:01 accel.accel_help -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:54.973 10:51:01 accel.accel_help -- common/autotest_common.sh@10 -- # set +x
00:08:54.973 10:51:01 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress
00:08:54.973 10:51:01 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:08:54.973 10:51:01 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:54.973 10:51:01 accel -- common/autotest_common.sh@10 -- # set +x
00:08:54.973 ************************************
00:08:54.973 START TEST accel_missing_filename
00:08:54.973 ************************************
00:08:54.973 10:51:01 accel.accel_missing_filename -- common/autotest_common.sh@1125 -- # NOT accel_perf -t 1 -w compress
00:08:54.973 10:51:01 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # local es=0
00:08:54.973 10:51:01 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress
00:08:54.973 10:51:01 accel.accel_missing_filename -- common/autotest_common.sh@638 -- # local arg=accel_perf
00:08:54.973 10:51:01 accel.accel_missing_filename -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:54.973 10:51:01 accel.accel_missing_filename -- common/autotest_common.sh@642 -- # type -t accel_perf
00:08:54.973 10:51:01 accel.accel_missing_filename -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:54.973 10:51:01 accel.accel_missing_filename -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress
00:08:54.973 10:51:01 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress
00:08:54.973 10:51:01 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config
00:08:54.973 10:51:01 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:54.973 10:51:01 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:54.973 10:51:01 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:54.973 10:51:01 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:54.973 10:51:01 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:54.973 10:51:01 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=,
00:08:54.973 10:51:01 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r .
00:08:54.973 [2024-07-25 10:51:01.991431] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:08:54.973 [2024-07-25 10:51:01.991533] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3491605 ]
00:08:55.232 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:08:55.232 EAL: Requested device 0000:3d:01.0 cannot be used
[identical "Reached maximum number of QAT devices" / "cannot be used" message pairs repeated for devices 0000:3d:01.1 through 0000:3f:02.7]
00:08:55.233 [2024-07-25 10:51:02.215540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:55.492 [2024-07-25 10:51:02.504472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:55.776 [2024-07-25 10:51:02.842786] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:08:56.727 [2024-07-25 10:51:03.570874] accel_perf.c:1463:main: *ERROR*: ERROR starting application
00:08:56.986 A filename is required.
00:08:56.986 10:51:04 accel.accel_missing_filename -- common/autotest_common.sh@653 -- # es=234
00:08:56.986 10:51:04 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:08:56.986 10:51:04 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # es=106
00:08:56.986 10:51:04 accel.accel_missing_filename -- common/autotest_common.sh@663 -- # case "$es" in
00:08:56.986 10:51:04 accel.accel_missing_filename -- common/autotest_common.sh@670 -- # es=1
00:08:56.986 10:51:04 accel.accel_missing_filename -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:08:56.986
00:08:56.986 real 0m2.167s
00:08:56.986 user 0m1.884s
00:08:56.986 sys 0m0.283s
00:08:56.986 10:51:04 accel.accel_missing_filename -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:56.986 10:51:04 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x
00:08:56.986 ************************************
00:08:56.986 END TEST accel_missing_filename
00:08:56.986 ************************************
00:08:57.245 10:51:04 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y
00:08:57.245 10:51:04 accel -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']'
00:08:57.245 10:51:04 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:57.245 10:51:04 accel -- common/autotest_common.sh@10 -- # set +x
00:08:57.245 ************************************
00:08:57.245 START TEST accel_compress_verify
00:08:57.245 ************************************
00:08:57.246 10:51:04 accel.accel_compress_verify -- common/autotest_common.sh@1125 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y
00:08:57.246 10:51:04 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # local es=0
00:08:57.246 10:51:04 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y
00:08:57.246 10:51:04 accel.accel_compress_verify -- common/autotest_common.sh@638 -- # local arg=accel_perf
00:08:57.246 10:51:04 accel.accel_compress_verify -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:57.246 10:51:04 accel.accel_compress_verify -- common/autotest_common.sh@642 -- # type -t accel_perf
00:08:57.246 10:51:04 accel.accel_compress_verify -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:57.246 10:51:04 accel.accel_compress_verify -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y
00:08:57.246 10:51:04 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y
00:08:57.246 10:51:04 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config
00:08:57.246 10:51:04 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=()
00:08:57.246 10:51:04 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:08:57.246 10:51:04 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:57.246 10:51:04 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:57.246 10:51:04 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]]
00:08:57.246 10:51:04 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=,
00:08:57.246 10:51:04 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r .
00:08:57.246 [2024-07-25 10:51:04.239739] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:08:57.246 [2024-07-25 10:51:04.239850] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3492071 ] 00:08:57.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.505 EAL: Requested device 0000:3d:01.0 cannot be used 00:08:57.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.505 EAL: Requested device 0000:3d:01.1 cannot be used 00:08:57.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.505 EAL: Requested device 0000:3d:01.2 cannot be used 00:08:57.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.505 EAL: Requested device 0000:3d:01.3 cannot be used 00:08:57.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.505 EAL: Requested device 0000:3d:01.4 cannot be used 00:08:57.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.505 EAL: Requested device 0000:3d:01.5 cannot be used 00:08:57.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.505 EAL: Requested device 0000:3d:01.6 cannot be used 00:08:57.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.505 EAL: Requested device 0000:3d:01.7 cannot be used 00:08:57.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.505 EAL: Requested device 0000:3d:02.0 cannot be used 00:08:57.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.505 EAL: Requested device 0000:3d:02.1 cannot be used 00:08:57.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.505 EAL: Requested device 0000:3d:02.2 cannot be used 00:08:57.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.505 EAL: Requested device 0000:3d:02.3 cannot be used 
00:08:57.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.505 EAL: Requested device 0000:3d:02.4 cannot be used 00:08:57.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.505 EAL: Requested device 0000:3d:02.5 cannot be used 00:08:57.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.505 EAL: Requested device 0000:3d:02.6 cannot be used 00:08:57.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.505 EAL: Requested device 0000:3d:02.7 cannot be used 00:08:57.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.505 EAL: Requested device 0000:3f:01.0 cannot be used 00:08:57.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.505 EAL: Requested device 0000:3f:01.1 cannot be used 00:08:57.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.505 EAL: Requested device 0000:3f:01.2 cannot be used 00:08:57.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.505 EAL: Requested device 0000:3f:01.3 cannot be used 00:08:57.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.505 EAL: Requested device 0000:3f:01.4 cannot be used 00:08:57.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.505 EAL: Requested device 0000:3f:01.5 cannot be used 00:08:57.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.505 EAL: Requested device 0000:3f:01.6 cannot be used 00:08:57.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.505 EAL: Requested device 0000:3f:01.7 cannot be used 00:08:57.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.505 EAL: Requested device 0000:3f:02.0 cannot be used 00:08:57.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.505 EAL: Requested device 0000:3f:02.1 cannot be used 00:08:57.505 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.505 EAL: Requested device 0000:3f:02.2 cannot be used 00:08:57.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.505 EAL: Requested device 0000:3f:02.3 cannot be used 00:08:57.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.505 EAL: Requested device 0000:3f:02.4 cannot be used 00:08:57.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.505 EAL: Requested device 0000:3f:02.5 cannot be used 00:08:57.505 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.506 EAL: Requested device 0000:3f:02.6 cannot be used 00:08:57.506 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:08:57.506 EAL: Requested device 0000:3f:02.7 cannot be used 00:08:57.506 [2024-07-25 10:51:04.467073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.765 [2024-07-25 10:51:04.756451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.023 [2024-07-25 10:51:05.099847] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:58.962 [2024-07-25 10:51:05.862971] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:08:59.531 00:08:59.531 Compression does not support the verify option, aborting. 
00:08:59.531 10:51:06 accel.accel_compress_verify -- common/autotest_common.sh@653 -- # es=161 00:08:59.531 10:51:06 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:59.531 10:51:06 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # es=33 00:08:59.531 10:51:06 accel.accel_compress_verify -- common/autotest_common.sh@663 -- # case "$es" in 00:08:59.531 10:51:06 accel.accel_compress_verify -- common/autotest_common.sh@670 -- # es=1 00:08:59.531 10:51:06 accel.accel_compress_verify -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:59.531 00:08:59.531 real 0m2.239s 00:08:59.531 user 0m1.941s 00:08:59.531 sys 0m0.316s 00:08:59.531 10:51:06 accel.accel_compress_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:59.531 10:51:06 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:08:59.531 ************************************ 00:08:59.531 END TEST accel_compress_verify 00:08:59.531 ************************************ 00:08:59.531 10:51:06 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:08:59.531 10:51:06 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:08:59.531 10:51:06 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:59.531 10:51:06 accel -- common/autotest_common.sh@10 -- # set +x 00:08:59.531 ************************************ 00:08:59.531 START TEST accel_wrong_workload 00:08:59.531 ************************************ 00:08:59.531 10:51:06 accel.accel_wrong_workload -- common/autotest_common.sh@1125 -- # NOT accel_perf -t 1 -w foobar 00:08:59.531 10:51:06 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # local es=0 00:08:59.531 10:51:06 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:08:59.531 10:51:06 accel.accel_wrong_workload -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:08:59.531 10:51:06 
accel.accel_wrong_workload -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:59.531 10:51:06 accel.accel_wrong_workload -- common/autotest_common.sh@642 -- # type -t accel_perf 00:08:59.531 10:51:06 accel.accel_wrong_workload -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:59.531 10:51:06 accel.accel_wrong_workload -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:08:59.531 10:51:06 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:08:59.531 10:51:06 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:08:59.531 10:51:06 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:59.531 10:51:06 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:59.531 10:51:06 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:59.531 10:51:06 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:59.531 10:51:06 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:59.531 10:51:06 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:08:59.531 10:51:06 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:08:59.531 Unsupported workload type: foobar 00:08:59.531 [2024-07-25 10:51:06.552688] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:08:59.531 accel_perf options: 00:08:59.531 [-h help message] 00:08:59.531 [-q queue depth per core] 00:08:59.531 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:59.531 [-T number of threads per core 00:08:59.531 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:08:59.531 [-t time in seconds] 00:08:59.531 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:59.531 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:08:59.531 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:59.531 [-l for compress/decompress workloads, name of uncompressed input file 00:08:59.531 [-S for crc32c workload, use this seed value (default 0) 00:08:59.531 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:59.531 [-f for fill workload, use this BYTE value (default 255) 00:08:59.531 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:59.531 [-y verify result if this switch is on] 00:08:59.531 [-a tasks to allocate per core (default: same value as -q)] 00:08:59.531 Can be used to spread operations across a wider range of memory. 
00:08:59.531 10:51:06 accel.accel_wrong_workload -- common/autotest_common.sh@653 -- # es=1 00:08:59.531 10:51:06 accel.accel_wrong_workload -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:59.531 10:51:06 accel.accel_wrong_workload -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:59.531 10:51:06 accel.accel_wrong_workload -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:59.531 00:08:59.531 real 0m0.090s 00:08:59.531 user 0m0.074s 00:08:59.531 sys 0m0.052s 00:08:59.531 10:51:06 accel.accel_wrong_workload -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:59.531 10:51:06 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:08:59.531 ************************************ 00:08:59.531 END TEST accel_wrong_workload 00:08:59.531 ************************************ 00:08:59.531 10:51:06 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:08:59.531 10:51:06 accel -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:08:59.531 10:51:06 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:59.531 10:51:06 accel -- common/autotest_common.sh@10 -- # set +x 00:08:59.791 ************************************ 00:08:59.791 START TEST accel_negative_buffers 00:08:59.791 ************************************ 00:08:59.791 10:51:06 accel.accel_negative_buffers -- common/autotest_common.sh@1125 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:08:59.791 10:51:06 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # local es=0 00:08:59.791 10:51:06 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:08:59.791 10:51:06 accel.accel_negative_buffers -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:08:59.791 10:51:06 accel.accel_negative_buffers -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:59.791 10:51:06 accel.accel_negative_buffers -- 
common/autotest_common.sh@642 -- # type -t accel_perf 00:08:59.791 10:51:06 accel.accel_negative_buffers -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:59.791 10:51:06 accel.accel_negative_buffers -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:08:59.791 10:51:06 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:08:59.791 10:51:06 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:08:59.791 10:51:06 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:59.791 10:51:06 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:59.791 10:51:06 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:59.791 10:51:06 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:59.791 10:51:06 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:59.791 10:51:06 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:08:59.791 10:51:06 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:08:59.791 -x option must be non-negative. 00:08:59.791 [2024-07-25 10:51:06.724855] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:08:59.791 accel_perf options: 00:08:59.791 [-h help message] 00:08:59.791 [-q queue depth per core] 00:08:59.791 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:59.791 [-T number of threads per core 00:08:59.791 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:08:59.791 [-t time in seconds] 00:08:59.791 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:59.791 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:08:59.791 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:59.791 [-l for compress/decompress workloads, name of uncompressed input file 00:08:59.791 [-S for crc32c workload, use this seed value (default 0) 00:08:59.791 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:59.791 [-f for fill workload, use this BYTE value (default 255) 00:08:59.791 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:59.791 [-y verify result if this switch is on] 00:08:59.791 [-a tasks to allocate per core (default: same value as -q)] 00:08:59.791 Can be used to spread operations across a wider range of memory. 
00:08:59.791 10:51:06 accel.accel_negative_buffers -- common/autotest_common.sh@653 -- # es=1 00:08:59.791 10:51:06 accel.accel_negative_buffers -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:59.791 10:51:06 accel.accel_negative_buffers -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:59.791 10:51:06 accel.accel_negative_buffers -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:59.791 00:08:59.791 real 0m0.082s 00:08:59.791 user 0m0.075s 00:08:59.791 sys 0m0.048s 00:08:59.791 10:51:06 accel.accel_negative_buffers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:59.791 10:51:06 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:08:59.791 ************************************ 00:08:59.791 END TEST accel_negative_buffers 00:08:59.791 ************************************ 00:08:59.791 10:51:06 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:08:59.791 10:51:06 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:08:59.791 10:51:06 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:59.791 10:51:06 accel -- common/autotest_common.sh@10 -- # set +x 00:08:59.791 ************************************ 00:08:59.791 START TEST accel_crc32c 00:08:59.791 ************************************ 00:08:59.791 10:51:06 accel.accel_crc32c -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w crc32c -S 32 -y 00:08:59.791 10:51:06 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:59.791 10:51:06 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:59.791 10:51:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:59.791 10:51:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:59.791 10:51:06 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:08:59.791 10:51:06 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:08:59.791 10:51:06 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:08:59.791 10:51:06 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:59.791 10:51:06 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:59.791 10:51:06 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:59.791 10:51:06 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:59.791 10:51:06 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:59.791 10:51:06 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:08:59.791 10:51:06 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:08:59.791 [2024-07-25 10:51:06.888020] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:59.791 [2024-07-25 10:51:06.888125] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3492943 ] 00:09:00.051 [2024-07-25 10:51:07.111428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.310 [2024-07-25 10:51:07.393415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:00.878
10:51:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:09:00.878 10:51:07 accel.accel_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@21 -- # 
case "$var" in 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:00.878 10:51:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:09:00.879 10:51:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:00.879 10:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:00.879 10:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:00.879 10:51:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:00.879 10:51:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:00.879 10:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:00.879 10:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:00.879 10:51:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:00.879 10:51:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:00.879 10:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:00.879 10:51:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:03.415 10:51:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:03.415 10:51:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:03.415 10:51:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:03.415 10:51:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:03.415 10:51:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:03.415 10:51:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:03.415 10:51:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:03.415 10:51:10 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:03.415 10:51:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:03.415 10:51:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:03.415 10:51:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:03.415 10:51:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:03.415 10:51:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:03.415 10:51:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:03.415 10:51:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:03.415 10:51:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:03.415 10:51:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:03.415 10:51:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:03.415 10:51:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:03.415 10:51:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:03.415 10:51:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:03.415 10:51:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:03.415 10:51:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:03.415 10:51:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:03.415 10:51:10 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:03.415 10:51:10 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:09:03.415 10:51:10 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:03.415 00:09:03.415 real 0m3.237s 00:09:03.415 user 0m0.011s 00:09:03.415 sys 0m0.002s 00:09:03.415 10:51:10 accel.accel_crc32c -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:03.415 10:51:10 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:09:03.415 ************************************ 00:09:03.415 END TEST accel_crc32c 00:09:03.415 ************************************ 00:09:03.415 
10:51:10 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:09:03.415 10:51:10 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:09:03.415 10:51:10 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:03.415 10:51:10 accel -- common/autotest_common.sh@10 -- # set +x 00:09:03.415 ************************************ 00:09:03.415 START TEST accel_crc32c_C2 00:09:03.415 ************************************ 00:09:03.415 10:51:10 accel.accel_crc32c_C2 -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w crc32c -y -C 2 00:09:03.415 10:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:09:03.415 10:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:09:03.415 10:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:03.415 10:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:03.415 10:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:09:03.415 10:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:09:03.415 10:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:09:03.415 10:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:03.415 10:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:03.415 10:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:03.415 10:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:03.415 10:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:03.415 10:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:09:03.415 10:51:10 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 
00:09:03.415 [2024-07-25 10:51:10.185725] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:03.415 [2024-07-25 10:51:10.185825] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3493510 ] 00:09:03.415 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:03.415 EAL: Requested device 0000:3d:01.0 cannot be used
[... identical "qat_pci_device_allocate(): Reached maximum number of QAT devices" / "EAL: Requested device ... cannot be used" pairs for devices 0000:3d:01.1 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:02.7 omitted ...]
00:09:03.416 [2024-07-25 10:51:10.409102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.676 [2024-07-25 10:51:10.691241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 
00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # 
val=Yes 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:03.936 10:51:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:06.470 10:51:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:06.470 10:51:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:06.470 10:51:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:06.470 10:51:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:06.470 10:51:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:06.470 10:51:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:06.470 10:51:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:06.470 10:51:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:06.470 10:51:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:06.470 10:51:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:06.470 10:51:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:06.470 10:51:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:06.470 10:51:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:06.470 10:51:13 accel.accel_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:09:06.470 10:51:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:06.470 10:51:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:06.470 10:51:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:06.470 10:51:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:06.470 10:51:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:06.470 10:51:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:06.470 10:51:13 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:06.470 10:51:13 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:06.470 10:51:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:06.470 10:51:13 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:06.470 10:51:13 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:06.470 10:51:13 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:09:06.470 10:51:13 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:06.470 00:09:06.470 real 0m3.237s 00:09:06.470 user 0m0.010s 00:09:06.470 sys 0m0.002s 00:09:06.470 10:51:13 accel.accel_crc32c_C2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:06.470 10:51:13 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:09:06.470 ************************************ 00:09:06.470 END TEST accel_crc32c_C2 00:09:06.470 ************************************ 00:09:06.470 10:51:13 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:09:06.470 10:51:13 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:09:06.470 10:51:13 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:06.470 10:51:13 accel -- common/autotest_common.sh@10 -- # set +x 00:09:06.470 ************************************ 00:09:06.470 START TEST accel_copy 00:09:06.470 
************************************ 00:09:06.470 10:51:13 accel.accel_copy -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w copy -y 00:09:06.470 10:51:13 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:09:06.470 10:51:13 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:09:06.470 10:51:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:06.470 10:51:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:06.470 10:51:13 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:09:06.470 10:51:13 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:09:06.470 10:51:13 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:09:06.470 10:51:13 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:06.470 10:51:13 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:06.470 10:51:13 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:06.470 10:51:13 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:06.470 10:51:13 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:06.470 10:51:13 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:09:06.470 10:51:13 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:09:06.470 [2024-07-25 10:51:13.544388] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:06.470 [2024-07-25 10:51:13.544618] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3494056 ] 00:09:06.729 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:06.729 EAL: Requested device 0000:3d:01.0 cannot be used
[... identical "qat_pci_device_allocate(): Reached maximum number of QAT devices" / "EAL: Requested device ... cannot be used" pairs for devices 0000:3d:01.1 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:02.7 omitted ...]
00:09:06.988 [2024-07-25 10:51:13.914100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.247 [2024-07-25 10:51:14.179711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.505 10:51:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:07.505 10:51:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:07.505 10:51:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:07.505 10:51:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:07.505 10:51:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:07.505 10:51:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:07.505 10:51:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:07.505 10:51:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:07.505 10:51:14 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:09:07.505 10:51:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:07.505 10:51:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:07.505 10:51:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:07.505 10:51:14 accel.accel_copy -- 
accel/accel.sh@20 -- # val= 00:09:07.505 10:51:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:07.505 10:51:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:07.505 10:51:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:07.505 10:51:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:07.505 10:51:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:07.505 10:51:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:07.505 10:51:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:07.505 10:51:14 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:09:07.505 10:51:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:07.505 10:51:14 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:09:07.505 10:51:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:07.505 10:51:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:07.505 10:51:14 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:07.505 10:51:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:07.505 10:51:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:07.505 10:51:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:07.505 10:51:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:07.505 10:51:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:07.505 10:51:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:07.505 10:51:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:07.505 10:51:14 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:09:07.505 10:51:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:07.505 10:51:14 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:09:07.505 10:51:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:07.505 10:51:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:07.505 10:51:14 accel.accel_copy 
-- accel/accel.sh@20 -- # val=32 00:09:07.505 10:51:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:07.505 10:51:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:07.506 10:51:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:07.506 10:51:14 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:09:07.506 10:51:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:07.506 10:51:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:07.506 10:51:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:07.506 10:51:14 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:09:07.506 10:51:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:07.506 10:51:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:07.506 10:51:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:07.506 10:51:14 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:09:07.506 10:51:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:07.506 10:51:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:07.506 10:51:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:07.506 10:51:14 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:09:07.506 10:51:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:07.506 10:51:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:07.506 10:51:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:07.506 10:51:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:07.506 10:51:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:07.506 10:51:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:07.506 10:51:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:07.506 10:51:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:07.506 10:51:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:07.506 10:51:14 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:09:07.506 10:51:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:09:10.039 10:51:16 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:09:10.039 10:51:16 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:10.039 10:51:16 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:09:10.039 10:51:16 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
[... 5 further identical "val=" read cycles omitted ...]
00:09:10.039 10:51:16 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n 
software ]] 00:09:10.039 10:51:16 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:09:10.039 10:51:16 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:10.039 00:09:10.039 real 0m3.386s 00:09:10.039 user 0m0.011s 00:09:10.039 sys 0m0.000s 00:09:10.039 10:51:16 accel.accel_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:10.039 10:51:16 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:09:10.039 ************************************ 00:09:10.039 END TEST accel_copy 00:09:10.039 ************************************ 00:09:10.040 10:51:16 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:10.040 10:51:16 accel -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:09:10.040 10:51:16 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:10.040 10:51:16 accel -- common/autotest_common.sh@10 -- # set +x 00:09:10.040 ************************************ 00:09:10.040 START TEST accel_fill 00:09:10.040 ************************************ 00:09:10.040 10:51:16 accel.accel_fill -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:10.040 10:51:16 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:09:10.040 10:51:16 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:09:10.040 10:51:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:10.040 10:51:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:10.040 10:51:16 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:10.040 10:51:16 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:10.040 10:51:16 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:09:10.040 10:51:16 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 
00:09:10.040 10:51:16 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:10.040 10:51:16 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:10.040 10:51:16 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:10.040 10:51:16 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:10.040 10:51:16 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:09:10.040 10:51:16 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:09:10.040 [2024-07-25 10:51:16.961075] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:10.040 [2024-07-25 10:51:16.961184] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3494606 ] 00:09:10.040 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:10.040 EAL: Requested device 0000:3d:01.0 cannot be used
[... identical "qat_pci_device_allocate(): Reached maximum number of QAT devices" / "EAL: Requested device ... cannot be used" pairs for devices 0000:3d:01.1 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:02.7 omitted ...]
00:09:10.299 [2024-07-25 10:51:17.186288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.558 [2024-07-25 10:51:17.469900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 
00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 
00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:10.817 10:51:17 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:09:10.818 10:51:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:10.818 10:51:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:10.818 10:51:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:10.818 10:51:17 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:09:10.818 10:51:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:10.818 10:51:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:10.818 10:51:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:10.818 10:51:17 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:09:10.818 10:51:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:10.818 10:51:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:10.818 10:51:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:10.818 10:51:17 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:09:10.818 10:51:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:10.818 10:51:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 
00:09:10.818 10:51:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:10.818 10:51:17 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:09:10.818 10:51:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:10.818 10:51:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:10.818 10:51:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:10.818 10:51:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:10.818 10:51:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:10.818 10:51:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:10.818 10:51:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:10.818 10:51:17 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:10.818 10:51:17 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:10.818 10:51:17 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:10.818 10:51:17 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:13.377 10:51:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:13.377 10:51:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:13.377 10:51:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:13.377 10:51:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:13.377 10:51:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:13.377 10:51:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:13.377 10:51:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:13.377 10:51:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:13.377 10:51:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:13.377 10:51:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:09:13.377 10:51:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:09:13.377 10:51:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:09:13.377 10:51:20 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:09:13.377 10:51:20 
accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:09:13.377 10:51:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:09:13.377 10:51:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:09:13.377 10:51:20 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:09:13.377 10:51:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:09:13.377 10:51:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:09:13.377 10:51:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:09:13.377 10:51:20 accel.accel_fill -- accel/accel.sh@20 -- # val=
00:09:13.377 10:51:20 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in
00:09:13.377 10:51:20 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:09:13.377 10:51:20 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:09:13.377 10:51:20 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]]
00:09:13.377 10:51:20 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]]
00:09:13.377 10:51:20 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:09:13.377
00:09:13.377 real 0m3.227s
00:09:13.377 user 0m0.010s
00:09:13.377 sys 0m0.003s
00:09:13.377 10:51:20 accel.accel_fill -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:13.377 10:51:20 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x
00:09:13.377 ************************************
00:09:13.377 END TEST accel_fill
00:09:13.377 ************************************
00:09:13.377 10:51:20 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:09:13.377 10:51:20 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:09:13.377 10:51:20 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:13.377 10:51:20 accel -- common/autotest_common.sh@10 -- # set +x
00:09:13.377 ************************************
00:09:13.377 START TEST accel_copy_crc32c
00:09:13.377 ************************************
00:09:13.377 10:51:20
accel.accel_copy_crc32c -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w copy_crc32c -y
00:09:13.377 10:51:20 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc
00:09:13.377 10:51:20 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module
00:09:13.377 10:51:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:09:13.377 10:51:20 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:09:13.377 10:51:20 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
00:09:13.377 10:51:20 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:09:13.377 10:51:20 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config
00:09:13.377 10:51:20 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=()
00:09:13.377 10:51:20 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:09:13.377 10:51:20 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:13.377 10:51:20 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:13.377 10:51:20 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]]
00:09:13.377 10:51:20 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=,
00:09:13.377 10:51:20 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r .
00:09:13.377 [2024-07-25 10:51:20.263246] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:09:13.377 [2024-07-25 10:51:20.263347] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3495210 ]
00:09:13.377 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.377 EAL: Requested device 0000:3d:01.0 cannot be used
00:09:13.377 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.377 EAL: Requested device 0000:3d:01.1 cannot be used
00:09:13.377 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.377 EAL: Requested device 0000:3d:01.2 cannot be used
00:09:13.377 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.377 EAL: Requested device 0000:3d:01.3 cannot be used
00:09:13.378 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.378 EAL: Requested device 0000:3d:01.4 cannot be used
00:09:13.378 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.378 EAL: Requested device 0000:3d:01.5 cannot be used
00:09:13.378 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.378 EAL: Requested device 0000:3d:01.6 cannot be used
00:09:13.378 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.378 EAL: Requested device 0000:3d:01.7 cannot be used
00:09:13.378 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.378 EAL: Requested device 0000:3d:02.0 cannot be used
00:09:13.378 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.378 EAL: Requested device 0000:3d:02.1 cannot be used
00:09:13.378 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.378 EAL: Requested device 0000:3d:02.2 cannot be used
00:09:13.378 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.378 EAL: Requested device 0000:3d:02.3 cannot be used
00:09:13.378 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.378 EAL: Requested device 0000:3d:02.4 cannot be used
00:09:13.378 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.378 EAL: Requested device 0000:3d:02.5 cannot be used
00:09:13.378 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.378 EAL: Requested device 0000:3d:02.6 cannot be used
00:09:13.378 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.378 EAL: Requested device 0000:3d:02.7 cannot be used
00:09:13.378 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.378 EAL: Requested device 0000:3f:01.0 cannot be used
00:09:13.378 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.378 EAL: Requested device 0000:3f:01.1 cannot be used
00:09:13.378 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.378 EAL: Requested device 0000:3f:01.2 cannot be used
00:09:13.378 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.378 EAL: Requested device 0000:3f:01.3 cannot be used
00:09:13.378 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.378 EAL: Requested device 0000:3f:01.4 cannot be used
00:09:13.378 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.378 EAL: Requested device 0000:3f:01.5 cannot be used
00:09:13.378 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.378 EAL: Requested device 0000:3f:01.6 cannot be used
00:09:13.378 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.378 EAL: Requested device 0000:3f:01.7 cannot be used
00:09:13.378 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.378 EAL: Requested device 0000:3f:02.0 cannot be used
00:09:13.378 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.378 EAL: Requested device 0000:3f:02.1 cannot be used
00:09:13.378 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.378 EAL: Requested device 0000:3f:02.2 cannot be used
00:09:13.378 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.378 EAL: Requested device 0000:3f:02.3 cannot be used
00:09:13.378 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.378 EAL: Requested device 0000:3f:02.4 cannot be used
00:09:13.378 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.378 EAL: Requested device 0000:3f:02.5 cannot be used
00:09:13.378 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.378 EAL: Requested device 0000:3f:02.6 cannot be used
00:09:13.378 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:13.378 EAL: Requested device 0000:3f:02.7 cannot be used
00:09:13.378 [2024-07-25 10:51:20.487014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:13.946 [2024-07-25 10:51:20.769093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1
00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:09:14.205 10:51:21 accel.accel_copy_crc32c --
accel/accel.sh@19 -- # read -r var val 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:14.205 
10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 
00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:14.205 10:51:21 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:16.740 10:51:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:16.740 10:51:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:16.740 10:51:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:16.740 10:51:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:16.740 10:51:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:16.740 10:51:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:16.740 10:51:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:16.740 10:51:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:16.740 10:51:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:09:16.740 
10:51:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:09:16.740 10:51:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:09:16.740 10:51:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:09:16.740 10:51:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:09:16.740 10:51:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:09:16.740 10:51:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:09:16.740 10:51:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:09:16.740 10:51:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=
00:09:16.740 10:51:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in
00:09:16.740 10:51:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:09:16.740 10:51:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:09:16.740 10:51:23 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:09:16.740 10:51:23 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:09:16.740 10:51:23 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:09:16.740
00:09:16.740 real 0m3.232s
00:09:16.740 user 0m0.011s
00:09:16.740 sys 0m0.001s
00:09:16.740 10:51:23 accel.accel_copy_crc32c -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:16.740 10:51:23 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x
00:09:16.740 ************************************
00:09:16.740 END TEST accel_copy_crc32c
00:09:16.740 ************************************
00:09:16.740 10:51:23 accel -- accel/accel.sh@106 -- #
run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:09:16.740 10:51:23 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']'
00:09:16.740 10:51:23 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:16.740 10:51:23 accel -- common/autotest_common.sh@10 -- # set +x
00:09:16.740 ************************************
00:09:16.740 START TEST accel_copy_crc32c_C2
00:09:16.740 ************************************
00:09:16.740 10:51:23 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w copy_crc32c -y -C 2
00:09:16.740 10:51:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc
00:09:16.740 10:51:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module
00:09:16.740 10:51:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:09:16.740 10:51:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:09:16.740 10:51:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:09:16.740 10:51:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:09:16.740 10:51:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:09:16.740 10:51:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:09:16.740 10:51:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:09:16.740 10:51:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:16.740 10:51:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:16.740 10:51:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:09:16.740 10:51:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=,
00:09:16.740 10:51:23 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r .
00:09:16.741 [2024-07-25 10:51:23.571152] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:09:16.741 [2024-07-25 10:51:23.571257] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3495860 ]
00:09:16.741 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:16.741 EAL: Requested device 0000:3d:01.0 cannot be used
00:09:16.741 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:16.741 EAL: Requested device 0000:3d:01.1 cannot be used
00:09:16.741 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:16.741 EAL: Requested device 0000:3d:01.2 cannot be used
00:09:16.741 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:16.741 EAL: Requested device 0000:3d:01.3 cannot be used
00:09:16.741 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:16.741 EAL: Requested device 0000:3d:01.4 cannot be used
00:09:16.741 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:16.741 EAL: Requested device 0000:3d:01.5 cannot be used
00:09:16.741 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:16.741 EAL: Requested device 0000:3d:01.6 cannot be used
00:09:16.741 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:16.741 EAL: Requested device 0000:3d:01.7 cannot be used
00:09:16.741 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:16.741 EAL: Requested device 0000:3d:02.0 cannot be used
00:09:16.741 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:16.741 EAL: Requested device 0000:3d:02.1 cannot be used
00:09:16.741 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:16.741 EAL: Requested device 0000:3d:02.2 cannot be used
00:09:16.741 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:16.741 EAL: Requested device 0000:3d:02.3 cannot be used
00:09:16.741 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:16.741 EAL: Requested device 0000:3d:02.4 cannot be used
00:09:16.741 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:16.741 EAL: Requested device 0000:3d:02.5 cannot be used
00:09:16.741 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:16.741 EAL: Requested device 0000:3d:02.6 cannot be used
00:09:16.741 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:16.741 EAL: Requested device 0000:3d:02.7 cannot be used
00:09:16.741 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:16.741 EAL: Requested device 0000:3f:01.0 cannot be used
00:09:16.741 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:16.741 EAL: Requested device 0000:3f:01.1 cannot be used
00:09:16.741 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:16.741 EAL: Requested device 0000:3f:01.2 cannot be used
00:09:16.741 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:16.741 EAL: Requested device 0000:3f:01.3 cannot be used
00:09:16.741 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:16.741 EAL: Requested device 0000:3f:01.4 cannot be used
00:09:16.741 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:16.741 EAL: Requested device 0000:3f:01.5 cannot be used
00:09:16.741 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:16.741 EAL: Requested device 0000:3f:01.6 cannot be used
00:09:16.741 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:16.741 EAL: Requested device 0000:3f:01.7 cannot be used
00:09:16.741 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:16.741 EAL: Requested device 0000:3f:02.0 cannot be used
00:09:16.741 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:16.741 EAL: Requested device 0000:3f:02.1 cannot be used
00:09:16.741 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:16.741 EAL: Requested device 0000:3f:02.2 cannot be used
00:09:16.741 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:16.741 EAL: Requested device 0000:3f:02.3 cannot be used
00:09:16.741 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:16.741 EAL: Requested device 0000:3f:02.4 cannot be used
00:09:16.741 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:16.741 EAL: Requested device 0000:3f:02.5 cannot be used
00:09:16.741 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:16.741 EAL: Requested device 0000:3f:02.6 cannot be used
00:09:16.741 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:16.741 EAL: Requested device 0000:3f:02.7 cannot be used
00:09:16.741 [2024-07-25 10:51:23.797510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:17.000 [2024-07-25 10:51:24.060189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=
00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in
00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2
-- accel/accel.sh@21 -- # case "$var" in 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:17.569 10:51:24 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:09:17.569 
10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:17.569 10:51:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:20.107 10:51:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:20.107 10:51:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:20.107 10:51:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:20.107 10:51:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:20.107 10:51:26 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:20.107 10:51:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:20.107 10:51:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:20.107 10:51:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:20.107 10:51:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:20.107 10:51:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:20.107 10:51:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:20.107 10:51:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:20.107 10:51:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:20.107 10:51:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:20.107 10:51:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:20.107 10:51:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:20.107 10:51:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:20.107 10:51:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:20.107 10:51:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:20.107 10:51:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:20.107 10:51:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:09:20.107 10:51:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:09:20.107 10:51:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:20.107 10:51:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:20.107 10:51:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:20.107 10:51:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:09:20.107 10:51:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:20.107 
00:09:20.107 real 0m3.214s
00:09:20.107 user 0m0.011s
00:09:20.107 sys 0m0.001s
00:09:20.107 10:51:26 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:20.107 10:51:26 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:09:20.107 ************************************
00:09:20.107 END TEST accel_copy_crc32c_C2
00:09:20.107 ************************************
00:09:20.107 10:51:26 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:09:20.107 10:51:26 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:09:20.107 10:51:26 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:20.107 10:51:26 accel -- common/autotest_common.sh@10 -- # set +x
00:09:20.107 ************************************
00:09:20.107 START TEST accel_dualcast
00:09:20.107 ************************************
00:09:20.107 10:51:26 accel.accel_dualcast -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w dualcast -y
00:09:20.107 10:51:26 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc
00:09:20.107 10:51:26 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module
00:09:20.107 10:51:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:09:20.107 10:51:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:09:20.107 10:51:26 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:09:20.107 10:51:26 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:09:20.107 10:51:26 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config
00:09:20.107 10:51:26 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=()
00:09:20.107 10:51:26 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:09:20.107 10:51:26 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:20.107 10:51:26
accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:20.107 10:51:26 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:20.107 10:51:26 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:09:20.107 10:51:26 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:09:20.107 [2024-07-25 10:51:26.858160] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:20.107 [2024-07-25 10:51:26.858264] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3496496 ] 00:09:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.107 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.107 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.107 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.107 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.107 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.107 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.107 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.107 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.107 EAL: Requested device 0000:3d:02.0 cannot be used 
00:09:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.107 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.107 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.107 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.107 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.107 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.107 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.107 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.107 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.107 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.107 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.107 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.107 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.107 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.107 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:20.107 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.107 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.107 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.107 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.107 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.107 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.107 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.107 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:20.107 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.108 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:20.108 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:20.108 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:20.108 [2024-07-25 10:51:27.078662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.366 [2024-07-25 10:51:27.361464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.624 10:51:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:20.624 10:51:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:20.624 10:51:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:20.624 10:51:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:20.624 10:51:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:20.624 10:51:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:20.624 10:51:27 accel.accel_dualcast -- 
accel/accel.sh@19 -- # IFS=: 00:09:20.624 10:51:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:20.624 10:51:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:09:20.624 10:51:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:20.624 10:51:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:20.624 10:51:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:20.624 10:51:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:20.624 10:51:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:20.624 10:51:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:20.624 10:51:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:20.624 10:51:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:20.624 10:51:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:20.624 10:51:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:20.624 10:51:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:20.624 10:51:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:09:20.624 10:51:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:20.624 10:51:27 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:09:20.624 10:51:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:20.624 10:51:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:20.624 10:51:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:20.624 10:51:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:20.624 10:51:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:20.624 10:51:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:20.624 10:51:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:20.624 10:51:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:20.624 10:51:27 
accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:20.624 10:51:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:20.624 10:51:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:09:20.624 10:51:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:20.624 10:51:27 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:09:20.624 10:51:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:20.624 10:51:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:20.624 10:51:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:09:20.624 10:51:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:20.624 10:51:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:20.624 10:51:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:20.624 10:51:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:09:20.625 10:51:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:20.625 10:51:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:20.625 10:51:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:20.625 10:51:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:09:20.625 10:51:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:20.625 10:51:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:20.625 10:51:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:20.625 10:51:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:09:20.625 10:51:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:20.625 10:51:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:20.625 10:51:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:20.625 10:51:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:09:20.625 10:51:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case 
"$var" in 00:09:20.625 10:51:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:20.625 10:51:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:20.625 10:51:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:20.625 10:51:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:20.625 10:51:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:20.625 10:51:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:20.625 10:51:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:20.625 10:51:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:20.625 10:51:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:20.625 10:51:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:23.155 10:51:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:23.155 10:51:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:23.155 10:51:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:23.155 10:51:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:23.155 10:51:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:23.155 10:51:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:23.155 10:51:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:23.155 10:51:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:23.155 10:51:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:23.155 10:51:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:23.155 10:51:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:09:23.155 10:51:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:09:23.155 10:51:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:09:23.155 10:51:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:09:23.155 10:51:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 
00:09:23.155 10:51:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:09:23.155 10:51:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:09:23.155 10:51:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:09:23.155 10:51:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:09:23.155 10:51:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:09:23.155 10:51:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=
00:09:23.155 10:51:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in
00:09:23.155 10:51:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:09:23.155 10:51:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:09:23.155 10:51:30 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:09:23.155 10:51:30 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:09:23.155 10:51:30 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:09:23.155
00:09:23.155 real 0m3.212s
00:09:23.155 user 0m0.011s
00:09:23.155 sys 0m0.000s
00:09:23.155 10:51:30 accel.accel_dualcast -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:23.155 10:51:30 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x
00:09:23.155 ************************************
00:09:23.155 END TEST accel_dualcast
00:09:23.155 ************************************
00:09:23.155 10:51:30 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:09:23.155 10:51:30 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:09:23.155 10:51:30 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:23.155 10:51:30 accel -- common/autotest_common.sh@10 -- # set +x
00:09:23.155 ************************************
00:09:23.155 START TEST accel_compare
00:09:23.155 ************************************
00:09:23.155 10:51:30 accel.accel_compare -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w
compare -y 00:09:23.155 10:51:30 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:09:23.155 10:51:30 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:09:23.155 10:51:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:23.155 10:51:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:23.155 10:51:30 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:09:23.155 10:51:30 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:09:23.155 10:51:30 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:09:23.155 10:51:30 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:23.155 10:51:30 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:23.155 10:51:30 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:23.155 10:51:30 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:23.155 10:51:30 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:23.155 10:51:30 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:09:23.155 10:51:30 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:09:23.155 [2024-07-25 10:51:30.154386] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:23.155 [2024-07-25 10:51:30.154486] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3497044 ] 00:09:23.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:23.414 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:23.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:23.414 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:23.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:23.414 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:23.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:23.414 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:23.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:23.414 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:23.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:23.414 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:23.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:23.414 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:23.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:23.414 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:23.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:23.414 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:23.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:23.414 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:23.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:23.414 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:23.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:23.414 EAL: Requested device 0000:3d:02.3 cannot be used 
00:09:23.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:23.414 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:23.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:23.414 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:23.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:23.414 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:23.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:23.414 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:23.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:23.414 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:23.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:23.414 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:23.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:23.414 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:23.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:23.414 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:23.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:23.414 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:23.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:23.414 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:23.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:23.414 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:23.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:23.414 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:23.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:23.414 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:23.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:23.414 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:23.414 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:23.414 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:23.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:23.414 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:23.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:23.414 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:23.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:23.414 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:23.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:23.414 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:23.414 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:23.414 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:23.414 [2024-07-25 10:51:30.377769] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.673 [2024-07-25 10:51:30.668321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.932 10:51:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:23.932 10:51:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:23.932 10:51:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:23.932 10:51:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:23.932 10:51:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:23.932 10:51:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:23.932 10:51:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:23.932 10:51:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:23.932 10:51:31 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:09:23.932 10:51:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:23.932 10:51:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:23.932 10:51:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 
00:09:23.932 10:51:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@19 -- # 
IFS=: 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 
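The `IFS=:` / `read -r var val` / `case "$var"` records that dominate the trace above are bash xtrace output from the accel.sh config loop. A hedged sketch of that shell pattern, with made-up key names and input lines rather than the harness's real stream:

```shell
# Sketch of the parsing pattern visible in the accel.sh xtrace: with IFS=:,
# `read -r var val` splits each "name:value" line at the first colon, and a
# case statement dispatches on the name. Keys and values are illustrative.
while IFS=: read -r var val; do
    case "$var" in
        opcode) echo "opcode=$val" ;;
        block_size) echo "block_size=$val" ;;
        *) : ;;  # unrecognized keys are ignored
    esac
done <<'EOF'
opcode:compare
block_size:4096
comment:not a recognized key
EOF
```

The loop prints `opcode=compare` and `block_size=4096` and silently skips the third line, which is why only recognized `val=` assignments show up in the trace.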
00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:23.933 10:51:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:26.465 10:51:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:26.465 10:51:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:26.465 10:51:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:26.465 10:51:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:26.465 10:51:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:26.465 10:51:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:26.465 10:51:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:26.465 10:51:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:26.465 10:51:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:26.465 10:51:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:26.465 10:51:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:26.465 10:51:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:26.465 10:51:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:26.465 10:51:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:26.465 10:51:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:26.465 10:51:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:26.465 10:51:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:26.465 10:51:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:26.465 10:51:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:26.465 10:51:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:26.465 10:51:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:26.465 10:51:33 accel.accel_compare 
-- accel/accel.sh@21 -- # case "$var" in 00:09:26.465 10:51:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:26.465 10:51:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:26.465 10:51:33 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:26.465 10:51:33 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:09:26.465 10:51:33 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:26.465 00:09:26.465 real 0m3.222s 00:09:26.465 user 0m0.011s 00:09:26.465 sys 0m0.002s 00:09:26.465 10:51:33 accel.accel_compare -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:26.465 10:51:33 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:09:26.465 ************************************ 00:09:26.465 END TEST accel_compare 00:09:26.465 ************************************ 00:09:26.465 10:51:33 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:09:26.465 10:51:33 accel -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:09:26.465 10:51:33 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:26.465 10:51:33 accel -- common/autotest_common.sh@10 -- # set +x 00:09:26.465 ************************************ 00:09:26.465 START TEST accel_xor 00:09:26.465 ************************************ 00:09:26.465 10:51:33 accel.accel_xor -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w xor -y 00:09:26.465 10:51:33 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:09:26.465 10:51:33 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:09:26.465 10:51:33 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:26.466 10:51:33 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:26.466 10:51:33 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:09:26.466 10:51:33 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf 
-c /dev/fd/62 -t 1 -w xor -y 00:09:26.466 10:51:33 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:09:26.466 10:51:33 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:26.466 10:51:33 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:26.466 10:51:33 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:26.466 10:51:33 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:26.466 10:51:33 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:26.466 10:51:33 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:09:26.466 10:51:33 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:09:26.466 [2024-07-25 10:51:33.444312] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:26.466 [2024-07-25 10:51:33.444417] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3497590 ] 00:09:26.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.466 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:26.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.466 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:26.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.466 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:26.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.466 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:26.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.466 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:26.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.466 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:26.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:09:26.466 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:26.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.466 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:26.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.466 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:26.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.466 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:26.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.466 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:26.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.466 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:26.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.466 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:26.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.466 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:26.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.466 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:26.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.466 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:26.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.466 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:26.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.466 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:26.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.466 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:26.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.466 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:26.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.466 EAL: 
Requested device 0000:3f:01.4 cannot be used 00:09:26.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.466 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:26.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.466 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:26.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.466 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:26.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.466 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:26.466 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.466 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:26.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.725 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:26.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.725 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:26.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.725 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:26.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.725 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:26.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.725 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:26.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:26.725 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:26.725 [2024-07-25 10:51:33.667588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.984 [2024-07-25 10:51:33.952904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.243 10:51:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:27.243 10:51:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:27.243 10:51:34 accel.accel_xor -- 
accel/accel.sh@19 -- # IFS=: 00:09:27.243 10:51:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:27.243 10:51:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:27.243 10:51:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:27.244 
10:51:34 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:09:27.244 10:51:34 
accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:27.244 10:51:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:29.776 10:51:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:29.776 10:51:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:29.776 10:51:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:29.776 10:51:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:29.776 10:51:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:29.776 10:51:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:29.776 10:51:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:29.776 10:51:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:29.776 10:51:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:29.776 10:51:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:29.776 10:51:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:29.776 10:51:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 
00:09:29.776 10:51:36 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:09:29.776 10:51:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:09:29.776 10:51:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:09:29.776 10:51:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:09:29.776 10:51:36 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:09:29.776 10:51:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:09:29.776 10:51:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:09:29.776 10:51:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:09:29.776 10:51:36 accel.accel_xor -- accel/accel.sh@20 -- # val=
00:09:29.776 10:51:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in
00:09:29.776 10:51:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:09:29.776 10:51:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:09:29.776 10:51:36 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:09:29.776 10:51:36 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:09:29.776 10:51:36 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:09:29.776 
00:09:29.776 real 0m3.262s
00:09:29.776 user 0m0.011s
00:09:29.776 sys 0m0.000s
00:09:29.776 10:51:36 accel.accel_xor -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:29.776 10:51:36 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:09:29.776 ************************************
00:09:29.776 END TEST accel_xor
00:09:29.776 ************************************
00:09:29.776 10:51:36 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:09:29.776 10:51:36 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']'
00:09:29.776 10:51:36 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:29.776 10:51:36 accel -- common/autotest_common.sh@10 -- # set +x
00:09:29.776 ************************************
00:09:29.776 START TEST accel_xor
************************************
00:09:29.776 10:51:36 accel.accel_xor -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w xor -y -x 3
00:09:29.776 10:51:36 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc
00:09:29.776 10:51:36 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module
00:09:29.776 10:51:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=:
00:09:29.776 10:51:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val
00:09:29.776 10:51:36 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3
00:09:29.776 10:51:36 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:09:29.776 10:51:36 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config
00:09:29.776 10:51:36 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=()
00:09:29.776 10:51:36 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:09:29.776 10:51:36 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:29.776 10:51:36 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:29.776 10:51:36 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]]
00:09:29.776 10:51:36 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=,
00:09:29.776 10:51:36 accel.accel_xor -- accel/accel.sh@41 -- # jq -r .
00:09:29.776 [2024-07-25 10:51:36.777286] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
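The `accel_perf -c /dev/fd/62` invocation in the trace above works because the harness builds a JSON accel config and hands it to the binary through bash process substitution, so no temp file is needed. A minimal sketch of that mechanism, with `cat` standing in for `accel_perf` (the SPDK build tree is not assumed here) and a deliberately made-up, non-SPDK-schema config string:

```shell
# Process-substitution sketch: the config is exposed to the child process
# as a /dev/fd/NN path. `cat` stands in for accel_perf; the JSON content
# below is illustrative only, not SPDK's real accel config schema.
accel_json_cfg='{"module": "software"}'
cat <(printf '%s\n' "$accel_json_cfg")
```

Running this under bash prints the JSON back, which is exactly what `accel_perf` reads from `/dev/fd/62` in the log. Note that `<(...)` is a bash/zsh feature, not POSIX sh.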
00:09:29.776 [2024-07-25 10:51:36.777386] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3498148 ] 00:09:30.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.035 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:30.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.035 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:30.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.035 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:30.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.035 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:30.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.035 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:30.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.035 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:30.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.035 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:30.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.035 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:30.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.035 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:30.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.035 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:30.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.035 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:30.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.035 EAL: Requested device 0000:3d:02.3 cannot be used 
00:09:30.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.035 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:30.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.035 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:30.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.035 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:30.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.035 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:30.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.035 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:30.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.035 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:30.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.035 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:30.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.035 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:30.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.035 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:30.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.035 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:30.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.035 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:30.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.035 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:30.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.035 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:30.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.035 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:30.035 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.035 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:30.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.035 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:30.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.035 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:30.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.035 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:30.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.036 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:30.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:30.036 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:30.036 [2024-07-25 10:51:37.002265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.294 [2024-07-25 10:51:37.288381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.588 10:51:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:30.588 10:51:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:30.588 10:51:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:30.588 10:51:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:30.588 10:51:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:30.588 10:51:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:30.588 10:51:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:30.588 10:51:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:30.588 10:51:37 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:09:30.588 10:51:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:30.588 10:51:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:30.588 10:51:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:30.588 10:51:37 accel.accel_xor -- accel/accel.sh@20 
-- # val= 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:30.589 
10:51:37 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:30.589 10:51:37 accel.accel_xor -- 
accel/accel.sh@19 -- # read -r var val 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:30.589 10:51:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:33.147 10:51:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:33.147 10:51:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:33.147 10:51:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:33.147 10:51:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:33.147 10:51:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:33.147 10:51:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:33.147 10:51:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:33.147 10:51:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:33.147 10:51:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:33.147 10:51:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:33.147 10:51:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:33.147 10:51:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:33.147 10:51:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:33.147 10:51:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:33.147 10:51:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:33.147 10:51:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:33.147 10:51:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:33.147 10:51:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:33.147 10:51:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:33.147 10:51:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:33.147 10:51:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:33.147 10:51:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:33.147 10:51:39 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:33.147 10:51:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:33.147 10:51:39 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:33.147 10:51:39 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:09:33.147 10:51:39 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:33.147 00:09:33.147 real 0m3.254s 00:09:33.147 user 0m0.010s 00:09:33.147 sys 0m0.001s 00:09:33.147 10:51:39 accel.accel_xor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:33.147 10:51:39 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:09:33.147 ************************************ 00:09:33.147 END TEST accel_xor 00:09:33.147 ************************************ 00:09:33.147 10:51:40 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:09:33.147 10:51:40 accel -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:09:33.147 10:51:40 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:33.147 10:51:40 accel -- common/autotest_common.sh@10 -- # set +x 00:09:33.147 ************************************ 00:09:33.147 START TEST accel_dif_verify 00:09:33.147 ************************************ 00:09:33.147 10:51:40 accel.accel_dif_verify -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w dif_verify 00:09:33.147 10:51:40 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:09:33.147 10:51:40 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:09:33.147 10:51:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:33.147 10:51:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:33.147 10:51:40 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:09:33.147 10:51:40 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
dif_verify 00:09:33.147 10:51:40 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:09:33.147 10:51:40 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:33.147 10:51:40 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:33.147 10:51:40 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:33.147 10:51:40 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:33.147 10:51:40 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:33.147 10:51:40 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:09:33.147 10:51:40 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:09:33.147 [2024-07-25 10:51:40.108118] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:33.147 [2024-07-25 10:51:40.108233] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3498696 ] 00:09:33.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.147 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:33.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.147 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:33.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.147 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:33.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.147 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:33.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.147 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:33.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.147 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:33.147 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:09:33.147 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:33.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.147 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:33.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.147 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:33.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.147 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:33.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.147 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:33.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.147 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:33.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.147 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:33.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.147 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:33.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.147 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:33.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.147 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:33.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.147 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:33.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.147 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:33.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.147 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:33.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.147 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:33.147 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:09:33.147 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:33.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.147 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:33.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.147 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:33.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.147 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:33.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.147 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:33.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.147 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:33.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.147 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:33.147 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.147 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:33.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.148 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:33.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.148 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:33.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.148 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:33.148 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:33.148 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:33.406 [2024-07-25 10:51:40.333193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.664 [2024-07-25 10:51:40.623794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 
00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:33.923 
10:51:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 
00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:33.923 10:51:40 
accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:33.923 10:51:40 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:36.455 10:51:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:36.455 10:51:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:36.455 10:51:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:36.455 10:51:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:36.455 10:51:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:36.455 10:51:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:36.455 10:51:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:36.455 10:51:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:36.455 10:51:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:36.455 10:51:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:36.455 10:51:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:36.455 10:51:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:36.455 10:51:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:36.455 10:51:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:36.455 10:51:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:36.455 10:51:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:36.455 10:51:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:36.455 10:51:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:36.455 10:51:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:36.455 10:51:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:36.455 10:51:43 accel.accel_dif_verify -- accel/accel.sh@20 
-- # val= 00:09:36.455 10:51:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:36.455 10:51:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:36.455 10:51:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:36.455 10:51:43 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:36.455 10:51:43 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:09:36.455 10:51:43 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:36.455 00:09:36.455 real 0m3.239s 00:09:36.455 user 0m0.012s 00:09:36.455 sys 0m0.002s 00:09:36.455 10:51:43 accel.accel_dif_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:36.455 10:51:43 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:09:36.455 ************************************ 00:09:36.455 END TEST accel_dif_verify 00:09:36.455 ************************************ 00:09:36.455 10:51:43 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:09:36.455 10:51:43 accel -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:09:36.455 10:51:43 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:36.455 10:51:43 accel -- common/autotest_common.sh@10 -- # set +x 00:09:36.455 ************************************ 00:09:36.455 START TEST accel_dif_generate 00:09:36.455 ************************************ 00:09:36.455 10:51:43 accel.accel_dif_generate -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w dif_generate 00:09:36.455 10:51:43 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:09:36.455 10:51:43 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:09:36.455 10:51:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:36.455 10:51:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:36.455 10:51:43 accel.accel_dif_generate -- accel/accel.sh@15 -- 
# accel_perf -t 1 -w dif_generate 00:09:36.455 10:51:43 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:09:36.455 10:51:43 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:09:36.455 10:51:43 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:36.455 10:51:43 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:36.455 10:51:43 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:36.455 10:51:43 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:36.455 10:51:43 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:36.455 10:51:43 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:09:36.455 10:51:43 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:09:36.455 [2024-07-25 10:51:43.402237] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:36.455 [2024-07-25 10:51:43.402340] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3499243 ] 00:09:36.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:36.455 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:36.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:36.455 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:36.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:36.455 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:36.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:36.455 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:36.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:36.455 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:36.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:36.455 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:36.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:36.455 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:36.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:36.455 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:36.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:36.455 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:36.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:36.455 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:36.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:36.455 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:36.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:36.455 EAL: Requested device 0000:3d:02.3 cannot be used 
00:09:36.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:36.455 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:36.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:36.455 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:36.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:36.455 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:36.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:36.455 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:36.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:36.455 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:36.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:36.455 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:36.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:36.456 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:36.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:36.456 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:36.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:36.456 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:36.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:36.456 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:36.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:36.456 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:36.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:36.456 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:36.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:36.456 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:36.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:36.456 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:36.456 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:36.456 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:36.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:36.456 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:36.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:36.456 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:36.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:36.456 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:36.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:36.456 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:36.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:36.456 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:36.714 [2024-07-25 10:51:43.628505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.973 [2024-07-25 10:51:43.924561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.230 10:51:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:37.230 10:51:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:37.230 10:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:37.230 10:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:37.230 10:51:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:37.230 10:51:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:37.230 10:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:37.230 10:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:37.230 10:51:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:09:37.230 10:51:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:37.230 10:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:37.230 10:51:44 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:37.230 10:51:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:37.230 10:51:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:37.230 10:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:37.230 10:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:37.230 10:51:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:37.230 10:51:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:37.230 10:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:37.230 10:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:37.230 10:51:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:09:37.230 10:51:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:37.230 10:51:44 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:09:37.230 10:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:37.230 10:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:09:37.231 10:51:44 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@19 
-- # read -r var val 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:37.231 10:51:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:39.759 10:51:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:39.759 10:51:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:39.759 10:51:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:39.759 10:51:46 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val
00:09:39.759 10:51:46 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:09:39.759 10:51:46 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in
00:09:39.759 10:51:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=:
00:09:39.759 10:51:46 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val
[the empty `val=` / `case` / `IFS=:` / `read` iteration above repeats four more times while the remaining output fields are drained]
00:09:39.759 10:51:46 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]]
00:09:39.759 10:51:46 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]]
00:09:39.759 10:51:46 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:09:39.759
00:09:39.759 real    0m3.251s
00:09:39.759 user    0m0.011s
00:09:39.759 sys     0m0.001s
00:09:39.759 10:51:46 accel.accel_dif_generate -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:39.759 10:51:46 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x
00:09:39.759 ************************************
00:09:39.759 END TEST accel_dif_generate
00:09:39.759 ************************************
00:09:39.759 10:51:46 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:09:39.759 10:51:46 accel -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:09:39.759 10:51:46 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:39.759 10:51:46 accel -- common/autotest_common.sh@10 -- # set +x
00:09:39.759 ************************************
00:09:39.760 START TEST accel_dif_generate_copy
00:09:39.760 ************************************
00:09:39.760 10:51:46 accel.accel_dif_generate_copy -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w dif_generate_copy
00:09:39.760 10:51:46 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc
00:09:39.760 10:51:46 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module
00:09:39.760 10:51:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:09:39.760 10:51:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
00:09:39.760 10:51:46 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
00:09:39.760 10:51:46 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:09:39.760 10:51:46 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config
00:09:39.760 10:51:46 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=()
00:09:39.760 10:51:46 accel.accel_dif_generate_copy --
accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:09:39.760 10:51:46 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:39.760 10:51:46 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:39.760 10:51:46 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]]
00:09:39.760 10:51:46 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=,
00:09:39.760 10:51:46 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r .
00:09:39.760 [2024-07-25 10:51:46.727404] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:09:39.760 [2024-07-25 10:51:46.727506] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3499835 ]
00:09:39.760 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:39.760 EAL: Requested device 0000:3d:01.0 cannot be used
[the qat_pci_device_allocate() / "EAL: Requested device … cannot be used" pair repeats for every remaining QAT function, 0000:3d:01.1 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:02.7]
00:09:40.019 [2024-07-25 10:51:46.953784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:40.276 [2024-07-25 10:51:47.235571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:40.535 10:51:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:09:40.535 10:51:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in
00:09:40.535 10:51:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=:
00:09:40.535 10:51:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val
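The `accel/accel.sh@19`–`@21` xtrace lines above all come from one loop: accel.sh splits each `var:val` line of accel_perf's output on `:` with `IFS=: read -r var val` and dispatches on the variable name in a `case` statement (capturing, e.g., `accel_module` and `accel_opc`). A minimal standalone sketch of that parsing pattern, with stand-in input instead of the real accel_perf output (this is an illustration, not the actual accel.sh code):

```shell
#!/usr/bin/env bash
# Sketch of the IFS=: / read / case loop visible in the xtrace above.
# The here-doc stands in for accel_perf's "var:val" output lines.
accel_module=""
accel_opc=""
while IFS=: read -r var val; do
    case "$var" in
        module) accel_module=$val ;;   # e.g. "software"
        opc)    accel_opc=$val ;;      # e.g. "dif_generate_copy"
        *)      : ;;                   # fields we don't track produce val=
    esac
done <<'EOF'
module:software
opc:dif_generate_copy
EOF
echo "$accel_module $accel_opc"
```

Each field the loop ignores shows up in the log as an empty `val=` iteration, which is why the same four xtrace lines repeat so often.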
00:09:40.535 10:51:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:09:40.535 10:51:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1
00:09:40.535 10:51:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:09:40.535 10:51:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:09:40.535 10:51:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy
00:09:40.535 10:51:47 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy
00:09:40.535 10:51:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:09:40.535 10:51:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:09:40.535 10:51:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:09:40.535 10:51:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software
00:09:40.535 10:51:47 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software
00:09:40.535 10:51:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32
00:09:40.535 10:51:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32
00:09:40.535 10:51:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1
00:09:40.535 10:51:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds'
00:09:40.535 10:51:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No
00:09:40.535 10:51:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:09:40.535 10:51:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
[each `val=…` entry above is followed by the usual `case "$var" in` / `IFS=:` / `read -r var val` xtrace lines, omitted here for readability]
00:09:43.064 10:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
[the empty `val=` drain iteration repeats six times in total after the one-second run completes]
00:09:43.064 10:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:09:43.064 10:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]]
00:09:43.064 10:51:49 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:09:43.064
00:09:43.064 real    0m3.181s
00:09:43.064 user    0m0.013s
00:09:43.064 sys     0m0.000s
00:09:43.064 10:51:49 accel.accel_dif_generate_copy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:43.064 10:51:49 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x
00:09:43.064 ************************************
00:09:43.064 END TEST accel_dif_generate_copy
00:09:43.064 ************************************
00:09:43.064 10:51:49 accel -- accel/accel.sh@115 -- # [[ y == y ]]
00:09:43.064 10:51:49 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib
00:09:43.064 10:51:49 accel -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']'
00:09:43.064 10:51:49 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:43.064 10:51:49 accel -- common/autotest_common.sh@10 -- # set +x
00:09:43.064 ************************************
00:09:43.064 START TEST accel_comp
00:09:43.064 ************************************
00:09:43.064 10:51:49 accel.accel_comp -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib
00:09:43.064 10:51:49 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc
00:09:43.064 10:51:49 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module
00:09:43.064 10:51:49 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:09:43.064 10:51:49 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
00:09:43.064 10:51:49 accel.accel_comp --
accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib
00:09:43.064 10:51:49 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib
00:09:43.064 10:51:49 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config
00:09:43.064 10:51:49 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=()
00:09:43.064 10:51:49 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:09:43.064 10:51:49 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:09:43.064 10:51:49 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:09:43.064 10:51:49 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]]
00:09:43.064 10:51:49 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=,
00:09:43.064 10:51:49 accel.accel_comp -- accel/accel.sh@41 -- # jq -r .
00:09:43.064 [2024-07-25 10:51:49.987233] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:09:43.064 [2024-07-25 10:51:49.987344] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3500446 ]
00:09:43.064 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:43.064 EAL: Requested device 0000:3d:01.0 cannot be used
[the qat_pci_device_allocate() / "EAL: Requested device … cannot be used" pair repeats for every remaining QAT function, 0000:3d:01.1 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:02.7]
00:09:43.322 [2024-07-25 10:51:50.214792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:43.579 [2024-07-25 10:51:50.488231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:43.837 10:51:50 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:09:43.837 10:51:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in
00:09:43.837 10:51:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=:
00:09:43.837 10:51:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val
[the empty `val=` iteration above repeats twice more]
00:09:43.837 10:51:50 accel.accel_comp --
accel/accel.sh@20 -- # val=0x1
00:09:43.837 10:51:50 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:09:43.837 10:51:50 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:09:43.837 10:51:50 accel.accel_comp -- accel/accel.sh@20 -- # val=compress
00:09:43.837 10:51:50 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress
00:09:43.837 10:51:50 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes'
00:09:43.837 10:51:50 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:09:43.837 10:51:50 accel.accel_comp -- accel/accel.sh@20 -- # val=software
00:09:43.837 10:51:50 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software
00:09:43.837 10:51:50 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib
00:09:43.837 10:51:50 accel.accel_comp -- accel/accel.sh@20 -- # val=32
00:09:43.837 10:51:50 accel.accel_comp -- accel/accel.sh@20 -- # val=32
00:09:43.837 10:51:50 accel.accel_comp -- accel/accel.sh@20 -- # val=1
00:09:43.837 10:51:50 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds'
00:09:43.837 10:51:50 accel.accel_comp -- accel/accel.sh@20 -- # val=No
00:09:43.837 10:51:50 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:09:43.837 10:51:50 accel.accel_comp -- accel/accel.sh@20 -- # val=
[each `val=…` entry above is followed by the usual `case "$var" in` / `IFS=:` / `read -r var val` xtrace lines, omitted here for readability]
00:09:46.365 10:51:53 accel.accel_comp -- accel/accel.sh@20 -- # val=
[the empty `val=` drain iteration repeats six times in total after the one-second run completes]
00:09:46.365 10:51:53 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]]
00:09:46.365 10:51:53 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]]
00:09:46.365 10:51:53 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:09:46.365
00:09:46.365 real    0m3.264s
00:09:46.365 user    0m0.006s
00:09:46.365 sys     0m0.006s
00:09:46.365 10:51:53 accel.accel_comp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:46.365 10:51:53 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x
00:09:46.365 ************************************
00:09:46.365 END TEST accel_comp
00:09:46.365 ************************************
00:09:46.365 10:51:53 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y
00:09:46.365 10:51:53 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']'
00:09:46.365 10:51:53 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:46.365 10:51:53 accel -- common/autotest_common.sh@10 -- # set +x
00:09:46.365 ************************************
00:09:46.365 START TEST accel_decomp
00:09:46.365 ************************************
00:09:46.365 10:51:53 accel.accel_decomp -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y
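Each test in this log is bracketed by `START TEST` / `END TEST` banner blocks and a `real`/`user`/`sys` timing report, produced by the `run_test` helper from `common/autotest_common.sh`. Its exact code is not shown in this log; the following is a simplified, hypothetical stand-in (name `run_test_sketch`, banner text copied from the log) illustrating the pattern:

```shell
#!/usr/bin/env bash
# Hypothetical, simplified sketch of the run_test banner/timing pattern
# seen in this log; NOT the actual autotest_common.sh implementation.
run_test_sketch() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"          # emits the real/user/sys lines, as in the log
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}

# Example: wrap a trivial command the way the log wraps accel_test runs.
run_test_sketch demo_true true
```

The banners make it easy to grep a long autotest log for the boundaries of a single test, e.g. `grep -n 'START TEST accel_decomp'`.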
00:09:46.365 10:51:53 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:09:46.365 10:51:53 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:09:46.365 10:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:46.365 10:51:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:46.365 10:51:53 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:09:46.365 10:51:53 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:09:46.365 10:51:53 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:09:46.365 10:51:53 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:46.365 10:51:53 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:46.365 10:51:53 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:46.365 10:51:53 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:46.365 10:51:53 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:46.365 10:51:53 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:09:46.365 10:51:53 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:09:46.365 [2024-07-25 10:51:53.326056] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:46.365 [2024-07-25 10:51:53.326171] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3501098 ] 00:09:46.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:46.365 EAL: Requested device 0000:3d:01.0 cannot be used [message pair repeated for each remaining QAT device 0000:3d:01.1-0000:3d:02.7 and 0000:3f:01.0-0000:3f:02.7]
00:09:46.624 [2024-07-25 10:51:53.548136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.883 [2024-07-25 10:51:53.830542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:47.142 10:51:54
accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:47.142 10:51:54 accel.accel_decomp -- 
accel/accel.sh@20 -- # val=software 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 
00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:47.142 10:51:54 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:49.708 10:51:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:49.708 10:51:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:49.708 10:51:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:49.708 10:51:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:49.708 10:51:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:49.708 10:51:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:49.708 10:51:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:49.708 10:51:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:49.708 10:51:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:49.708 10:51:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:49.708 10:51:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:49.708 10:51:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:49.708 10:51:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:49.708 10:51:56 accel.accel_decomp -- accel/accel.sh@21 -- 
# case "$var" in 00:09:49.708 10:51:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:49.708 10:51:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:49.708 10:51:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:49.709 10:51:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:49.709 10:51:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:49.709 10:51:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:49.709 10:51:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:09:49.709 10:51:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:09:49.709 10:51:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:09:49.709 10:51:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:09:49.709 10:51:56 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:49.709 10:51:56 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:49.709 10:51:56 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:49.709 00:09:49.709 real 0m3.219s 00:09:49.709 user 0m0.011s 00:09:49.709 sys 0m0.001s 00:09:49.709 10:51:56 accel.accel_decomp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:49.709 10:51:56 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:09:49.709 ************************************ 00:09:49.709 END TEST accel_decomp 00:09:49.709 ************************************ 00:09:49.709 10:51:56 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:09:49.709 10:51:56 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:09:49.709 10:51:56 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:49.709 10:51:56 accel -- common/autotest_common.sh@10 -- # set +x 00:09:49.709 ************************************ 00:09:49.709 START TEST accel_decomp_full 
00:09:49.709 ************************************ 00:09:49.709 10:51:56 accel.accel_decomp_full -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:09:49.709 10:51:56 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:09:49.709 10:51:56 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:09:49.709 10:51:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:49.709 10:51:56 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:49.709 10:51:56 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:09:49.709 10:51:56 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:09:49.709 10:51:56 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:09:49.709 10:51:56 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:49.709 10:51:56 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:49.709 10:51:56 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:49.709 10:51:56 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:49.709 10:51:56 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:49.709 10:51:56 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:09:49.709 10:51:56 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:09:49.709 [2024-07-25 10:51:56.618312] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:49.709 [2024-07-25 10:51:56.618424] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3501681 ] 00:09:49.709 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:49.709 EAL: Requested device 0000:3d:01.0 cannot be used [message pair repeated for each remaining QAT device 0000:3d:01.1-0000:3d:02.7 and 0000:3f:01.0-0000:3f:02.7]
00:09:49.968 [2024-07-25 10:51:56.843791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.226 [2024-07-25 10:51:57.128368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.484 10:51:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:50.484 10:51:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:50.484 10:51:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:50.484 10:51:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:50.484 10:51:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:50.484 10:51:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:50.484 10:51:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:50.484 10:51:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:50.484 10:51:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:50.484 10:51:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:50.484 10:51:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:50.484 10:51:57 accel.accel_decomp_full --
accel/accel.sh@19 -- # read -r var val 00:09:50.484 10:51:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:09:50.484 10:51:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:50.484 10:51:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:50.484 10:51:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:50.484 10:51:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:50.484 10:51:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:50.485 10:51:57 
accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:50.485 10:51:57 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:50.485 10:51:57 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:53.013 10:51:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:53.013 10:51:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:53.013 10:51:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:53.013 10:51:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:53.013 10:51:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:53.013 10:51:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:53.013 10:51:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:53.013 10:51:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:53.013 10:51:59 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:53.013 10:51:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:53.013 10:51:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:53.013 10:51:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:53.013 10:51:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:53.013 10:51:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:53.013 10:51:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:53.013 10:51:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:53.013 10:51:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:53.013 10:51:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:53.013 10:51:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:53.013 10:51:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:53.013 10:51:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:09:53.013 10:51:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:09:53.013 10:51:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:09:53.013 10:51:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:09:53.013 10:51:59 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:53.013 10:51:59 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:53.013 10:51:59 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:53.013 00:09:53.013 real 0m3.196s 00:09:53.013 user 0m0.009s 00:09:53.013 sys 0m0.003s 00:09:53.013 10:51:59 accel.accel_decomp_full -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:53.013 10:51:59 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:09:53.013 ************************************ 00:09:53.013 END TEST accel_decomp_full 00:09:53.013 
************************************ 00:09:53.013 10:51:59 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:09:53.013 10:51:59 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:09:53.013 10:51:59 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:53.013 10:51:59 accel -- common/autotest_common.sh@10 -- # set +x 00:09:53.013 ************************************ 00:09:53.013 START TEST accel_decomp_mcore 00:09:53.013 ************************************ 00:09:53.013 10:51:59 accel.accel_decomp_mcore -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:09:53.013 10:51:59 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:09:53.013 10:51:59 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:09:53.013 10:51:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:53.013 10:51:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:53.013 10:51:59 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:09:53.013 10:51:59 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:09:53.013 10:51:59 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:09:53.013 10:51:59 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:53.013 10:51:59 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:53.013 10:51:59 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:53.013 10:51:59 accel.accel_decomp_mcore -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:53.013 10:51:59 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:53.013 10:51:59 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:09:53.013 10:51:59 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:09:53.013 [2024-07-25 10:51:59.889802] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:53.014 [2024-07-25 10:51:59.889903] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3502229 ] 00:09:53.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:53.014 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:53.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:53.014 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:53.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:53.014 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:53.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:53.014 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:53.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:53.014 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:53.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:53.014 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:53.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:53.014 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:53.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:53.014 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:53.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:53.014 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:53.014 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:53.014 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:53.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:53.014 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:53.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:53.014 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:53.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:53.014 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:53.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:53.014 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:53.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:53.014 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:53.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:53.014 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:53.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:53.014 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:53.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:53.014 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:53.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:53.014 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:53.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:53.014 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:53.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:53.014 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:53.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:53.014 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:53.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:53.014 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:53.014 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:53.014 EAL: Requested device 0000:3f:01.7 cannot be used 00:09:53.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:53.014 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:53.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:53.014 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:53.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:53.014 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:53.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:53.014 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:53.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:53.014 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:53.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:53.014 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:53.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:53.014 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:53.014 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:53.014 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:53.014 [2024-07-25 10:52:00.115976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:53.581 [2024-07-25 10:52:00.409052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.581 [2024-07-25 10:52:00.409126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:53.581 [2024-07-25 10:52:00.409193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.581 [2024-07-25 10:52:00.409200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:53.840 10:52:00 
accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:09:53.840 10:52:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:56.375 10:52:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:56.375 10:52:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:56.375 10:52:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:56.375 10:52:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:56.375 10:52:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:56.375 10:52:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:56.375 10:52:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:56.375 10:52:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:56.375 10:52:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:56.375 10:52:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:56.375 10:52:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:56.375 10:52:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:56.375 10:52:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:56.375 10:52:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:56.375 10:52:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:56.375 10:52:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:56.375 10:52:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:56.375 10:52:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:56.375 10:52:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:56.375 10:52:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:56.375 10:52:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:56.375 10:52:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:56.375 10:52:03 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:56.375 10:52:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:56.375 10:52:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:56.375 10:52:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:56.375 10:52:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:56.375 10:52:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:56.375 10:52:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:56.375 10:52:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:56.375 10:52:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:56.375 10:52:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:56.375 10:52:03 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:09:56.375 10:52:03 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:56.375 10:52:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:56.375 10:52:03 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:56.375 10:52:03 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:56.375 10:52:03 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:56.375 10:52:03 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:56.375 00:09:56.375 real 0m3.307s 00:09:56.375 user 0m0.026s 00:09:56.375 sys 0m0.007s 00:09:56.375 10:52:03 accel.accel_decomp_mcore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:56.375 10:52:03 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:09:56.375 ************************************ 00:09:56.375 END TEST accel_decomp_mcore 00:09:56.375 ************************************ 00:09:56.375 10:52:03 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:56.375 10:52:03 accel -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:09:56.375 10:52:03 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:56.375 10:52:03 accel -- common/autotest_common.sh@10 -- # set +x 00:09:56.375 ************************************ 00:09:56.375 START TEST accel_decomp_full_mcore 00:09:56.375 ************************************ 00:09:56.375 10:52:03 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:56.375 10:52:03 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:09:56.375 10:52:03 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:09:56.375 10:52:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:56.375 10:52:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:56.375 10:52:03 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:56.375 10:52:03 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:56.375 10:52:03 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:09:56.375 10:52:03 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:56.375 10:52:03 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:56.375 10:52:03 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:56.375 10:52:03 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:56.375 10:52:03 
accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:56.375 10:52:03 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:09:56.375 10:52:03 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:09:56.375 [2024-07-25 10:52:03.282628] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:56.375 [2024-07-25 10:52:03.282735] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3502782 ] 00:09:56.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:56.375 EAL: Requested device 0000:3d:01.0 cannot be used 00:09:56.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:56.375 EAL: Requested device 0000:3d:01.1 cannot be used 00:09:56.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:56.375 EAL: Requested device 0000:3d:01.2 cannot be used 00:09:56.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:56.375 EAL: Requested device 0000:3d:01.3 cannot be used 00:09:56.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:56.375 EAL: Requested device 0000:3d:01.4 cannot be used 00:09:56.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:56.375 EAL: Requested device 0000:3d:01.5 cannot be used 00:09:56.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:56.375 EAL: Requested device 0000:3d:01.6 cannot be used 00:09:56.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:56.375 EAL: Requested device 0000:3d:01.7 cannot be used 00:09:56.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:56.375 EAL: Requested device 0000:3d:02.0 cannot be used 00:09:56.375 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:09:56.375 EAL: Requested device 0000:3d:02.1 cannot be used 00:09:56.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:56.375 EAL: Requested device 0000:3d:02.2 cannot be used 00:09:56.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:56.375 EAL: Requested device 0000:3d:02.3 cannot be used 00:09:56.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:56.375 EAL: Requested device 0000:3d:02.4 cannot be used 00:09:56.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:56.375 EAL: Requested device 0000:3d:02.5 cannot be used 00:09:56.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:56.375 EAL: Requested device 0000:3d:02.6 cannot be used 00:09:56.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:56.375 EAL: Requested device 0000:3d:02.7 cannot be used 00:09:56.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:56.375 EAL: Requested device 0000:3f:01.0 cannot be used 00:09:56.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:56.375 EAL: Requested device 0000:3f:01.1 cannot be used 00:09:56.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:56.375 EAL: Requested device 0000:3f:01.2 cannot be used 00:09:56.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:56.375 EAL: Requested device 0000:3f:01.3 cannot be used 00:09:56.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:56.375 EAL: Requested device 0000:3f:01.4 cannot be used 00:09:56.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:56.375 EAL: Requested device 0000:3f:01.5 cannot be used 00:09:56.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:56.375 EAL: Requested device 0000:3f:01.6 cannot be used 00:09:56.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:56.375 
EAL: Requested device 0000:3f:01.7 cannot be used 00:09:56.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:56.375 EAL: Requested device 0000:3f:02.0 cannot be used 00:09:56.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:56.375 EAL: Requested device 0000:3f:02.1 cannot be used 00:09:56.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:56.375 EAL: Requested device 0000:3f:02.2 cannot be used 00:09:56.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:56.375 EAL: Requested device 0000:3f:02.3 cannot be used 00:09:56.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:56.375 EAL: Requested device 0000:3f:02.4 cannot be used 00:09:56.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:56.375 EAL: Requested device 0000:3f:02.5 cannot be used 00:09:56.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:56.375 EAL: Requested device 0000:3f:02.6 cannot be used 00:09:56.375 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:09:56.375 EAL: Requested device 0000:3f:02.7 cannot be used 00:09:56.633 [2024-07-25 10:52:03.508322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:56.891 [2024-07-25 10:52:03.802595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.891 [2024-07-25 10:52:03.802669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:56.891 [2024-07-25 10:52:03.802778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.891 [2024-07-25 10:52:03.802786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:57.150 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:57.150 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:57.150 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:57.150 10:52:04 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:57.150 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:57.150 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:57.150 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:57.150 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:57.150 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:57.150 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:57.150 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:57.150 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:57.150 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:09:57.150 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:57.150 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:57.150 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:57.150 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:57.150 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:57.150 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:57.150 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:57.150 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:57.150 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:57.150 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:57.150 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:57.150 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:09:57.150 10:52:04 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:09:57.150 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:57.150 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:57.150 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:57.150 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:09:57.150 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:57.150 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:57.150 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:57.150 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:57.150 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:57.150 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:57.150 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:57.151 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:09:57.151 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:57.151 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:09:57.151 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:57.151 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:57.151 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:09:57.151 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:57.151 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:57.151 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:57.151 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 
00:09:57.151 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:57.151 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:57.151 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:57.151 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:09:57.151 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:57.151 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:57.151 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:57.151 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:09:57.151 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:57.151 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:57.151 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:57.151 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:09:57.151 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:57.151 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:57.151 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:57.151 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:09:57.151 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:57.151 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:57.151 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:57.151 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:57.151 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:57.151 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:57.151 10:52:04 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:57.151 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:57.151 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:57.151 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:57.151 10:52:04 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:59.685 10:52:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:59.685 10:52:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:59.685 10:52:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:59.685 10:52:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:59.685 10:52:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:59.685 10:52:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:59.685 10:52:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:59.685 10:52:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:59.686 10:52:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:59.686 10:52:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:59.686 10:52:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:59.686 10:52:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:59.686 10:52:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:59.686 10:52:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:59.686 10:52:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:59.686 10:52:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:59.686 10:52:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:59.686 10:52:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 
-- # case "$var" in 00:09:59.686 10:52:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:59.686 10:52:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:59.686 10:52:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:59.686 10:52:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:59.686 10:52:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:59.686 10:52:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:59.686 10:52:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:59.686 10:52:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:59.686 10:52:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:59.686 10:52:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:59.686 10:52:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:59.686 10:52:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:59.686 10:52:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:59.686 10:52:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:59.686 10:52:06 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:09:59.686 10:52:06 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:09:59.686 10:52:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:09:59.686 10:52:06 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:09:59.686 10:52:06 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:59.686 10:52:06 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:59.686 10:52:06 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:59.686 00:09:59.686 real 0m3.327s 00:09:59.686 user 0m9.517s 
00:09:59.686 sys 0m0.340s 00:09:59.686 10:52:06 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:59.686 10:52:06 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:09:59.686 ************************************ 00:09:59.686 END TEST accel_decomp_full_mcore 00:09:59.686 ************************************ 00:09:59.686 10:52:06 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:09:59.686 10:52:06 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:09:59.686 10:52:06 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:59.686 10:52:06 accel -- common/autotest_common.sh@10 -- # set +x 00:09:59.686 ************************************ 00:09:59.686 START TEST accel_decomp_mthread 00:09:59.686 ************************************ 00:09:59.686 10:52:06 accel.accel_decomp_mthread -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:09:59.686 10:52:06 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:09:59.686 10:52:06 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:09:59.686 10:52:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:09:59.686 10:52:06 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:09:59.686 10:52:06 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:09:59.686 10:52:06 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:09:59.686 10:52:06 accel.accel_decomp_mthread -- accel/accel.sh@12 
-- # build_accel_config 00:09:59.686 10:52:06 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:59.686 10:52:06 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:59.686 10:52:06 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:59.686 10:52:06 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:59.686 10:52:06 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:59.686 10:52:06 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:09:59.686 10:52:06 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:09:59.686 [2024-07-25 10:52:06.688364] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:59.686 [2024-07-25 10:52:06.688467] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3503337 ]
00:09:59.945 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:09:59.945 EAL: Requested device 0000:3d:01.0 cannot be used
[this message pair repeats for each of the 32 QAT virtual functions 0000:3d:01.0-0000:3d:02.7 and 0000:3f:01.0-0000:3f:02.7]
00:09:59.946 [2024-07-25 10:52:06.911471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.205 [2024-07-25 10:52:07.196955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:00.464 10:52:07
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:10:00.464 10:52:07 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:10:00.464 10:52:07 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:00.464 10:52:07 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:00.464 10:52:07 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:02.999 10:52:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:02.999 10:52:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:02.999 10:52:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:02.999 10:52:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:02.999 10:52:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:02.999 10:52:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:02.999 10:52:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:02.999 10:52:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:02.999 10:52:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:02.999 10:52:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:02.999 10:52:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:02.999 10:52:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:02.999 10:52:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:02.999 10:52:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:02.999 10:52:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:02.999 10:52:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:02.999 10:52:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:02.999 10:52:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:02.999 10:52:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:02.999 10:52:09 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:10:02.999 10:52:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:02.999 10:52:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:02.999 10:52:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:02.999 10:52:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:02.999 10:52:09 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:02.999 10:52:09 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:02.999 10:52:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:02.999 10:52:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:02.999 10:52:09 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:02.999 10:52:09 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:02.999 10:52:09 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:02.999 00:10:02.999 real 0m3.195s 00:10:02.999 user 0m2.897s 00:10:02.999 sys 0m0.292s 00:10:02.999 10:52:09 accel.accel_decomp_mthread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:02.999 10:52:09 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:10:02.999 ************************************ 00:10:02.999 END TEST accel_decomp_mthread 00:10:03.000 ************************************ 00:10:03.000 10:52:09 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:03.000 10:52:09 accel -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:10:03.000 10:52:09 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:03.000 10:52:09 accel -- common/autotest_common.sh@10 -- # set +x 00:10:03.000 ************************************ 00:10:03.000 START TEST accel_decomp_full_mthread 
00:10:03.000 ************************************ 00:10:03.000 10:52:09 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:03.000 10:52:09 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:10:03.000 10:52:09 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:10:03.000 10:52:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:03.000 10:52:09 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:03.000 10:52:09 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:03.000 10:52:09 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:03.000 10:52:09 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:10:03.000 10:52:09 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:03.000 10:52:09 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:03.000 10:52:09 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:03.000 10:52:09 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:03.000 10:52:09 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:03.000 10:52:09 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:10:03.000 10:52:09 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:10:03.000 [2024-07-25 10:52:09.965061] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:03.000 [2024-07-25 10:52:09.965168] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3503885 ]
00:10:03.000 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:03.000 EAL: Requested device 0000:3d:01.0 cannot be used
[this message pair repeats for each of the 32 QAT virtual functions 0000:3d:01.0-0000:3d:02.7 and 0000:3f:01.0-0000:3f:02.7]
00:10:03.259 [2024-07-25 10:52:10.190107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.518 [2024-07-25 10:52:10.465170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.777 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:03.777 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:03.777 10:52:10
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:03.777 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:03.777 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:10:03.777 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:03.777 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:03.777 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:03.777 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:03.777 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:03.777 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:03.777 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:03.777 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:03.777 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:03.777 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # 
IFS=: 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 
00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:03.778 10:52:10 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:06.346 10:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- 
# val= 00:10:06.346 10:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:06.346 10:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:06.346 10:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:06.346 10:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:06.346 10:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:06.346 10:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:06.346 10:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:06.346 10:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:06.346 10:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:06.346 10:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:06.346 10:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:06.346 10:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:06.346 10:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:06.346 10:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:06.346 10:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:06.346 10:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:06.346 10:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:06.346 10:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:06.346 10:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:06.346 10:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:06.346 10:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:06.346 10:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 
00:10:06.346 10:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:06.346 10:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:10:06.346 10:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:10:06.346 10:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:10:06.346 10:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:10:06.346 10:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:10:06.346 10:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:10:06.346 10:52:13 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:10:06.346
00:10:06.346 real 0m3.250s
00:10:06.346 user 0m2.952s
00:10:06.346 sys 0m0.299s
00:10:06.346 10:52:13 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:06.346 10:52:13 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x
00:10:06.346 ************************************
00:10:06.346 END TEST accel_decomp_full_mthread
00:10:06.346 ************************************
00:10:06.346 10:52:13 accel -- accel/accel.sh@124 -- # [[ y == y ]]
00:10:06.346 10:52:13 accel -- accel/accel.sh@125 -- # COMPRESSDEV=1
00:10:06.346 10:52:13 accel -- accel/accel.sh@126 -- # get_expected_opcs
00:10:06.346 10:52:13 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:10:06.346 10:52:13 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3504435
00:10:06.346 10:52:13 accel -- accel/accel.sh@63 -- # waitforlisten 3504435
00:10:06.346 10:52:13 accel -- common/autotest_common.sh@831 -- # '[' -z 3504435 ']'
00:10:06.346 10:52:13 accel -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:06.346 10:52:13 accel -- common/autotest_common.sh@836 -- # local max_retries=100
00:10:06.346 10:52:13 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63
00:10:06.346 10:52:13 accel -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:06.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:06.346 10:52:13 accel -- accel/accel.sh@61 -- # build_accel_config
00:10:06.346 10:52:13 accel -- common/autotest_common.sh@840 -- # xtrace_disable
00:10:06.346 10:52:13 accel -- common/autotest_common.sh@10 -- # set +x
00:10:06.346 10:52:13 accel -- accel/accel.sh@31 -- # accel_json_cfg=()
00:10:06.346 10:52:13 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:10:06.346 10:52:13 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:06.346 10:52:13 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:06.346 10:52:13 accel -- accel/accel.sh@36 -- # [[ -n 1 ]]
00:10:06.346 10:52:13 accel -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}')
00:10:06.346 10:52:13 accel -- accel/accel.sh@40 -- # local IFS=,
00:10:06.346 10:52:13 accel -- accel/accel.sh@41 -- # jq -r .
00:10:06.346 [2024-07-25 10:52:13.320809] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:10:06.346 [2024-07-25 10:52:13.320929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3504435 ]
00:10:06.346 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:06.346 EAL: Requested device 0000:3d:01.0 cannot be used
00:10:06.346 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:06.346 EAL: Requested device 0000:3d:01.1 cannot be used
00:10:06.346 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:06.346 EAL: Requested device 0000:3d:01.2 cannot be used
00:10:06.346 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:06.346 EAL: Requested device 0000:3d:01.3 cannot be used
00:10:06.346 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:06.346 EAL: Requested device 0000:3d:01.4 cannot be used
00:10:06.346 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:06.346 EAL: Requested device 0000:3d:01.5 cannot be used
00:10:06.346 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:06.346 EAL: Requested device 0000:3d:01.6 cannot be used
00:10:06.346 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:06.346 EAL: Requested device 0000:3d:01.7 cannot be used
00:10:06.346 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:06.346 EAL: Requested device 0000:3d:02.0 cannot be used
00:10:06.346 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:06.346 EAL: Requested device 0000:3d:02.1 cannot be used
00:10:06.346 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:06.346 EAL: Requested device 0000:3d:02.2 cannot be used
00:10:06.346 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:06.346 EAL: Requested device 0000:3d:02.3 cannot be used
00:10:06.346 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:06.346 EAL: Requested device 0000:3d:02.4 cannot be used
00:10:06.346 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:06.346 EAL: Requested device 0000:3d:02.5 cannot be used
00:10:06.346 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:06.346 EAL: Requested device 0000:3d:02.6 cannot be used
00:10:06.346 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:06.346 EAL: Requested device 0000:3d:02.7 cannot be used
00:10:06.346 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:06.346 EAL: Requested device 0000:3f:01.0 cannot be used
00:10:06.346 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:06.346 EAL: Requested device 0000:3f:01.1 cannot be used
00:10:06.346 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:06.346 EAL: Requested device 0000:3f:01.2 cannot be used
00:10:06.346 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:06.346 EAL: Requested device 0000:3f:01.3 cannot be used
00:10:06.346 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:06.346 EAL: Requested device 0000:3f:01.4 cannot be used
00:10:06.346 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:06.346 EAL: Requested device 0000:3f:01.5 cannot be used
00:10:06.346 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:06.346 EAL: Requested device 0000:3f:01.6 cannot be used
00:10:06.346 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:06.346 EAL: Requested device 0000:3f:01.7 cannot be used
00:10:06.346 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:06.346 EAL: Requested device 0000:3f:02.0 cannot be used
00:10:06.346 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:06.346 EAL: Requested device 0000:3f:02.1 cannot be used
00:10:06.346 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:06.346 EAL: Requested device 0000:3f:02.2 cannot be used
00:10:06.346 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:06.346 EAL: Requested device 0000:3f:02.3 cannot be used
00:10:06.346 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:06.346 EAL: Requested device 0000:3f:02.4 cannot be used
00:10:06.346 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:06.346 EAL: Requested device 0000:3f:02.5 cannot be used
00:10:06.346 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:06.346 EAL: Requested device 0000:3f:02.6 cannot be used
00:10:06.346 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:06.346 EAL: Requested device 0000:3f:02.7 cannot be used
00:10:06.610 [2024-07-25 10:52:13.545137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:06.867 [2024-07-25 10:52:13.821326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:10:08.241 [2024-07-25 10:52:15.190471] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD
00:10:09.175 10:52:16 accel -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:10:09.175 10:52:16 accel -- common/autotest_common.sh@864 -- # return 0
00:10:09.175 10:52:16 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]]
00:10:09.175 10:52:16 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]]
00:10:09.175 10:52:16 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]]
00:10:09.175 10:52:16 accel -- accel/accel.sh@68 -- # [[ -n 1 ]]
00:10:09.175 10:52:16 accel -- accel/accel.sh@68 -- # check_save_config compressdev_scan_accel_module
00:10:09.176 10:52:16 accel -- accel/accel.sh@56 -- # rpc_cmd save_config
00:10:09.176 10:52:16 accel -- accel/accel.sh@56 -- # jq -r '.subsystems[] | select(.subsystem=="accel").config[]'
00:10:09.176 10:52:16 accel -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:09.176 10:52:16 accel -- common/autotest_common.sh@10 -- # set +x
00:10:09.176 10:52:16 accel -- accel/accel.sh@56 -- # grep compressdev_scan_accel_module
00:10:09.176 10:52:16 accel -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:09.176 "method": "compressdev_scan_accel_module",
00:10:09.176 10:52:16 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]"))
00:10:09.176 10:52:16 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments
00:10:09.176 10:52:16 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
00:10:09.176 10:52:16 accel -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:09.176 10:52:16 accel -- common/autotest_common.sh@10 -- # set +x
00:10:09.176 10:52:16 accel -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:09.176 10:52:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:10:09.176 10:52:16 accel -- accel/accel.sh@72 -- # IFS==
00:10:09.176 10:52:16 accel -- accel/accel.sh@72 -- # read -r opc module
00:10:09.434 10:52:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:10:09.434 10:52:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:10:09.434 10:52:16 accel -- accel/accel.sh@72 -- # IFS==
00:10:09.434 10:52:16 accel -- accel/accel.sh@72 -- # read -r opc module
00:10:09.434 10:52:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:10:09.434 10:52:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:10:09.434 10:52:16 accel -- accel/accel.sh@72 -- # IFS==
00:10:09.434 10:52:16 accel -- accel/accel.sh@72 -- # read -r opc module
00:10:09.434 10:52:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:10:09.434 10:52:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:10:09.434 10:52:16 accel -- accel/accel.sh@72 -- # IFS==
00:10:09.434 10:52:16 accel -- accel/accel.sh@72 -- # read -r opc module
00:10:09.434 10:52:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:10:09.434 10:52:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:10:09.434 10:52:16 accel -- accel/accel.sh@72 -- # IFS==
00:10:09.434 10:52:16 accel -- accel/accel.sh@72 -- # read -r opc module
00:10:09.434 10:52:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:10:09.434 10:52:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:10:09.434 10:52:16 accel -- accel/accel.sh@72 -- # IFS==
00:10:09.434 10:52:16 accel -- accel/accel.sh@72 -- # read -r opc module
00:10:09.434 10:52:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:10:09.434 10:52:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:10:09.434 10:52:16 accel -- accel/accel.sh@72 -- # IFS==
00:10:09.434 10:52:16 accel -- accel/accel.sh@72 -- # read -r opc module
00:10:09.434 10:52:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dpdk_compressdev
00:10:09.434 10:52:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:10:09.434 10:52:16 accel -- accel/accel.sh@72 -- # IFS==
00:10:09.434 10:52:16 accel -- accel/accel.sh@72 -- # read -r opc module
00:10:09.434 10:52:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dpdk_compressdev
00:10:09.434 10:52:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:10:09.434 10:52:16 accel -- accel/accel.sh@72 -- # IFS==
00:10:09.434 10:52:16 accel -- accel/accel.sh@72 -- # read -r opc module
00:10:09.434 10:52:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:10:09.434 10:52:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:10:09.434 10:52:16 accel -- accel/accel.sh@72 -- # IFS==
00:10:09.434 10:52:16 accel -- accel/accel.sh@72 -- # read -r opc module
00:10:09.434 10:52:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:10:09.434 10:52:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:10:09.434 10:52:16 accel -- accel/accel.sh@72 -- # IFS==
00:10:09.434 10:52:16 accel -- accel/accel.sh@72 -- # read -r opc module
00:10:09.434 10:52:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:10:09.434 10:52:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:10:09.434 10:52:16 accel -- accel/accel.sh@72 -- # IFS==
00:10:09.434 10:52:16 accel -- accel/accel.sh@72 -- # read -r opc module
00:10:09.434 10:52:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:10:09.434 10:52:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:10:09.434 10:52:16 accel -- accel/accel.sh@72 -- # IFS==
00:10:09.434 10:52:16 accel -- accel/accel.sh@72 -- # read -r opc module
00:10:09.434 10:52:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:10:09.434 10:52:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}"
00:10:09.434 10:52:16 accel -- accel/accel.sh@72 -- # IFS==
00:10:09.435 10:52:16 accel -- accel/accel.sh@72 -- # read -r opc module
00:10:09.435 10:52:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software
00:10:09.435 10:52:16 accel -- accel/accel.sh@75 -- # killprocess 3504435
00:10:09.435 10:52:16 accel -- common/autotest_common.sh@950 -- # '[' -z 3504435 ']'
00:10:09.435 10:52:16 accel -- common/autotest_common.sh@954 -- # kill -0 3504435
00:10:09.435 10:52:16 accel -- common/autotest_common.sh@955 -- # uname
00:10:09.435 10:52:16 accel -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:09.435 10:52:16 accel -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3504435
00:10:09.435 10:52:16 accel -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:10:09.435 10:52:16 accel -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:10:09.435 10:52:16 accel -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3504435'
00:10:09.435 killing process with pid 3504435
00:10:09.435 10:52:16 accel -- common/autotest_common.sh@969 -- # kill 3504435
00:10:09.435 10:52:16 accel -- common/autotest_common.sh@974 -- # wait 3504435
00:10:12.716 10:52:19 accel -- accel/accel.sh@76 -- # trap - ERR
00:10:12.716 10:52:19 accel -- accel/accel.sh@127 -- # run_test accel_cdev_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib
00:10:12.716 10:52:19 accel -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']'
00:10:12.716 10:52:19 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:12.716 10:52:19 accel -- common/autotest_common.sh@10 -- # set +x
00:10:12.716 ************************************
00:10:12.716 START TEST accel_cdev_comp
00:10:12.716 ************************************
00:10:12.716 10:52:19 accel.accel_cdev_comp -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib
00:10:12.716 10:52:19 accel.accel_cdev_comp -- accel/accel.sh@16 -- # local accel_opc
00:10:12.716 10:52:19 accel.accel_cdev_comp -- accel/accel.sh@17 -- # local accel_module
00:10:12.716 10:52:19 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:12.716 10:52:19 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:12.716 10:52:19 accel.accel_cdev_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib
00:10:12.716 10:52:19 accel.accel_cdev_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib
00:10:12.716 10:52:19 accel.accel_cdev_comp -- accel/accel.sh@12 -- # build_accel_config
00:10:12.716 10:52:19 accel.accel_cdev_comp -- accel/accel.sh@31 -- # accel_json_cfg=()
00:10:12.716 10:52:19 accel.accel_cdev_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:10:12.716 10:52:19 accel.accel_cdev_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:12.716 10:52:19 accel.accel_cdev_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:12.716 10:52:19 accel.accel_cdev_comp -- accel/accel.sh@36 -- # [[ -n 1 ]]
00:10:12.716 10:52:19 accel.accel_cdev_comp -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}')
00:10:12.716 10:52:19 accel.accel_cdev_comp -- accel/accel.sh@40 -- # local IFS=,
00:10:12.716 10:52:19 accel.accel_cdev_comp -- accel/accel.sh@41 -- # jq -r .
00:10:12.716 [2024-07-25 10:52:19.205195] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:10:12.716 [2024-07-25 10:52:19.205311] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3505499 ]
00:10:12.716 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:12.716 EAL: Requested device 0000:3d:01.0 cannot be used
00:10:12.716 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:12.716 EAL: Requested device 0000:3d:01.1 cannot be used
00:10:12.716 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:12.716 EAL: Requested device 0000:3d:01.2 cannot be used
00:10:12.716 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:12.716 EAL: Requested device 0000:3d:01.3 cannot be used
00:10:12.716 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:12.716 EAL: Requested device 0000:3d:01.4 cannot be used
00:10:12.716 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:12.716 EAL: Requested device 0000:3d:01.5 cannot be used
00:10:12.716 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:12.716 EAL: Requested device 0000:3d:01.6 cannot be used
00:10:12.716 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:12.716 EAL: Requested device 0000:3d:01.7 cannot be used
00:10:12.716 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:12.716 EAL: Requested device 0000:3d:02.0 cannot be used
00:10:12.716 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:12.716 EAL: Requested device 0000:3d:02.1 cannot be used
00:10:12.716 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:12.716 EAL: Requested device 0000:3d:02.2 cannot be used
00:10:12.716 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:12.716 EAL: Requested device 0000:3d:02.3 cannot be used
00:10:12.716 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:12.716 EAL: Requested device 0000:3d:02.4 cannot be used
00:10:12.716 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:12.716 EAL: Requested device 0000:3d:02.5 cannot be used
00:10:12.716 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:12.716 EAL: Requested device 0000:3d:02.6 cannot be used
00:10:12.716 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:12.716 EAL: Requested device 0000:3d:02.7 cannot be used
00:10:12.716 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:12.716 EAL: Requested device 0000:3f:01.0 cannot be used
00:10:12.716 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:12.716 EAL: Requested device 0000:3f:01.1 cannot be used
00:10:12.716 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:12.716 EAL: Requested device 0000:3f:01.2 cannot be used
00:10:12.716 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:12.716 EAL: Requested device 0000:3f:01.3 cannot be used
00:10:12.716 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:12.716 EAL: Requested device 0000:3f:01.4 cannot be used
00:10:12.716 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:12.716 EAL: Requested device 0000:3f:01.5 cannot be used
00:10:12.716 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:12.716 EAL: Requested device 0000:3f:01.6 cannot be used
00:10:12.716 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:12.716 EAL: Requested device 0000:3f:01.7 cannot be used
00:10:12.716 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:12.716 EAL: Requested device 0000:3f:02.0 cannot be used
00:10:12.716 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:12.716 EAL: Requested device 0000:3f:02.1 cannot be used
00:10:12.716 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:12.716 EAL: Requested device 0000:3f:02.2 cannot be used
00:10:12.716 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:12.716 EAL: Requested device 0000:3f:02.3 cannot be used
00:10:12.716 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:12.716 EAL: Requested device 0000:3f:02.4 cannot be used
00:10:12.716 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:12.716 EAL: Requested device 0000:3f:02.5 cannot be used
00:10:12.716 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:12.716 EAL: Requested device 0000:3f:02.6 cannot be used
00:10:12.716 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:12.716 EAL: Requested device 0000:3f:02.7 cannot be used
00:10:12.716 [2024-07-25 10:52:19.427203] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:12.716 [2024-07-25 10:52:19.720128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:10:14.089 [2024-07-25 10:52:21.142866] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD
00:10:14.089 [2024-07-25 10:52:21.145959] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016140 PMD being used: compress_qat
00:10:14.089 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=
00:10:14.089 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:14.089 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:14.089 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:14.089 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=
00:10:14.089 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:14.089 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:14.089 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:14.089 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=
00:10:14.089 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:14.089 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:14.089 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:14.089 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=0x1
00:10:14.089 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:14.089 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:14.089 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:14.089 [2024-07-25 10:52:21.154313] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016220 PMD being used: compress_qat
00:10:14.089 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=
00:10:14.089 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:14.089 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:14.089 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:14.089 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=
00:10:14.089 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=compress
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@23 -- # accel_opc=compress
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val='4096 bytes'
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=dpdk_compressdev
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=32
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=32
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=1
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val='1 seconds'
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=No
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:14.090 10:52:21 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:15.992 10:52:22 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=
00:10:15.992 10:52:22 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:15.992 10:52:22 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:15.992 10:52:22 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:15.992 10:52:22 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=
00:10:15.992 10:52:22 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:15.992 10:52:22 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:15.992 10:52:22 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:15.992 10:52:22 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=
00:10:15.992 10:52:22 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:15.992 10:52:22 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:15.992 10:52:22 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:15.992 10:52:22 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=
00:10:15.992 10:52:22 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:15.992 10:52:22 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:15.992 10:52:22 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:15.992 10:52:22 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=
00:10:15.992 10:52:22 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:15.992 10:52:22 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:15.992 10:52:22 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:15.992 10:52:22 accel.accel_cdev_comp -- accel/accel.sh@20 -- # val=
00:10:15.992 10:52:22 accel.accel_cdev_comp -- accel/accel.sh@21 -- # case "$var" in
00:10:15.992 10:52:22 accel.accel_cdev_comp -- accel/accel.sh@19 -- # IFS=:
00:10:15.993 10:52:22 accel.accel_cdev_comp -- accel/accel.sh@19 -- # read -r var val
00:10:15.993 10:52:22 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]]
00:10:15.993 10:52:22 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ -n compress ]]
00:10:15.993 10:52:22 accel.accel_cdev_comp -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]]
00:10:15.993
00:10:15.993 real 0m3.767s
00:10:15.993 user 0m3.082s
00:10:15.993 sys 0m0.679s
00:10:15.993 10:52:22 accel.accel_cdev_comp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:15.993 10:52:22 accel.accel_cdev_comp -- common/autotest_common.sh@10 -- # set +x
00:10:15.993 ************************************
00:10:15.993 END TEST accel_cdev_comp
00:10:15.993 ************************************
00:10:15.993 10:52:22 accel -- accel/accel.sh@128 -- # run_test accel_cdev_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y
00:10:15.993 10:52:22 accel -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']'
00:10:15.993 10:52:22 accel -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:15.993 10:52:22 accel -- common/autotest_common.sh@10 -- # set +x
00:10:15.993 ************************************
00:10:15.993 START TEST accel_cdev_decomp
00:10:15.993 ************************************
00:10:15.993 10:52:22 accel.accel_cdev_decomp -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y
00:10:15.993 10:52:22 accel.accel_cdev_decomp -- accel/accel.sh@16 -- # local accel_opc
00:10:15.993 10:52:22 accel.accel_cdev_decomp -- accel/accel.sh@17 -- # local accel_module
00:10:15.993 10:52:22 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=:
00:10:15.993 10:52:22 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val
00:10:15.993 10:52:22 accel.accel_cdev_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y
00:10:15.993 10:52:22 accel.accel_cdev_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y
00:10:15.993 10:52:22 accel.accel_cdev_decomp -- accel/accel.sh@12 -- # build_accel_config
00:10:15.993 10:52:22 accel.accel_cdev_decomp -- accel/accel.sh@31 -- # accel_json_cfg=()
00:10:15.993 10:52:22 accel.accel_cdev_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:10:15.993 10:52:22 accel.accel_cdev_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:10:15.993 10:52:22 accel.accel_cdev_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:10:15.993 10:52:22 accel.accel_cdev_decomp -- accel/accel.sh@36 -- # [[ -n 1 ]]
00:10:15.993 10:52:23 accel.accel_cdev_decomp -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}')
00:10:15.993 10:52:23 accel.accel_cdev_decomp -- accel/accel.sh@40 -- # local IFS=,
00:10:15.993 10:52:23 accel.accel_cdev_decomp -- accel/accel.sh@41 -- # jq -r .
00:10:15.993 [2024-07-25 10:52:23.054268] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:10:15.993 [2024-07-25 10:52:23.054370] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3506218 ] 00:10:16.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.252 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:16.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.252 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:16.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.252 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:16.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.252 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:16.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.252 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:16.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.252 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:16.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.252 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:16.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.252 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:16.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.252 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:16.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.252 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:16.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.252 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:16.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.252 EAL: Requested device 0000:3d:02.3 cannot be used 
00:10:16.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.252 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:16.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.252 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:16.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.252 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:16.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.252 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:16.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.252 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:16.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.252 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:16.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.252 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:16.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.252 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:16.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.252 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:16.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.252 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:16.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.252 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:16.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.252 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:16.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.252 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:16.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.252 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:16.252 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.252 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:16.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.252 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:16.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.252 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:16.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.252 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:16.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.252 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:16.252 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:16.252 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:16.252 [2024-07-25 10:52:23.278583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.509 [2024-07-25 10:52:23.557759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.882 [2024-07-25 10:52:24.922960] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:10:17.882 [2024-07-25 10:52:24.925993] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016140 PMD being used: compress_qat 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:17.882 10:52:24 
accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=0x1 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:17.882 [2024-07-25 10:52:24.934422] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016220 PMD being used: compress_qat 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=decompress 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:17.882 10:52:24 
accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=32 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:17.882 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:17.883 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:17.883 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=32 00:10:17.883 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:17.883 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:17.883 10:52:24 
accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:17.883 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=1 00:10:17.883 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:17.883 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:17.883 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:17.883 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:10:17.883 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:17.883 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:17.883 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:17.883 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val=Yes 00:10:17.883 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:17.883 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:17.883 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:17.883 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:17.883 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:17.883 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:17.883 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:17.883 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:17.883 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:17.883 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:17.883 10:52:24 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:19.783 10:52:26 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:19.783 10:52:26 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:19.783 10:52:26 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:19.783 10:52:26 
accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:19.783 10:52:26 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:19.783 10:52:26 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:19.783 10:52:26 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:19.783 10:52:26 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:19.783 10:52:26 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:19.783 10:52:26 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:19.783 10:52:26 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:19.783 10:52:26 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:19.783 10:52:26 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:19.783 10:52:26 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:19.783 10:52:26 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:19.783 10:52:26 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:19.783 10:52:26 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:19.783 10:52:26 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:19.783 10:52:26 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:19.783 10:52:26 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:19.783 10:52:26 accel.accel_cdev_decomp -- accel/accel.sh@20 -- # val= 00:10:19.783 10:52:26 accel.accel_cdev_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:19.783 10:52:26 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:19.783 10:52:26 accel.accel_cdev_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:19.783 10:52:26 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:10:19.783 10:52:26 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:19.783 10:52:26 accel.accel_cdev_decomp -- accel/accel.sh@27 -- # [[ dpdk_compressdev 
== \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:10:19.783 00:10:19.783 real 0m3.681s 00:10:19.783 user 0m3.029s 00:10:19.783 sys 0m0.649s 00:10:19.783 10:52:26 accel.accel_cdev_decomp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:19.783 10:52:26 accel.accel_cdev_decomp -- common/autotest_common.sh@10 -- # set +x 00:10:19.783 ************************************ 00:10:19.783 END TEST accel_cdev_decomp 00:10:19.783 ************************************ 00:10:19.783 10:52:26 accel -- accel/accel.sh@129 -- # run_test accel_cdev_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:19.783 10:52:26 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:10:19.783 10:52:26 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:19.783 10:52:26 accel -- common/autotest_common.sh@10 -- # set +x 00:10:19.783 ************************************ 00:10:19.783 START TEST accel_cdev_decomp_full 00:10:19.783 ************************************ 00:10:19.783 10:52:26 accel.accel_cdev_decomp_full -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:19.783 10:52:26 accel.accel_cdev_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:10:19.783 10:52:26 accel.accel_cdev_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:10:19.783 10:52:26 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:19.783 10:52:26 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:19.783 10:52:26 accel.accel_cdev_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:19.783 10:52:26 accel.accel_cdev_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:19.783 10:52:26 accel.accel_cdev_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:10:19.783 10:52:26 accel.accel_cdev_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:19.783 10:52:26 accel.accel_cdev_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:19.783 10:52:26 accel.accel_cdev_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:19.783 10:52:26 accel.accel_cdev_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:19.783 10:52:26 accel.accel_cdev_decomp_full -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:10:19.783 10:52:26 accel.accel_cdev_decomp_full -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:10:19.783 10:52:26 accel.accel_cdev_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:10:19.783 10:52:26 accel.accel_cdev_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:10:19.783 [2024-07-25 10:52:26.818660] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:19.784 [2024-07-25 10:52:26.818767] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3506854 ] 00:10:20.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.042 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:20.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.042 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:20.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.042 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:20.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.042 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:20.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.042 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:20.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.042 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:20.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.042 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:20.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.042 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:20.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.042 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:20.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.042 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:20.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.042 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:20.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.042 EAL: Requested device 0000:3d:02.3 cannot be used 
00:10:20.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.042 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:20.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.042 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:20.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.042 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:20.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.042 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:20.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.042 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:20.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.042 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:20.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.042 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:20.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.042 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:20.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.042 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:20.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.042 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:20.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.042 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:20.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.042 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:20.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.042 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:20.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.042 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:20.042 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.042 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:20.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.042 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:20.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.042 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:20.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.042 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:20.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.042 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:20.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:20.042 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:20.042 [2024-07-25 10:52:27.044781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.300 [2024-07-25 10:52:27.336899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.674 [2024-07-25 10:52:28.745934] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:10:21.674 [2024-07-25 10:52:28.748993] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016140 PMD being used: compress_qat 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- 
# read -r var val 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:21.674 [2024-07-25 10:52:28.756733] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016220 PMD being used: compress_qat 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- 
accel/accel.sh@19 -- # read -r var val 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=32 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 
00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=32 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=1 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:21.674 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:21.675 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:21.675 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:21.675 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:21.675 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:21.675 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:21.675 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:21.675 10:52:28 accel.accel_cdev_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:10:21.675 10:52:28 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:23.622 10:52:30 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:23.622 10:52:30 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:23.622 10:52:30 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:23.622 10:52:30 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:23.622 10:52:30 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:23.622 10:52:30 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:23.622 10:52:30 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:23.622 10:52:30 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:23.622 10:52:30 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:23.622 10:52:30 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:23.622 10:52:30 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:23.622 10:52:30 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:23.622 10:52:30 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:23.622 10:52:30 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:23.622 10:52:30 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:23.622 10:52:30 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:23.622 10:52:30 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:23.622 10:52:30 accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:23.622 10:52:30 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:23.622 10:52:30 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:23.622 10:52:30 accel.accel_cdev_decomp_full -- accel/accel.sh@20 -- # val= 00:10:23.622 10:52:30 
accel.accel_cdev_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:23.622 10:52:30 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:23.622 10:52:30 accel.accel_cdev_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:23.622 10:52:30 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:10:23.622 10:52:30 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:23.622 10:52:30 accel.accel_cdev_decomp_full -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:10:23.622 00:10:23.622 real 0m3.773s 00:10:23.622 user 0m3.091s 00:10:23.622 sys 0m0.679s 00:10:23.622 10:52:30 accel.accel_cdev_decomp_full -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:23.622 10:52:30 accel.accel_cdev_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:10:23.622 ************************************ 00:10:23.622 END TEST accel_cdev_decomp_full 00:10:23.622 ************************************ 00:10:23.622 10:52:30 accel -- accel/accel.sh@130 -- # run_test accel_cdev_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:23.622 10:52:30 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:10:23.622 10:52:30 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:23.622 10:52:30 accel -- common/autotest_common.sh@10 -- # set +x 00:10:23.622 ************************************ 00:10:23.622 START TEST accel_cdev_decomp_mcore 00:10:23.622 ************************************ 00:10:23.622 10:52:30 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:23.622 10:52:30 accel.accel_cdev_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:10:23.622 10:52:30 accel.accel_cdev_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 
00:10:23.622 10:52:30 accel.accel_cdev_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:23.622 10:52:30 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:23.622 10:52:30 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:23.622 10:52:30 accel.accel_cdev_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:23.622 10:52:30 accel.accel_cdev_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:10:23.622 10:52:30 accel.accel_cdev_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:23.622 10:52:30 accel.accel_cdev_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:23.622 10:52:30 accel.accel_cdev_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:23.622 10:52:30 accel.accel_cdev_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:23.622 10:52:30 accel.accel_cdev_decomp_mcore -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:10:23.622 10:52:30 accel.accel_cdev_decomp_mcore -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:10:23.622 10:52:30 accel.accel_cdev_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:10:23.622 10:52:30 accel.accel_cdev_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:10:23.622 [2024-07-25 10:52:30.661693] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:23.622 [2024-07-25 10:52:30.661795] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3507464 ]
00:10:23.882 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:23.882 EAL: Requested device 0000:3d:01.0 cannot be used
00:10:23.882 [previous two messages repeated for each of the 32 QAT virtual functions 0000:3d:01.0-0000:3d:02.7 and 0000:3f:01.0-0000:3f:02.7]
00:10:23.882 [2024-07-25 10:52:30.885512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:24.141 [2024-07-25 10:52:31.158232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:10:24.141 [2024-07-25 10:52:31.158308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:10:24.141 [2024-07-25 10:52:31.158373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:10:24.141 [2024-07-25 10:52:31.158380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:10:25.520 [2024-07-25 10:52:32.530458] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD
00:10:25.520 [2024-07-25 10:52:32.533651] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00002d1a0 PMD being used: compress_qat
00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=
00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:25.520 [2024-07-25 10:52:32.544020] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000010100 PMD being used: compress_qat 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # 
val=decompress 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:25.520 [2024-07-25 10:52:32.546252] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000017100 PMD being used: compress_qat 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:25.520 10:52:32 
accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:10:25.520 [2024-07-25 10:52:32.550391] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000020160 PMD being used: compress_qat 00:10:25.520 [2024-07-25 10:52:32.550582] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00002d280 PMD being used: compress_qat 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:25.520 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:25.521 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:10:25.521 10:52:32 
accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:25.521 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:25.521 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:25.521 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:25.521 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:25.521 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:25.521 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:25.521 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:25.521 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:25.521 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:25.521 10:52:32 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:27.423 10:52:34 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:27.423 10:52:34 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:27.423 10:52:34 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:27.423 10:52:34 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:27.423 10:52:34 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:27.423 10:52:34 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:27.423 10:52:34 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:27.423 10:52:34 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:27.423 10:52:34 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:27.423 10:52:34 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:27.423 10:52:34 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:27.423 10:52:34 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 
-- # read -r var val 00:10:27.423 10:52:34 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:27.423 10:52:34 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:27.423 10:52:34 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:27.423 10:52:34 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:27.423 10:52:34 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:27.423 10:52:34 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:27.423 10:52:34 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:27.423 10:52:34 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:27.423 10:52:34 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:27.423 10:52:34 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:27.423 10:52:34 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:27.423 10:52:34 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:27.424 10:52:34 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:27.424 10:52:34 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:27.424 10:52:34 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:27.424 10:52:34 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:27.424 10:52:34 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:27.424 10:52:34 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:27.424 10:52:34 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:27.424 10:52:34 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:27.424 10:52:34 accel.accel_cdev_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:27.424 10:52:34 accel.accel_cdev_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:27.424 10:52:34 
accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:27.424 10:52:34 accel.accel_cdev_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:27.424 10:52:34 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:10:27.424 10:52:34 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:27.424 10:52:34 accel.accel_cdev_decomp_mcore -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:10:27.424 00:10:27.424 real 0m3.878s 00:10:27.424 user 0m11.413s 00:10:27.424 sys 0m0.679s 00:10:27.424 10:52:34 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:27.424 10:52:34 accel.accel_cdev_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:10:27.424 ************************************ 00:10:27.424 END TEST accel_cdev_decomp_mcore 00:10:27.424 ************************************ 00:10:27.424 10:52:34 accel -- accel/accel.sh@131 -- # run_test accel_cdev_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:27.424 10:52:34 accel -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:10:27.424 10:52:34 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:27.424 10:52:34 accel -- common/autotest_common.sh@10 -- # set +x 00:10:27.683 ************************************ 00:10:27.683 START TEST accel_cdev_decomp_full_mcore 00:10:27.683 ************************************ 00:10:27.683 10:52:34 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:27.683 10:52:34 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:10:27.683 10:52:34 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:10:27.683 10:52:34 accel.accel_cdev_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:10:27.683 10:52:34 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:27.683 10:52:34 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:27.683 10:52:34 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:27.683 10:52:34 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:10:27.683 10:52:34 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:27.683 10:52:34 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:27.683 10:52:34 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:27.683 10:52:34 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:27.683 10:52:34 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:10:27.683 10:52:34 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:10:27.683 10:52:34 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:10:27.683 10:52:34 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:10:27.683 [2024-07-25 10:52:34.641113] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:27.683 [2024-07-25 10:52:34.641234] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3508211 ]
00:10:27.683 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:10:27.683 EAL: Requested device 0000:3d:01.0 cannot be used
00:10:27.683 [previous two messages repeated for each of the 32 QAT virtual functions 0000:3d:01.0-0000:3d:02.7 and 0000:3f:01.0-0000:3f:02.7]
00:10:27.942 [2024-07-25 10:52:34.872388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:28.201 [2024-07-25 10:52:35.161021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:10:28.201 [2024-07-25 10:52:35.161094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:10:28.201 [2024-07-25 10:52:35.161169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:10:28.201 [2024-07-25 10:52:35.161176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:10:29.579 [2024-07-25 10:52:36.509974] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD
00:10:29.579 [2024-07-25 10:52:36.513150] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00002d1a0 PMD being used: compress_qat
00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read
-r var val 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:29.579 [2024-07-25 10:52:36.522538] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000010100 PMD being used: compress_qat 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- 
accel/accel.sh@19 -- # read -r var val 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:29.579 [2024-07-25 10:52:36.524575] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000017100 PMD being used: compress_qat 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- 
accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:29.579 [2024-07-25 10:52:36.528544] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000020160 PMD being used: compress_qat 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:10:29.579 [2024-07-25 10:52:36.528794] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00002d280 PMD being used: compress_qat 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:29.579 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:29.580 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:29.580 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:10:29.580 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:10:29.580 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:29.580 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:29.580 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:10:29.580 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:29.580 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:29.580 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:29.580 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:29.580 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:29.580 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:29.580 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:29.580 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:29.580 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:29.580 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:29.580 10:52:36 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:31.485 10:52:38 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:31.485 10:52:38 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:31.485 10:52:38 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:31.485 10:52:38 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:31.485 10:52:38 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:31.485 10:52:38 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:31.485 10:52:38 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:31.485 10:52:38 
accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:31.485 10:52:38 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:31.485 10:52:38 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:31.485 10:52:38 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:31.485 10:52:38 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:31.485 10:52:38 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:31.485 10:52:38 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:31.485 10:52:38 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:31.485 10:52:38 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:31.485 10:52:38 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:31.485 10:52:38 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:31.485 10:52:38 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:31.485 10:52:38 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:31.485 10:52:38 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:31.485 10:52:38 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:31.485 10:52:38 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:31.485 10:52:38 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:31.485 10:52:38 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:31.485 10:52:38 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:31.485 10:52:38 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:31.485 10:52:38 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:31.485 10:52:38 
accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:31.485 10:52:38 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:31.485 10:52:38 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:31.485 10:52:38 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:31.485 10:52:38 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:31.485 10:52:38 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:31.485 10:52:38 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:31.485 10:52:38 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:31.485 10:52:38 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:10:31.485 10:52:38 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:31.485 10:52:38 accel.accel_cdev_decomp_full_mcore -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:10:31.485 00:10:31.485 real 0m3.871s 00:10:31.485 user 0m0.028s 00:10:31.485 sys 0m0.003s 00:10:31.485 10:52:38 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:31.485 10:52:38 accel.accel_cdev_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:10:31.485 ************************************ 00:10:31.485 END TEST accel_cdev_decomp_full_mcore 00:10:31.485 ************************************ 00:10:31.485 10:52:38 accel -- accel/accel.sh@132 -- # run_test accel_cdev_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:31.485 10:52:38 accel -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:10:31.485 10:52:38 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:31.485 10:52:38 accel -- common/autotest_common.sh@10 -- # set +x 00:10:31.485 
************************************ 00:10:31.485 START TEST accel_cdev_decomp_mthread 00:10:31.485 ************************************ 00:10:31.485 10:52:38 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:31.485 10:52:38 accel.accel_cdev_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:10:31.485 10:52:38 accel.accel_cdev_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:10:31.485 10:52:38 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:31.485 10:52:38 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:31.485 10:52:38 accel.accel_cdev_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:31.485 10:52:38 accel.accel_cdev_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:31.485 10:52:38 accel.accel_cdev_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:10:31.485 10:52:38 accel.accel_cdev_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:31.485 10:52:38 accel.accel_cdev_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:31.485 10:52:38 accel.accel_cdev_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:31.485 10:52:38 accel.accel_cdev_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:31.485 10:52:38 accel.accel_cdev_decomp_mthread -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:10:31.485 10:52:38 accel.accel_cdev_decomp_mthread -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:10:31.485 10:52:38 accel.accel_cdev_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:10:31.485 
10:52:38 accel.accel_cdev_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:10:31.485 [2024-07-25 10:52:38.592108] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:31.485 [2024-07-25 10:52:38.592220] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3508859 ] 00:10:31.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.745 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:31.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.745 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:31.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.745 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:31.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.745 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:31.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.745 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:31.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.745 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:31.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.745 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:31.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.745 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:31.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.745 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:31.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.745 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:31.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.745 
EAL: Requested device 0000:3d:02.2 cannot be used 00:10:31.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.745 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:31.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.745 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:31.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.745 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:31.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.745 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:31.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.745 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:31.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.745 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:31.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.745 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:31.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.745 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:31.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.745 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:31.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.745 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:31.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.745 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:31.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.745 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:31.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.745 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:31.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.745 EAL: Requested device 
0000:3f:02.0 cannot be used 00:10:31.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.745 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:31.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.745 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:31.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.745 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:31.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.745 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:31.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.745 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:31.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.745 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:31.745 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:31.745 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:31.745 [2024-07-25 10:52:38.819644] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.004 [2024-07-25 10:52:39.109989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.910 [2024-07-25 10:52:40.535048] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:10:33.910 [2024-07-25 10:52:40.538234] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016140 PMD being used: compress_qat 00:10:33.910 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:33.910 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:33.910 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:33.910 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:33.910 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:33.910 10:52:40 
accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:33.910 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:33.910 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:33.910 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:33.910 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:33.910 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:33.910 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:33.910 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:10:33.910 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:33.910 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:33.910 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:33.910 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:33.910 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:33.910 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:33.910 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:33.910 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:33.910 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:33.910 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:33.910 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:33.910 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:10:33.910 [2024-07-25 10:52:40.548775] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016220 PMD being used: compress_qat 00:10:33.910 10:52:40 accel.accel_cdev_decomp_mthread 
-- accel/accel.sh@21 -- # case "$var" in 00:10:33.910 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:33.910 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:33.910 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:33.911 [2024-07-25 10:52:40.552934] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016300 PMD being used: compress_qat 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- 
accel/accel.sh@19 -- # IFS=: 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:33.911 10:52:40 
accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:33.911 10:52:40 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:35.288 10:52:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:35.288 10:52:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:35.288 10:52:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:35.288 10:52:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:35.288 10:52:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:35.288 10:52:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:35.288 10:52:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:35.288 10:52:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:35.288 10:52:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:35.288 10:52:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:35.288 10:52:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:35.288 10:52:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:35.288 10:52:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:35.288 10:52:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:35.288 10:52:42 
accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:35.288 10:52:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:35.288 10:52:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:35.288 10:52:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:35.288 10:52:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:35.288 10:52:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:35.288 10:52:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:35.288 10:52:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:35.288 10:52:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:35.288 10:52:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:35.288 10:52:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:35.288 10:52:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:35.288 10:52:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:35.288 10:52:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:35.288 10:52:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:10:35.288 10:52:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:35.288 10:52:42 accel.accel_cdev_decomp_mthread -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:10:35.288 00:10:35.288 real 0m3.761s 00:10:35.288 user 0m3.071s 00:10:35.288 sys 0m0.686s 00:10:35.288 10:52:42 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:35.288 10:52:42 accel.accel_cdev_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:10:35.288 ************************************ 00:10:35.288 END TEST accel_cdev_decomp_mthread 
00:10:35.288 ************************************ 00:10:35.288 10:52:42 accel -- accel/accel.sh@133 -- # run_test accel_cdev_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:35.288 10:52:42 accel -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:10:35.288 10:52:42 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:35.288 10:52:42 accel -- common/autotest_common.sh@10 -- # set +x 00:10:35.288 ************************************ 00:10:35.288 START TEST accel_cdev_decomp_full_mthread 00:10:35.288 ************************************ 00:10:35.288 10:52:42 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@1125 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:35.288 10:52:42 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:10:35.288 10:52:42 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:10:35.288 10:52:42 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:35.288 10:52:42 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:35.288 10:52:42 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:35.288 10:52:42 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:35.288 10:52:42 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:10:35.288 10:52:42 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:35.288 10:52:42 accel.accel_cdev_decomp_full_mthread -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:35.288 10:52:42 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:35.288 10:52:42 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:35.288 10:52:42 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n 1 ]] 00:10:35.288 10:52:42 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@37 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:10:35.288 10:52:42 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:10:35.288 10:52:42 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:10:35.548 [2024-07-25 10:52:42.435076] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:35.548 [2024-07-25 10:52:42.435187] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3509573 ] 00:10:35.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.548 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:35.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.548 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:35.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.548 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:35.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.548 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:35.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.548 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:35.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.548 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:35.548 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:10:35.548 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:35.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.548 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:35.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.548 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:35.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.548 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:35.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.548 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:35.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.548 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:35.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.548 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:35.548 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.548 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:35.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.549 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:35.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.549 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:35.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.549 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:35.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.549 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:35.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.549 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:35.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.549 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:35.549 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:10:35.549 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:35.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.549 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:35.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.549 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:35.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.549 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:35.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.549 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:35.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.549 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:35.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.549 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:35.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.549 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:35.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.549 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:35.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.549 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:35.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.549 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:35.549 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:35.549 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:35.549 [2024-07-25 10:52:42.658908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.115 [2024-07-25 10:52:42.944486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.492 [2024-07-25 10:52:44.355203] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:10:37.492 [2024-07-25 
10:52:44.358238] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016140 PMD being used: compress_qat 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:10:37.492 [2024-07-25 10:52:44.368406] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016220 PMD being used: compress_qat 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=dpdk_compressdev 00:10:37.492 10:52:44 
accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=dpdk_compressdev 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:37.492 
10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:37.492 [2024-07-25 10:52:44.377689] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000016300 PMD being used: compress_qat 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:37.492 10:52:44 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:39.421 10:52:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:39.421 10:52:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:39.421 10:52:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:39.421 10:52:46 
accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:39.421 10:52:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:39.421 10:52:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:39.421 10:52:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:39.421 10:52:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:39.421 10:52:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:39.421 10:52:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:39.421 10:52:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:39.421 10:52:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:39.421 10:52:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:39.421 10:52:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:39.421 10:52:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:39.421 10:52:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:39.421 10:52:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:39.421 10:52:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:39.421 10:52:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:39.421 10:52:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:39.421 10:52:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:39.421 10:52:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:39.421 10:52:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:39.421 10:52:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:10:39.421 10:52:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:39.421 10:52:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:39.421 10:52:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:39.421 10:52:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:39.421 10:52:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n dpdk_compressdev ]] 00:10:39.421 10:52:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:39.421 10:52:46 accel.accel_cdev_decomp_full_mthread -- accel/accel.sh@27 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:10:39.421 00:10:39.421 real 0m3.774s 00:10:39.421 user 0m3.123s 00:10:39.421 sys 0m0.646s 00:10:39.421 10:52:46 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:39.421 10:52:46 accel.accel_cdev_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:10:39.421 ************************************ 00:10:39.421 END TEST accel_cdev_decomp_full_mthread 00:10:39.421 ************************************ 00:10:39.421 10:52:46 accel -- accel/accel.sh@134 -- # unset COMPRESSDEV 00:10:39.421 10:52:46 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:10:39.421 10:52:46 accel -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:39.421 10:52:46 accel -- accel/accel.sh@137 -- # build_accel_config 00:10:39.421 10:52:46 accel -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:39.421 10:52:46 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:39.421 10:52:46 accel -- common/autotest_common.sh@10 -- # set +x 00:10:39.421 10:52:46 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:39.421 10:52:46 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:39.421 10:52:46 accel -- 
accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:39.421 10:52:46 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:39.421 10:52:46 accel -- accel/accel.sh@40 -- # local IFS=, 00:10:39.421 10:52:46 accel -- accel/accel.sh@41 -- # jq -r . 00:10:39.421 ************************************ 00:10:39.421 START TEST accel_dif_functional_tests 00:10:39.421 ************************************ 00:10:39.421 10:52:46 accel.accel_dif_functional_tests -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:10:39.421 [2024-07-25 10:52:46.338615] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:39.421 [2024-07-25 10:52:46.338722] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3510150 ] 00:10:39.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.421 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:39.421 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:39.422 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:39.681 [2024-07-25 10:52:46.564847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:39.983 [2024-07-25 10:52:46.854555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.983 [2024-07-25 10:52:46.854622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.983 [2024-07-25 10:52:46.854627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:40.263 00:10:40.263 00:10:40.263 CUnit - A unit testing framework for C - Version
2.1-3 00:10:40.263 http://cunit.sourceforge.net/ 00:10:40.263 00:10:40.263 00:10:40.263 Suite: accel_dif 00:10:40.263 Test: verify: DIF generated, GUARD check ...passed 00:10:40.263 Test: verify: DIF generated, APPTAG check ...passed 00:10:40.263 Test: verify: DIF generated, REFTAG check ...passed 00:10:40.263 Test: verify: DIF not generated, GUARD check ...[2024-07-25 10:52:47.318555] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:10:40.263 passed 00:10:40.263 Test: verify: DIF not generated, APPTAG check ...[2024-07-25 10:52:47.318652] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:10:40.263 passed 00:10:40.263 Test: verify: DIF not generated, REFTAG check ...[2024-07-25 10:52:47.318705] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:10:40.263 passed 00:10:40.263 Test: verify: APPTAG correct, APPTAG check ...passed 00:10:40.263 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-25 10:52:47.318800] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:10:40.263 passed 00:10:40.263 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:10:40.263 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:10:40.263 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:10:40.263 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-25 10:52:47.319006] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:10:40.263 passed 00:10:40.263 Test: verify copy: DIF generated, GUARD check ...passed 00:10:40.263 Test: verify copy: DIF generated, APPTAG check ...passed 00:10:40.263 Test: verify copy: DIF generated, REFTAG check ...passed 00:10:40.263 Test: verify copy: DIF not generated, GUARD check ...[2024-07-25 10:52:47.319276] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, 
Actual=7867 00:10:40.263 passed 00:10:40.263 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-25 10:52:47.319338] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:10:40.263 passed 00:10:40.263 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-25 10:52:47.319399] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:10:40.263 passed 00:10:40.263 Test: generate copy: DIF generated, GUARD check ...passed 00:10:40.263 Test: generate copy: DIF generated, APTTAG check ...passed 00:10:40.263 Test: generate copy: DIF generated, REFTAG check ...passed 00:10:40.263 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:10:40.263 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:10:40.263 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:10:40.263 Test: generate copy: iovecs-len validate ...[2024-07-25 10:52:47.319779] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:10:40.263 passed 00:10:40.263 Test: generate copy: buffer alignment validate ...passed 00:10:40.263 00:10:40.263 Run Summary: Type Total Ran Passed Failed Inactive 00:10:40.263 suites 1 1 n/a 0 0 00:10:40.263 tests 26 26 26 0 0 00:10:40.263 asserts 115 115 115 0 n/a 00:10:40.263 00:10:40.263 Elapsed time = 0.003 seconds 00:10:42.168 00:10:42.168 real 0m2.761s 00:10:42.168 user 0m5.522s 00:10:42.169 sys 0m0.373s 00:10:42.169 10:52:48 accel.accel_dif_functional_tests -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:42.169 10:52:48 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:10:42.169 ************************************ 00:10:42.169 END TEST accel_dif_functional_tests 00:10:42.169 ************************************ 00:10:42.169 00:10:42.169 real 1m52.625s 00:10:42.169 user 2m11.254s 00:10:42.169 sys 0m15.621s 00:10:42.169 10:52:49 accel -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:42.169 10:52:49 accel -- common/autotest_common.sh@10 -- # set +x 00:10:42.169 ************************************ 00:10:42.169 END TEST accel 00:10:42.169 ************************************ 00:10:42.169 10:52:49 -- spdk/autotest.sh@186 -- # run_test accel_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel_rpc.sh 00:10:42.169 10:52:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:42.169 10:52:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:42.169 10:52:49 -- common/autotest_common.sh@10 -- # set +x 00:10:42.169 ************************************ 00:10:42.169 START TEST accel_rpc 00:10:42.169 ************************************ 00:10:42.169 10:52:49 accel_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel_rpc.sh 00:10:42.169 * Looking for test storage... 
00:10:42.169 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel 00:10:42.169 10:52:49 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:42.169 10:52:49 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3510722 00:10:42.169 10:52:49 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3510722 00:10:42.169 10:52:49 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:10:42.169 10:52:49 accel_rpc -- common/autotest_common.sh@831 -- # '[' -z 3510722 ']' 00:10:42.169 10:52:49 accel_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.169 10:52:49 accel_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:42.169 10:52:49 accel_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.169 10:52:49 accel_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:42.169 10:52:49 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:42.428 [2024-07-25 10:52:49.359492] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:42.428 [2024-07-25 10:52:49.359608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3510722 ] 00:10:42.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.428 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:42.428 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:42.429 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:42.688 [2024-07-25 10:52:49.585640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.946 [2024-07-25 10:52:49.864587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.206 10:52:50 accel_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:43.206 10:52:50 accel_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:43.206 10:52:50 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:10:43.206 10:52:50 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:10:43.206 10:52:50 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:10:43.206 10:52:50 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:10:43.206 10:52:50 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:10:43.206 10:52:50 accel_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:43.206 10:52:50 accel_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:43.206 10:52:50 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:43.206 ************************************ 00:10:43.206 START TEST accel_assign_opcode
************************************ 00:10:43.206 10:52:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1125 -- # accel_assign_opcode_test_suite 00:10:43.206 10:52:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:10:43.206 10:52:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.206 10:52:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:10:43.206 [2024-07-25 10:52:50.238419] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:10:43.206 10:52:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.206 10:52:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:10:43.206 10:52:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.206 10:52:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:10:43.206 [2024-07-25 10:52:50.246391] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:10:43.206 10:52:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.206 10:52:50 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:10:43.206 10:52:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.206 10:52:50 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:10:44.583 10:52:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.583 10:52:51 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:10:44.583 10:52:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.583 10:52:51 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@10 -- # set +x 00:10:44.583 10:52:51 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:10:44.583 10:52:51 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:10:44.583 10:52:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.583 software 00:10:44.583 00:10:44.583 real 0m1.286s 00:10:44.583 user 0m0.043s 00:10:44.583 sys 0m0.009s 00:10:44.583 10:52:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:44.583 10:52:51 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:10:44.583 ************************************ 00:10:44.583 END TEST accel_assign_opcode 00:10:44.583 ************************************ 00:10:44.583 10:52:51 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3510722 00:10:44.583 10:52:51 accel_rpc -- common/autotest_common.sh@950 -- # '[' -z 3510722 ']' 00:10:44.583 10:52:51 accel_rpc -- common/autotest_common.sh@954 -- # kill -0 3510722 00:10:44.583 10:52:51 accel_rpc -- common/autotest_common.sh@955 -- # uname 00:10:44.583 10:52:51 accel_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:44.583 10:52:51 accel_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3510722 00:10:44.583 10:52:51 accel_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:44.583 10:52:51 accel_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:44.583 10:52:51 accel_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3510722' 00:10:44.583 killing process with pid 3510722 00:10:44.583 10:52:51 accel_rpc -- common/autotest_common.sh@969 -- # kill 3510722 00:10:44.583 10:52:51 accel_rpc -- common/autotest_common.sh@974 -- # wait 3510722 00:10:47.871 00:10:47.871 real 0m5.800s 00:10:47.871 user 0m5.630s 00:10:47.871 sys 0m0.769s 00:10:47.871 10:52:54 accel_rpc -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:10:47.871 10:52:54 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.871 ************************************ 00:10:47.871 END TEST accel_rpc 00:10:47.871 ************************************ 00:10:47.871 10:52:54 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/cmdline.sh 00:10:47.871 10:52:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:47.871 10:52:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:47.871 10:52:54 -- common/autotest_common.sh@10 -- # set +x 00:10:48.130 ************************************ 00:10:48.130 START TEST app_cmdline 00:10:48.130 ************************************ 00:10:48.130 10:52:55 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/cmdline.sh 00:10:48.130 * Looking for test storage... 00:10:48.130 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:10:48.130 10:52:55 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:48.130 10:52:55 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3511845 00:10:48.130 10:52:55 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3511845 00:10:48.130 10:52:55 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:48.130 10:52:55 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 3511845 ']' 00:10:48.130 10:52:55 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.130 10:52:55 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:48.130 10:52:55 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:48.130 10:52:55 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:48.130 10:52:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:48.130 [2024-07-25 10:52:55.235368] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:48.130 [2024-07-25 10:52:55.235493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3511845 ] 00:10:48.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.390 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:48.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.390 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:48.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.390 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:48.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.390 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:48.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.390 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:48.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.390 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:48.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.390 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:48.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.390 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:48.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.390 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:48.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.390 EAL: Requested device 0000:3d:02.1 cannot be used 
00:10:48.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.390 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:48.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.390 EAL: Requested device 0000:3d:02.3 cannot be used 00:10:48.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.390 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:48.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.390 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:48.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.390 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:48.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.390 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:48.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.390 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:48.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.390 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:48.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.390 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:48.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.390 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:48.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.390 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:48.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.390 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:48.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.390 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:48.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.390 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:48.390 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.390 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:48.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.390 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:48.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.390 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:48.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.390 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:48.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.390 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:48.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.390 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:48.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.390 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:48.390 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:48.390 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:48.390 [2024-07-25 10:52:55.459470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.650 [2024-07-25 10:52:55.753156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.028 10:52:57 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:50.028 10:52:57 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:10:50.028 10:52:57 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:10:50.287 { 00:10:50.287 "version": "SPDK v24.09-pre git sha1 704257090", 00:10:50.287 "fields": { 00:10:50.287 "major": 24, 00:10:50.287 "minor": 9, 00:10:50.287 "patch": 0, 00:10:50.287 "suffix": "-pre", 00:10:50.287 "commit": "704257090" 00:10:50.287 } 00:10:50.287 } 00:10:50.287 10:52:57 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:10:50.287 
10:52:57 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:50.287 10:52:57 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:10:50.287 10:52:57 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:50.287 10:52:57 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:50.287 10:52:57 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.287 10:52:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:50.287 10:52:57 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:50.287 10:52:57 app_cmdline -- app/cmdline.sh@26 -- # sort 00:10:50.287 10:52:57 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.287 10:52:57 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:50.287 10:52:57 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:50.287 10:52:57 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:50.287 10:52:57 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:10:50.287 10:52:57 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:50.287 10:52:57 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:50.287 10:52:57 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:50.287 10:52:57 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:50.287 10:52:57 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:50.287 10:52:57 app_cmdline -- common/autotest_common.sh@644 -- # type -P 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:50.287 10:52:57 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:50.287 10:52:57 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:50.287 10:52:57 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:10:50.287 10:52:57 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:50.287 request: 00:10:50.287 { 00:10:50.287 "method": "env_dpdk_get_mem_stats", 00:10:50.287 "req_id": 1 00:10:50.287 } 00:10:50.287 Got JSON-RPC error response 00:10:50.287 response: 00:10:50.287 { 00:10:50.287 "code": -32601, 00:10:50.287 "message": "Method not found" 00:10:50.287 } 00:10:50.546 10:52:57 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:10:50.546 10:52:57 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:50.546 10:52:57 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:50.546 10:52:57 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:50.546 10:52:57 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3511845 00:10:50.546 10:52:57 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 3511845 ']' 00:10:50.546 10:52:57 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 3511845 00:10:50.546 10:52:57 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:10:50.546 10:52:57 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:50.546 10:52:57 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3511845 00:10:50.546 10:52:57 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:50.546 10:52:57 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:50.546 10:52:57 app_cmdline -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 3511845' 00:10:50.546 killing process with pid 3511845 00:10:50.546 10:52:57 app_cmdline -- common/autotest_common.sh@969 -- # kill 3511845 00:10:50.546 10:52:57 app_cmdline -- common/autotest_common.sh@974 -- # wait 3511845 00:10:53.837 00:10:53.837 real 0m5.775s 00:10:53.837 user 0m5.838s 00:10:53.837 sys 0m0.742s 00:10:53.837 10:53:00 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:53.837 10:53:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:53.837 ************************************ 00:10:53.837 END TEST app_cmdline 00:10:53.837 ************************************ 00:10:53.837 10:53:00 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/version.sh 00:10:53.837 10:53:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:53.837 10:53:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:53.837 10:53:00 -- common/autotest_common.sh@10 -- # set +x 00:10:53.837 ************************************ 00:10:53.837 START TEST version 00:10:53.837 ************************************ 00:10:53.837 10:53:00 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/version.sh 00:10:54.096 * Looking for test storage... 
00:10:54.096 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:10:54.096 10:53:00 version -- app/version.sh@17 -- # get_header_version major 00:10:54.097 10:53:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:10:54.097 10:53:00 version -- app/version.sh@14 -- # cut -f2 00:10:54.097 10:53:00 version -- app/version.sh@14 -- # tr -d '"' 00:10:54.097 10:53:00 version -- app/version.sh@17 -- # major=24 00:10:54.097 10:53:00 version -- app/version.sh@18 -- # get_header_version minor 00:10:54.097 10:53:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:10:54.097 10:53:00 version -- app/version.sh@14 -- # tr -d '"' 00:10:54.097 10:53:00 version -- app/version.sh@14 -- # cut -f2 00:10:54.097 10:53:00 version -- app/version.sh@18 -- # minor=9 00:10:54.097 10:53:00 version -- app/version.sh@19 -- # get_header_version patch 00:10:54.097 10:53:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:10:54.097 10:53:00 version -- app/version.sh@14 -- # cut -f2 00:10:54.097 10:53:00 version -- app/version.sh@14 -- # tr -d '"' 00:10:54.097 10:53:00 version -- app/version.sh@19 -- # patch=0 00:10:54.097 10:53:00 version -- app/version.sh@20 -- # get_header_version suffix 00:10:54.097 10:53:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:10:54.097 10:53:00 version -- app/version.sh@14 -- # tr -d '"' 00:10:54.097 10:53:00 version -- app/version.sh@14 -- # cut -f2 00:10:54.097 10:53:01 version -- app/version.sh@20 -- # suffix=-pre 00:10:54.097 10:53:01 version -- app/version.sh@22 -- # version=24.9 00:10:54.097 10:53:01 
version -- app/version.sh@25 -- # (( patch != 0 )) 00:10:54.097 10:53:01 version -- app/version.sh@28 -- # version=24.9rc0 00:10:54.097 10:53:01 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:10:54.097 10:53:01 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:54.097 10:53:01 version -- app/version.sh@30 -- # py_version=24.9rc0 00:10:54.097 10:53:01 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:10:54.097 00:10:54.097 real 0m0.193s 00:10:54.097 user 0m0.086s 00:10:54.097 sys 0m0.153s 00:10:54.097 10:53:01 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:54.097 10:53:01 version -- common/autotest_common.sh@10 -- # set +x 00:10:54.097 ************************************ 00:10:54.097 END TEST version 00:10:54.097 ************************************ 00:10:54.097 10:53:01 -- spdk/autotest.sh@192 -- # '[' 1 -eq 1 ']' 00:10:54.097 10:53:01 -- spdk/autotest.sh@193 -- # run_test blockdev_general /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh 00:10:54.097 10:53:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:54.097 10:53:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:54.097 10:53:01 -- common/autotest_common.sh@10 -- # set +x 00:10:54.097 ************************************ 00:10:54.097 START TEST blockdev_general 00:10:54.097 ************************************ 00:10:54.097 10:53:01 blockdev_general -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh 00:10:54.356 * Looking for test storage... 
00:10:54.356 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:10:54.356 10:53:01 blockdev_general -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:10:54.356 10:53:01 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:10:54.356 10:53:01 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:10:54.356 10:53:01 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:10:54.356 10:53:01 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:10:54.356 10:53:01 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:10:54.356 10:53:01 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:10:54.356 10:53:01 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:10:54.356 10:53:01 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:10:54.356 10:53:01 blockdev_general -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:10:54.356 10:53:01 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:10:54.356 10:53:01 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:10:54.356 10:53:01 blockdev_general -- bdev/blockdev.sh@673 -- # uname -s 00:10:54.356 10:53:01 blockdev_general -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:10:54.356 10:53:01 blockdev_general -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:10:54.356 10:53:01 blockdev_general -- bdev/blockdev.sh@681 -- # test_type=bdev 00:10:54.356 10:53:01 blockdev_general -- bdev/blockdev.sh@682 -- # crypto_device= 00:10:54.356 10:53:01 blockdev_general -- bdev/blockdev.sh@683 -- # dek= 00:10:54.356 10:53:01 blockdev_general -- bdev/blockdev.sh@684 -- # env_ctx= 00:10:54.356 10:53:01 blockdev_general -- 
bdev/blockdev.sh@685 -- # wait_for_rpc= 00:10:54.356 10:53:01 blockdev_general -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:10:54.356 10:53:01 blockdev_general -- bdev/blockdev.sh@689 -- # [[ bdev == bdev ]] 00:10:54.356 10:53:01 blockdev_general -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc 00:10:54.356 10:53:01 blockdev_general -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:10:54.356 10:53:01 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=3513011 00:10:54.356 10:53:01 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:54.356 10:53:01 blockdev_general -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:10:54.356 10:53:01 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 3513011 00:10:54.357 10:53:01 blockdev_general -- common/autotest_common.sh@831 -- # '[' -z 3513011 ']' 00:10:54.357 10:53:01 blockdev_general -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.357 10:53:01 blockdev_general -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:54.357 10:53:01 blockdev_general -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:54.357 10:53:01 blockdev_general -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:54.357 10:53:01 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:54.357 [2024-07-25 10:53:01.368841] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:54.357 [2024-07-25 10:53:01.368961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3513011 ] 00:10:54.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:54.615 EAL: Requested device 0000:3d:01.0 cannot be used 00:10:54.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:54.615 EAL: Requested device 0000:3d:01.1 cannot be used 00:10:54.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:54.615 EAL: Requested device 0000:3d:01.2 cannot be used 00:10:54.615 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:54.616 EAL: Requested device 0000:3d:01.3 cannot be used 00:10:54.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:54.616 EAL: Requested device 0000:3d:01.4 cannot be used 00:10:54.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:54.616 EAL: Requested device 0000:3d:01.5 cannot be used 00:10:54.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:54.616 EAL: Requested device 0000:3d:01.6 cannot be used 00:10:54.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:54.616 EAL: Requested device 0000:3d:01.7 cannot be used 00:10:54.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:54.616 EAL: Requested device 0000:3d:02.0 cannot be used 00:10:54.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:54.616 EAL: Requested device 0000:3d:02.1 cannot be used 00:10:54.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:54.616 EAL: Requested device 0000:3d:02.2 cannot be used 00:10:54.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:54.616 EAL: Requested device 0000:3d:02.3 cannot be used 
00:10:54.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:54.616 EAL: Requested device 0000:3d:02.4 cannot be used 00:10:54.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:54.616 EAL: Requested device 0000:3d:02.5 cannot be used 00:10:54.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:54.616 EAL: Requested device 0000:3d:02.6 cannot be used 00:10:54.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:54.616 EAL: Requested device 0000:3d:02.7 cannot be used 00:10:54.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:54.616 EAL: Requested device 0000:3f:01.0 cannot be used 00:10:54.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:54.616 EAL: Requested device 0000:3f:01.1 cannot be used 00:10:54.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:54.616 EAL: Requested device 0000:3f:01.2 cannot be used 00:10:54.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:54.616 EAL: Requested device 0000:3f:01.3 cannot be used 00:10:54.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:54.616 EAL: Requested device 0000:3f:01.4 cannot be used 00:10:54.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:54.616 EAL: Requested device 0000:3f:01.5 cannot be used 00:10:54.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:54.616 EAL: Requested device 0000:3f:01.6 cannot be used 00:10:54.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:54.616 EAL: Requested device 0000:3f:01.7 cannot be used 00:10:54.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:54.616 EAL: Requested device 0000:3f:02.0 cannot be used 00:10:54.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:54.616 EAL: Requested device 0000:3f:02.1 cannot be used 00:10:54.616 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:54.616 EAL: Requested device 0000:3f:02.2 cannot be used 00:10:54.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:54.616 EAL: Requested device 0000:3f:02.3 cannot be used 00:10:54.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:54.616 EAL: Requested device 0000:3f:02.4 cannot be used 00:10:54.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:54.616 EAL: Requested device 0000:3f:02.5 cannot be used 00:10:54.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:54.616 EAL: Requested device 0000:3f:02.6 cannot be used 00:10:54.616 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:10:54.616 EAL: Requested device 0000:3f:02.7 cannot be used 00:10:54.616 [2024-07-25 10:53:01.582133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.874 [2024-07-25 10:53:01.865183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.133 10:53:02 blockdev_general -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:55.133 10:53:02 blockdev_general -- common/autotest_common.sh@864 -- # return 0 00:10:55.133 10:53:02 blockdev_general -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:10:55.133 10:53:02 blockdev_general -- bdev/blockdev.sh@695 -- # setup_bdev_conf 00:10:55.133 10:53:02 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:10:55.133 10:53:02 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.133 10:53:02 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:56.510 [2024-07-25 10:53:03.425810] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:56.510 [2024-07-25 10:53:03.425872] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:10:56.510 00:10:56.510 [2024-07-25 10:53:03.433784] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: Malloc2 00:10:56.510 [2024-07-25 10:53:03.433823] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:10:56.510 00:10:56.510 Malloc0 00:10:56.510 Malloc1 00:10:56.510 Malloc2 00:10:56.769 Malloc3 00:10:56.769 Malloc4 00:10:56.769 Malloc5 00:10:56.769 Malloc6 00:10:57.028 Malloc7 00:10:57.028 Malloc8 00:10:57.028 Malloc9 00:10:57.028 [2024-07-25 10:53:04.032541] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:57.028 [2024-07-25 10:53:04.032602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.028 [2024-07-25 10:53:04.032630] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000045080 00:10:57.028 [2024-07-25 10:53:04.032650] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.028 [2024-07-25 10:53:04.035344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.028 [2024-07-25 10:53:04.035376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:10:57.028 TestPT 00:10:57.028 10:53:04 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.028 10:53:04 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile bs=2048 count=5000 00:10:57.028 5000+0 records in 00:10:57.028 5000+0 records out 00:10:57.028 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0267956 s, 382 MB/s 00:10:57.028 10:53:04 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile AIO0 2048 00:10:57.028 10:53:04 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.028 10:53:04 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:57.028 AIO0 00:10:57.028 10:53:04 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.028 
10:53:04 blockdev_general -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:10:57.028 10:53:04 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.028 10:53:04 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:57.028 10:53:04 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.028 10:53:04 blockdev_general -- bdev/blockdev.sh@739 -- # cat 00:10:57.028 10:53:04 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:10:57.028 10:53:04 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.028 10:53:04 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:57.365 10:53:04 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.365 10:53:04 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:10:57.365 10:53:04 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.365 10:53:04 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:57.365 10:53:04 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.365 10:53:04 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:10:57.365 10:53:04 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.365 10:53:04 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:10:57.365 10:53:04 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.365 10:53:04 blockdev_general -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:10:57.365 10:53:04 blockdev_general -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:10:57.365 10:53:04 blockdev_general -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.365 10:53:04 blockdev_general -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:10:57.365 10:53:04 blockdev_general -- common/autotest_common.sh@10 -- # set +x 
00:10:57.365 10:53:04 blockdev_general -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.365 10:53:04 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:10:57.365 10:53:04 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r .name 00:10:57.367 10:53:04 blockdev_general -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "48c785e8-73be-4058-a5bc-d65927ffc9ab"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "48c785e8-73be-4058-a5bc-d65927ffc9ab",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "4c486317-f141-551d-a7b1-ee46b02c8702"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "4c486317-f141-551d-a7b1-ee46b02c8702",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": 
false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "29676677-c435-5077-aec0-f1dc507d0bf6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "29676677-c435-5077-aec0-f1dc507d0bf6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "8acff103-4115-5953-a165-7533217418e7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8acff103-4115-5953-a165-7533217418e7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' 
"get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "fd1229df-8770-50ce-986b-501b51bb8807"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "fd1229df-8770-50ce-986b-501b51bb8807",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "f038e4ee-9531-5903-9264-ad813f8c6f45"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f038e4ee-9531-5903-9264-ad813f8c6f45",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' 
' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "1a0f0e80-201f-5b58-a22c-e608ecb25d1e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1a0f0e80-201f-5b58-a22c-e608ecb25d1e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "803b368a-7019-5edb-a011-a3dfbecc528a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "803b368a-7019-5edb-a011-a3dfbecc528a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "874dd9e9-e948-5810-8698-728f69d16eee"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "874dd9e9-e948-5810-8698-728f69d16eee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "53c06951-3d4e-5f0d-8162-a369593f8de2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "53c06951-3d4e-5f0d-8162-a369593f8de2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' 
"seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "f1bbb12f-d2fc-5d2d-b8bc-cbef6983a658"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f1bbb12f-d2fc-5d2d-b8bc-cbef6983a658",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "15f8b0de-7e5e-5ecb-bf60-91f623791d50"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "15f8b0de-7e5e-5ecb-bf60-91f623791d50",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": 
true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "928e9527-ca4d-4c7c-9fa7-80fe7b36d02c"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "928e9527-ca4d-4c7c-9fa7-80fe7b36d02c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "928e9527-ca4d-4c7c-9fa7-80fe7b36d02c",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "f2319c57-6c51-4f84-ae9d-332b36f69832",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": 
"4e36abdb-0402-4bd2-a443-f6151dcc5e57",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "65e17940-7cfd-4572-a9b7-6d8e948463de"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "65e17940-7cfd-4572-a9b7-6d8e948463de",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "65e17940-7cfd-4572-a9b7-6d8e948463de",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "97253799-5b6d-4f5b-b721-2d2bb1dfcd92",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "5b9c2b5e-39a4-42e0-afc1-008e1b24906b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' 
"89f0e264-b5dd-4f84-8b33-49429b82ed91"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "89f0e264-b5dd-4f84-8b33-49429b82ed91",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "89f0e264-b5dd-4f84-8b33-49429b82ed91",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "27c805ed-92cf-4680-8319-70d39a03ea8b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "ecc94859-0111-430d-accc-948d3eac049a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "0e4695f5-1296-4e7b-a0fc-52e48de70e63"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "0e4695f5-1296-4e7b-a0fc-52e48de70e63",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:10:57.626 10:53:04 blockdev_general -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:10:57.626 10:53:04 blockdev_general -- bdev/blockdev.sh@751 -- # hello_world_bdev=Malloc0 00:10:57.626 10:53:04 blockdev_general -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:10:57.626 10:53:04 blockdev_general -- bdev/blockdev.sh@753 -- # killprocess 3513011 00:10:57.626 10:53:04 blockdev_general -- common/autotest_common.sh@950 -- # '[' -z 3513011 ']' 00:10:57.626 10:53:04 blockdev_general -- common/autotest_common.sh@954 -- # kill -0 3513011 00:10:57.626 10:53:04 blockdev_general -- common/autotest_common.sh@955 -- # uname 00:10:57.626 10:53:04 blockdev_general -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:57.626 10:53:04 blockdev_general -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3513011 00:10:57.626 10:53:04 blockdev_general -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:57.626 10:53:04 blockdev_general -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:57.626 10:53:04 blockdev_general -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 3513011' 00:10:57.626 killing process with pid 3513011 00:10:57.626 10:53:04 blockdev_general -- common/autotest_common.sh@969 -- # kill 3513011 00:10:57.626 10:53:04 blockdev_general -- common/autotest_common.sh@974 -- # wait 3513011 00:11:02.898 10:53:09 blockdev_general -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:02.898 10:53:09 blockdev_general -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b Malloc0 '' 00:11:02.898 10:53:09 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:02.898 10:53:09 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:02.898 10:53:09 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:02.898 ************************************ 00:11:02.898 START TEST bdev_hello_world 00:11:02.898 ************************************ 00:11:02.898 10:53:09 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b Malloc0 '' 00:11:02.898 [2024-07-25 10:53:09.224043] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
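The teardown just above (killprocess 3513011) follows the autotest_common.sh pattern visible in the trace: check the pid is non-empty, probe it with kill -0, then kill and wait. A simplified paraphrase (the real helper's sudo/reactor_0 guard seen in the trace is deliberately omitted):

```shell
# Paraphrased killprocess: the real helper also inspects the process name and
# refuses to signal sudo; that guard is omitted in this sketch.
killprocess() {
  local pid=$1
  [ -z "$pid" ] && return 1                # the '[' -z $pid ']' check from the log
  kill -0 "$pid" 2>/dev/null || return 0   # nothing to do if it is already gone
  kill "$pid"
  wait "$pid" 2>/dev/null || true          # reap it, mirroring the final 'wait'
}

sleep 30 &                                 # stand-in for the SPDK app under test
pid=$!
killprocess "$pid"
```

After killprocess returns, a further `kill -0 "$pid"` fails, which is exactly the state the harness needs before re-launching the target for the next stage.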
00:11:02.898 [2024-07-25 10:53:09.224159] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3514346 ] 00:11:02.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.898 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:02.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.898 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:02.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.898 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:02.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.898 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:02.898 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.898 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:02.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.899 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:02.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.899 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:02.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.899 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:02.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.899 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:02.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.899 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:02.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.899 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:02.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.899 EAL: Requested device 0000:3d:02.3 cannot be used 
00:11:02.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.899 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:02.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.899 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:02.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.899 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:02.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.899 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:02.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.899 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:02.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.899 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:02.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.899 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:02.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.899 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:02.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.899 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:02.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.899 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:02.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.899 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:02.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.899 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:02.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.899 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:02.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.899 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:02.899 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.899 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:02.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.899 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:02.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.899 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:02.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.899 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:02.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.899 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:02.899 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:02.899 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:02.899 [2024-07-25 10:53:09.448787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.899 [2024-07-25 10:53:09.721251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.466 [2024-07-25 10:53:10.316255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:03.466 [2024-07-25 10:53:10.316336] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:03.466 [2024-07-25 10:53:10.316360] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:03.466 [2024-07-25 10:53:10.324224] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:03.466 [2024-07-25 10:53:10.324268] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:03.466 [2024-07-25 10:53:10.332227] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:03.466 [2024-07-25 10:53:10.332265] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:03.466 [2024-07-25 10:53:10.573338] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:03.466 [2024-07-25 10:53:10.573401] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.466 [2024-07-25 10:53:10.573422] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042680 00:11:03.466 [2024-07-25 10:53:10.573438] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.466 [2024-07-25 10:53:10.576152] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.466 [2024-07-25 10:53:10.576189] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:04.034 [2024-07-25 10:53:11.015196] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:11:04.034 [2024-07-25 10:53:11.015277] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:11:04.034 [2024-07-25 10:53:11.015348] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:11:04.034 [2024-07-25 10:53:11.015445] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:11:04.034 [2024-07-25 10:53:11.015545] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:11:04.034 [2024-07-25 10:53:11.015585] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:11:04.034 [2024-07-25 10:53:11.015676] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
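The bdev table dumped earlier in this run is produced by `bdev_get_bdevs` piped through jq to keep only unclaimed bdevs and then extract their names (blockdev.sh lines 747-748). A standalone sketch using a hypothetical two-entry stand-in for the RPC output:

```shell
# Two fake entries standing in for real bdev_get_bdevs output; only the
# fields the filter actually touches are included.
cat > /tmp/bdevs.json <<'EOF'
[
  {"name": "Malloc0", "claimed": false},
  {"name": "TestPT",  "claimed": true}
]
EOF

# Same two-stage filter as the harness: drop claimed bdevs, then pull names.
jq -r '.[] | select(.claimed == false)' /tmp/bdevs.json | jq -r .name   # → Malloc0
```

In the harness both stages feed `mapfile -t`, so `bdevs` holds the filtered JSON objects and `bdevs_name` the plain name list used to pick `hello_world_bdev=Malloc0`.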
00:11:04.034 00:11:04.034 [2024-07-25 10:53:11.015730] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:11:07.334 00:11:07.334 real 0m4.963s 00:11:07.334 user 0m4.405s 00:11:07.334 sys 0m0.498s 00:11:07.334 10:53:14 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:07.334 10:53:14 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:11:07.334 ************************************ 00:11:07.334 END TEST bdev_hello_world 00:11:07.334 ************************************ 00:11:07.334 10:53:14 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:11:07.334 10:53:14 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:07.334 10:53:14 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:07.334 10:53:14 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:07.334 ************************************ 00:11:07.334 START TEST bdev_bounds 00:11:07.334 ************************************ 00:11:07.334 10:53:14 blockdev_general.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:11:07.334 10:53:14 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=3515161 00:11:07.334 10:53:14 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:11:07.334 10:53:14 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 3515161' 00:11:07.334 Process bdevio pid: 3515161 00:11:07.334 10:53:14 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 3515161 00:11:07.334 10:53:14 blockdev_general.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 3515161 ']' 00:11:07.334 10:53:14 blockdev_general.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.334 10:53:14 blockdev_general.bdev_bounds -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:11:07.334 10:53:14 blockdev_general.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.334 10:53:14 blockdev_general.bdev_bounds -- bdev/blockdev.sh@288 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:11:07.334 10:53:14 blockdev_general.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:07.334 10:53:14 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:07.334 [2024-07-25 10:53:14.266396] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:07.334 [2024-07-25 10:53:14.266513] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3515161 ] 00:11:07.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.334 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:07.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.334 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:07.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.334 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:07.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.334 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:07.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.334 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:07.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.334 EAL: Requested device 0000:3d:01.5 cannot be 
used 00:11:07.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.334 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:07.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.334 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:07.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.334 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:07.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.334 EAL: Requested device 0000:3d:02.1 cannot be used 00:11:07.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.334 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:07.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.334 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:07.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.334 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:07.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.334 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:07.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.334 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:07.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.334 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:07.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.334 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:07.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.334 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:07.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.334 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:07.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.334 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:07.334 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.334 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:07.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.334 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:07.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.334 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:07.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.334 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:07.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.334 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:07.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.334 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:07.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.334 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:07.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.334 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:07.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.334 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:07.334 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.334 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:07.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.335 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:07.335 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:07.335 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:07.593 [2024-07-25 10:53:14.491183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:07.852 [2024-07-25 10:53:14.772287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.852 [2024-07-25 10:53:14.772360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 
00:11:07.852 [2024-07-25 10:53:14.772362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:08.419 [2024-07-25 10:53:15.306650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:08.419 [2024-07-25 10:53:15.306715] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:08.419 [2024-07-25 10:53:15.306733] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:08.419 [2024-07-25 10:53:15.314659] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:08.419 [2024-07-25 10:53:15.314702] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:08.419 [2024-07-25 10:53:15.322665] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:08.419 [2024-07-25 10:53:15.322702] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:08.677 [2024-07-25 10:53:15.558402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:08.677 [2024-07-25 10:53:15.558471] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.677 [2024-07-25 10:53:15.558492] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80 00:11:08.677 [2024-07-25 10:53:15.558507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.677 [2024-07-25 10:53:15.561316] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.677 [2024-07-25 10:53:15.561350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:08.936 10:53:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:08.936 10:53:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:11:08.936 10:53:16 blockdev_general.bdev_bounds -- 
bdev/blockdev.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:11:09.194 I/O targets: 00:11:09.194 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:11:09.195 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:11:09.195 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:11:09.195 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:11:09.195 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:11:09.195 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:11:09.195 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:11:09.195 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:11:09.195 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:11:09.195 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:11:09.195 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:11:09.195 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:11:09.195 raid0: 131072 blocks of 512 bytes (64 MiB) 00:11:09.195 concat0: 131072 blocks of 512 bytes (64 MiB) 00:11:09.195 raid1: 65536 blocks of 512 bytes (32 MiB) 00:11:09.195 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:11:09.195 00:11:09.195 00:11:09.195 CUnit - A unit testing framework for C - Version 2.1-3 00:11:09.195 http://cunit.sourceforge.net/ 00:11:09.195 00:11:09.195 00:11:09.195 Suite: bdevio tests on: AIO0 00:11:09.195 Test: blockdev write read block ...passed 00:11:09.195 Test: blockdev write zeroes read block ...passed 00:11:09.195 Test: blockdev write zeroes read no split ...passed 00:11:09.195 Test: blockdev write zeroes read split ...passed 00:11:09.195 Test: blockdev write zeroes read split partial ...passed 00:11:09.195 Test: blockdev reset ...passed 00:11:09.195 Test: blockdev write read 8 blocks ...passed 00:11:09.195 Test: blockdev write read size > 128k ...passed 00:11:09.195 Test: blockdev write read invalid size ...passed 00:11:09.195 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:09.195 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:09.195 Test: blockdev write 
read max offset ...passed 00:11:09.195 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:09.195 Test: blockdev writev readv 8 blocks ...passed 00:11:09.195 Test: blockdev writev readv 30 x 1block ...passed 00:11:09.195 Test: blockdev writev readv block ...passed 00:11:09.195 Test: blockdev writev readv size > 128k ...passed 00:11:09.195 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:09.195 Test: blockdev comparev and writev ...passed 00:11:09.195 Test: blockdev nvme passthru rw ...passed 00:11:09.195 Test: blockdev nvme passthru vendor specific ...passed 00:11:09.195 Test: blockdev nvme admin passthru ...passed 00:11:09.195 Test: blockdev copy ...passed 00:11:09.195 Suite: bdevio tests on: raid1 00:11:09.195 Test: blockdev write read block ...passed 00:11:09.195 Test: blockdev write zeroes read block ...passed 00:11:09.195 Test: blockdev write zeroes read no split ...passed 00:11:09.195 Test: blockdev write zeroes read split ...passed 00:11:09.195 Test: blockdev write zeroes read split partial ...passed 00:11:09.195 Test: blockdev reset ...passed 00:11:09.195 Test: blockdev write read 8 blocks ...passed 00:11:09.195 Test: blockdev write read size > 128k ...passed 00:11:09.195 Test: blockdev write read invalid size ...passed 00:11:09.195 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:09.195 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:09.195 Test: blockdev write read max offset ...passed 00:11:09.195 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:09.195 Test: blockdev writev readv 8 blocks ...passed 00:11:09.195 Test: blockdev writev readv 30 x 1block ...passed 00:11:09.195 Test: blockdev writev readv block ...passed 00:11:09.195 Test: blockdev writev readv size > 128k ...passed 00:11:09.195 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:09.195 Test: blockdev comparev and writev ...passed 00:11:09.195 
Test: blockdev nvme passthru rw ...passed 00:11:09.195 Test: blockdev nvme passthru vendor specific ...passed 00:11:09.195 Test: blockdev nvme admin passthru ...passed 00:11:09.195 Test: blockdev copy ...passed 00:11:09.195 Suite: bdevio tests on: concat0 00:11:09.195 Test: blockdev write read block ...passed 00:11:09.195 Test: blockdev write zeroes read block ...passed 00:11:09.195 Test: blockdev write zeroes read no split ...passed 00:11:09.454 Test: blockdev write zeroes read split ...passed 00:11:09.454 Test: blockdev write zeroes read split partial ...passed 00:11:09.454 Test: blockdev reset ...passed 00:11:09.454 Test: blockdev write read 8 blocks ...passed 00:11:09.454 Test: blockdev write read size > 128k ...passed 00:11:09.454 Test: blockdev write read invalid size ...passed 00:11:09.454 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:09.454 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:09.454 Test: blockdev write read max offset ...passed 00:11:09.454 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:09.454 Test: blockdev writev readv 8 blocks ...passed 00:11:09.454 Test: blockdev writev readv 30 x 1block ...passed 00:11:09.454 Test: blockdev writev readv block ...passed 00:11:09.454 Test: blockdev writev readv size > 128k ...passed 00:11:09.454 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:09.454 Test: blockdev comparev and writev ...passed 00:11:09.454 Test: blockdev nvme passthru rw ...passed 00:11:09.454 Test: blockdev nvme passthru vendor specific ...passed 00:11:09.454 Test: blockdev nvme admin passthru ...passed 00:11:09.454 Test: blockdev copy ...passed 00:11:09.454 Suite: bdevio tests on: raid0 00:11:09.454 Test: blockdev write read block ...passed 00:11:09.454 Test: blockdev write zeroes read block ...passed 00:11:09.454 Test: blockdev write zeroes read no split ...passed 00:11:09.454 Test: blockdev write zeroes read split ...passed 
00:11:09.454 Test: blockdev write zeroes read split partial ...passed 00:11:09.454 Test: blockdev reset ...passed 00:11:09.454 Test: blockdev write read 8 blocks ...passed 00:11:09.454 Test: blockdev write read size > 128k ...passed 00:11:09.454 Test: blockdev write read invalid size ...passed 00:11:09.454 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:09.454 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:09.454 Test: blockdev write read max offset ...passed 00:11:09.454 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:09.454 Test: blockdev writev readv 8 blocks ...passed 00:11:09.454 Test: blockdev writev readv 30 x 1block ...passed 00:11:09.454 Test: blockdev writev readv block ...passed 00:11:09.454 Test: blockdev writev readv size > 128k ...passed 00:11:09.454 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:09.454 Test: blockdev comparev and writev ...passed 00:11:09.454 Test: blockdev nvme passthru rw ...passed 00:11:09.454 Test: blockdev nvme passthru vendor specific ...passed 00:11:09.454 Test: blockdev nvme admin passthru ...passed 00:11:09.454 Test: blockdev copy ...passed 00:11:09.454 Suite: bdevio tests on: TestPT 00:11:09.454 Test: blockdev write read block ...passed 00:11:09.454 Test: blockdev write zeroes read block ...passed 00:11:09.454 Test: blockdev write zeroes read no split ...passed 00:11:09.454 Test: blockdev write zeroes read split ...passed 00:11:09.454 Test: blockdev write zeroes read split partial ...passed 00:11:09.454 Test: blockdev reset ...passed 00:11:09.454 Test: blockdev write read 8 blocks ...passed 00:11:09.454 Test: blockdev write read size > 128k ...passed 00:11:09.454 Test: blockdev write read invalid size ...passed 00:11:09.454 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:09.454 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:09.454 Test: blockdev write 
read max offset ...passed 00:11:09.454 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:09.454 Test: blockdev writev readv 8 blocks ...passed 00:11:09.454 Test: blockdev writev readv 30 x 1block ...passed 00:11:09.454 Test: blockdev writev readv block ...passed 00:11:09.454 Test: blockdev writev readv size > 128k ...passed 00:11:09.454 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:09.454 Test: blockdev comparev and writev ...passed 00:11:09.454 Test: blockdev nvme passthru rw ...passed 00:11:09.454 Test: blockdev nvme passthru vendor specific ...passed 00:11:09.454 Test: blockdev nvme admin passthru ...passed 00:11:09.454 Test: blockdev copy ...passed 00:11:09.454 Suite: bdevio tests on: Malloc2p7 00:11:09.454 Test: blockdev write read block ...passed 00:11:09.454 Test: blockdev write zeroes read block ...passed 00:11:09.454 Test: blockdev write zeroes read no split ...passed 00:11:09.713 Test: blockdev write zeroes read split ...passed 00:11:09.713 Test: blockdev write zeroes read split partial ...passed 00:11:09.713 Test: blockdev reset ...passed 00:11:09.713 Test: blockdev write read 8 blocks ...passed 00:11:09.713 Test: blockdev write read size > 128k ...passed 00:11:09.713 Test: blockdev write read invalid size ...passed 00:11:09.713 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:09.713 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:09.713 Test: blockdev write read max offset ...passed 00:11:09.713 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:09.713 Test: blockdev writev readv 8 blocks ...passed 00:11:09.713 Test: blockdev writev readv 30 x 1block ...passed 00:11:09.713 Test: blockdev writev readv block ...passed 00:11:09.713 Test: blockdev writev readv size > 128k ...passed 00:11:09.713 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:09.713 Test: blockdev comparev and writev ...passed 
00:11:09.713 Test: blockdev nvme passthru rw ...passed 00:11:09.713 Test: blockdev nvme passthru vendor specific ...passed 00:11:09.713 Test: blockdev nvme admin passthru ...passed 00:11:09.713 Test: blockdev copy ...passed 00:11:09.713 Suite: bdevio tests on: Malloc2p6 00:11:09.713 Test: blockdev write read block ...passed 00:11:09.713 Test: blockdev write zeroes read block ...passed 00:11:09.713 Test: blockdev write zeroes read no split ...passed 00:11:09.713 Test: blockdev write zeroes read split ...passed 00:11:09.713 Test: blockdev write zeroes read split partial ...passed 00:11:09.713 Test: blockdev reset ...passed 00:11:09.713 Test: blockdev write read 8 blocks ...passed 00:11:09.713 Test: blockdev write read size > 128k ...passed 00:11:09.713 Test: blockdev write read invalid size ...passed 00:11:09.713 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:09.713 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:09.713 Test: blockdev write read max offset ...passed 00:11:09.713 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:09.713 Test: blockdev writev readv 8 blocks ...passed 00:11:09.713 Test: blockdev writev readv 30 x 1block ...passed 00:11:09.713 Test: blockdev writev readv block ...passed 00:11:09.713 Test: blockdev writev readv size > 128k ...passed 00:11:09.713 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:09.713 Test: blockdev comparev and writev ...passed 00:11:09.713 Test: blockdev nvme passthru rw ...passed 00:11:09.713 Test: blockdev nvme passthru vendor specific ...passed 00:11:09.713 Test: blockdev nvme admin passthru ...passed 00:11:09.713 Test: blockdev copy ...passed 00:11:09.713 Suite: bdevio tests on: Malloc2p5 00:11:09.713 Test: blockdev write read block ...passed 00:11:09.713 Test: blockdev write zeroes read block ...passed 00:11:09.713 Test: blockdev write zeroes read no split ...passed 00:11:09.713 Test: blockdev write zeroes 
read split ...passed 00:11:09.713 Test: blockdev write zeroes read split partial ...passed 00:11:09.713 Test: blockdev reset ...passed 00:11:09.713 Test: blockdev write read 8 blocks ...passed 00:11:09.713 Test: blockdev write read size > 128k ...passed 00:11:09.713 Test: blockdev write read invalid size ...passed 00:11:09.713 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:09.713 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:09.713 Test: blockdev write read max offset ...passed 00:11:09.713 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:09.713 Test: blockdev writev readv 8 blocks ...passed 00:11:09.713 Test: blockdev writev readv 30 x 1block ...passed 00:11:09.713 Test: blockdev writev readv block ...passed 00:11:09.713 Test: blockdev writev readv size > 128k ...passed 00:11:09.713 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:09.713 Test: blockdev comparev and writev ...passed 00:11:09.713 Test: blockdev nvme passthru rw ...passed 00:11:09.713 Test: blockdev nvme passthru vendor specific ...passed 00:11:09.713 Test: blockdev nvme admin passthru ...passed 00:11:09.713 Test: blockdev copy ...passed 00:11:09.713 Suite: bdevio tests on: Malloc2p4 00:11:09.713 Test: blockdev write read block ...passed 00:11:09.713 Test: blockdev write zeroes read block ...passed 00:11:09.713 Test: blockdev write zeroes read no split ...passed 00:11:09.971 Test: blockdev write zeroes read split ...passed 00:11:09.971 Test: blockdev write zeroes read split partial ...passed 00:11:09.971 Test: blockdev reset ...passed 00:11:09.971 Test: blockdev write read 8 blocks ...passed 00:11:09.971 Test: blockdev write read size > 128k ...passed 00:11:09.971 Test: blockdev write read invalid size ...passed 00:11:09.971 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:09.971 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:09.971 
Test: blockdev write read max offset ...passed 00:11:09.971 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:09.971 Test: blockdev writev readv 8 blocks ...passed 00:11:09.971 Test: blockdev writev readv 30 x 1block ...passed 00:11:09.971 Test: blockdev writev readv block ...passed 00:11:09.971 Test: blockdev writev readv size > 128k ...passed 00:11:09.971 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:09.971 Test: blockdev comparev and writev ...passed 00:11:09.971 Test: blockdev nvme passthru rw ...passed 00:11:09.971 Test: blockdev nvme passthru vendor specific ...passed 00:11:09.971 Test: blockdev nvme admin passthru ...passed 00:11:09.971 Test: blockdev copy ...passed 00:11:09.971 Suite: bdevio tests on: Malloc2p3 00:11:09.971 Test: blockdev write read block ...passed 00:11:09.971 Test: blockdev write zeroes read block ...passed 00:11:09.971 Test: blockdev write zeroes read no split ...passed 00:11:09.971 Test: blockdev write zeroes read split ...passed 00:11:09.971 Test: blockdev write zeroes read split partial ...passed 00:11:09.971 Test: blockdev reset ...passed 00:11:09.971 Test: blockdev write read 8 blocks ...passed 00:11:09.971 Test: blockdev write read size > 128k ...passed 00:11:09.972 Test: blockdev write read invalid size ...passed 00:11:09.972 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:09.972 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:09.972 Test: blockdev write read max offset ...passed 00:11:09.972 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:09.972 Test: blockdev writev readv 8 blocks ...passed 00:11:09.972 Test: blockdev writev readv 30 x 1block ...passed 00:11:09.972 Test: blockdev writev readv block ...passed 00:11:09.972 Test: blockdev writev readv size > 128k ...passed 00:11:09.972 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:09.972 Test: blockdev comparev and 
writev ...passed 00:11:09.972 Test: blockdev nvme passthru rw ...passed 00:11:09.972 Test: blockdev nvme passthru vendor specific ...passed 00:11:09.972 Test: blockdev nvme admin passthru ...passed 00:11:09.972 Test: blockdev copy ...passed 00:11:09.972 Suite: bdevio tests on: Malloc2p2 00:11:09.972 Test: blockdev write read block ...passed 00:11:09.972 Test: blockdev write zeroes read block ...passed 00:11:09.972 Test: blockdev write zeroes read no split ...passed 00:11:09.972 Test: blockdev write zeroes read split ...passed 00:11:09.972 Test: blockdev write zeroes read split partial ...passed 00:11:09.972 Test: blockdev reset ...passed 00:11:09.972 Test: blockdev write read 8 blocks ...passed 00:11:09.972 Test: blockdev write read size > 128k ...passed 00:11:09.972 Test: blockdev write read invalid size ...passed 00:11:09.972 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:09.972 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:09.972 Test: blockdev write read max offset ...passed 00:11:09.972 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:09.972 Test: blockdev writev readv 8 blocks ...passed 00:11:09.972 Test: blockdev writev readv 30 x 1block ...passed 00:11:09.972 Test: blockdev writev readv block ...passed 00:11:09.972 Test: blockdev writev readv size > 128k ...passed 00:11:09.972 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:09.972 Test: blockdev comparev and writev ...passed 00:11:09.972 Test: blockdev nvme passthru rw ...passed 00:11:09.972 Test: blockdev nvme passthru vendor specific ...passed 00:11:09.972 Test: blockdev nvme admin passthru ...passed 00:11:09.972 Test: blockdev copy ...passed 00:11:09.972 Suite: bdevio tests on: Malloc2p1 00:11:09.972 Test: blockdev write read block ...passed 00:11:09.972 Test: blockdev write zeroes read block ...passed 00:11:09.972 Test: blockdev write zeroes read no split ...passed 00:11:09.972 Test: blockdev 
write zeroes read split ...passed 00:11:10.230 Test: blockdev write zeroes read split partial ...passed 00:11:10.230 Test: blockdev reset ...passed 00:11:10.230 Test: blockdev write read 8 blocks ...passed 00:11:10.230 Test: blockdev write read size > 128k ...passed 00:11:10.230 Test: blockdev write read invalid size ...passed 00:11:10.230 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:10.230 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:10.230 Test: blockdev write read max offset ...passed 00:11:10.230 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:10.230 Test: blockdev writev readv 8 blocks ...passed 00:11:10.230 Test: blockdev writev readv 30 x 1block ...passed 00:11:10.230 Test: blockdev writev readv block ...passed 00:11:10.230 Test: blockdev writev readv size > 128k ...passed 00:11:10.230 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:10.230 Test: blockdev comparev and writev ...passed 00:11:10.230 Test: blockdev nvme passthru rw ...passed 00:11:10.230 Test: blockdev nvme passthru vendor specific ...passed 00:11:10.230 Test: blockdev nvme admin passthru ...passed 00:11:10.230 Test: blockdev copy ...passed 00:11:10.230 Suite: bdevio tests on: Malloc2p0 00:11:10.230 Test: blockdev write read block ...passed 00:11:10.230 Test: blockdev write zeroes read block ...passed 00:11:10.230 Test: blockdev write zeroes read no split ...passed 00:11:10.230 Test: blockdev write zeroes read split ...passed 00:11:10.230 Test: blockdev write zeroes read split partial ...passed 00:11:10.230 Test: blockdev reset ...passed 00:11:10.230 Test: blockdev write read 8 blocks ...passed 00:11:10.230 Test: blockdev write read size > 128k ...passed 00:11:10.230 Test: blockdev write read invalid size ...passed 00:11:10.230 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:10.230 Test: blockdev write read offset + nbytes > size of blockdev ...passed 
00:11:10.230 Test: blockdev write read max offset ...passed 00:11:10.230 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:10.230 Test: blockdev writev readv 8 blocks ...passed 00:11:10.230 Test: blockdev writev readv 30 x 1block ...passed 00:11:10.230 Test: blockdev writev readv block ...passed 00:11:10.230 Test: blockdev writev readv size > 128k ...passed 00:11:10.230 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:10.230 Test: blockdev comparev and writev ...passed 00:11:10.230 Test: blockdev nvme passthru rw ...passed 00:11:10.230 Test: blockdev nvme passthru vendor specific ...passed 00:11:10.230 Test: blockdev nvme admin passthru ...passed 00:11:10.230 Test: blockdev copy ...passed 00:11:10.230 Suite: bdevio tests on: Malloc1p1 00:11:10.230 Test: blockdev write read block ...passed 00:11:10.230 Test: blockdev write zeroes read block ...passed 00:11:10.230 Test: blockdev write zeroes read no split ...passed 00:11:10.230 Test: blockdev write zeroes read split ...passed 00:11:10.230 Test: blockdev write zeroes read split partial ...passed 00:11:10.230 Test: blockdev reset ...passed 00:11:10.230 Test: blockdev write read 8 blocks ...passed 00:11:10.230 Test: blockdev write read size > 128k ...passed 00:11:10.230 Test: blockdev write read invalid size ...passed 00:11:10.230 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:10.230 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:10.230 Test: blockdev write read max offset ...passed 00:11:10.230 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:10.230 Test: blockdev writev readv 8 blocks ...passed 00:11:10.230 Test: blockdev writev readv 30 x 1block ...passed 00:11:10.230 Test: blockdev writev readv block ...passed 00:11:10.230 Test: blockdev writev readv size > 128k ...passed 00:11:10.231 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:10.231 Test: blockdev 
comparev and writev ...passed 00:11:10.231 Test: blockdev nvme passthru rw ...passed 00:11:10.231 Test: blockdev nvme passthru vendor specific ...passed 00:11:10.231 Test: blockdev nvme admin passthru ...passed 00:11:10.231 Test: blockdev copy ...passed 00:11:10.231 Suite: bdevio tests on: Malloc1p0 00:11:10.231 Test: blockdev write read block ...passed 00:11:10.231 Test: blockdev write zeroes read block ...passed 00:11:10.231 Test: blockdev write zeroes read no split ...passed 00:11:10.231 Test: blockdev write zeroes read split ...passed 00:11:10.489 Test: blockdev write zeroes read split partial ...passed 00:11:10.489 Test: blockdev reset ...passed 00:11:10.489 Test: blockdev write read 8 blocks ...passed 00:11:10.489 Test: blockdev write read size > 128k ...passed 00:11:10.489 Test: blockdev write read invalid size ...passed 00:11:10.489 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:10.489 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:10.489 Test: blockdev write read max offset ...passed 00:11:10.489 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:10.489 Test: blockdev writev readv 8 blocks ...passed 00:11:10.489 Test: blockdev writev readv 30 x 1block ...passed 00:11:10.489 Test: blockdev writev readv block ...passed 00:11:10.489 Test: blockdev writev readv size > 128k ...passed 00:11:10.489 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:10.489 Test: blockdev comparev and writev ...passed 00:11:10.489 Test: blockdev nvme passthru rw ...passed 00:11:10.489 Test: blockdev nvme passthru vendor specific ...passed 00:11:10.489 Test: blockdev nvme admin passthru ...passed 00:11:10.489 Test: blockdev copy ...passed 00:11:10.489 Suite: bdevio tests on: Malloc0 00:11:10.489 Test: blockdev write read block ...passed 00:11:10.489 Test: blockdev write zeroes read block ...passed 00:11:10.489 Test: blockdev write zeroes read no split ...passed 00:11:10.489 
Test: blockdev write zeroes read split ...passed 00:11:10.489 Test: blockdev write zeroes read split partial ...passed 00:11:10.489 Test: blockdev reset ...passed 00:11:10.489 Test: blockdev write read 8 blocks ...passed 00:11:10.489 Test: blockdev write read size > 128k ...passed 00:11:10.489 Test: blockdev write read invalid size ...passed 00:11:10.489 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:10.489 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:10.489 Test: blockdev write read max offset ...passed 00:11:10.489 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:10.489 Test: blockdev writev readv 8 blocks ...passed 00:11:10.489 Test: blockdev writev readv 30 x 1block ...passed 00:11:10.489 Test: blockdev writev readv block ...passed 00:11:10.489 Test: blockdev writev readv size > 128k ...passed 00:11:10.489 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:10.489 Test: blockdev comparev and writev ...passed 00:11:10.489 Test: blockdev nvme passthru rw ...passed 00:11:10.489 Test: blockdev nvme passthru vendor specific ...passed 00:11:10.489 Test: blockdev nvme admin passthru ...passed 00:11:10.489 Test: blockdev copy ...passed 00:11:10.489 00:11:10.489 Run Summary: Type Total Ran Passed Failed Inactive 00:11:10.489 suites 16 16 n/a 0 0 00:11:10.489 tests 368 368 368 0 0 00:11:10.489 asserts 2224 2224 2224 0 n/a 00:11:10.489 00:11:10.489 Elapsed time = 3.848 seconds 00:11:10.489 0 00:11:10.489 10:53:17 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 3515161 00:11:10.490 10:53:17 blockdev_general.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 3515161 ']' 00:11:10.490 10:53:17 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 3515161 00:11:10.490 10:53:17 blockdev_general.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:11:10.490 10:53:17 blockdev_general.bdev_bounds -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:10.490 10:53:17 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3515161 00:11:10.490 10:53:17 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:10.490 10:53:17 blockdev_general.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:10.490 10:53:17 blockdev_general.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3515161' 00:11:10.490 killing process with pid 3515161 00:11:10.490 10:53:17 blockdev_general.bdev_bounds -- common/autotest_common.sh@969 -- # kill 3515161 00:11:10.490 10:53:17 blockdev_general.bdev_bounds -- common/autotest_common.sh@974 -- # wait 3515161 00:11:13.017 10:53:20 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:11:13.017 00:11:13.017 real 0m5.951s 00:11:13.017 user 0m15.212s 00:11:13.017 sys 0m0.664s 00:11:13.017 10:53:20 blockdev_general.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:13.017 10:53:20 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:13.017 ************************************ 00:11:13.017 END TEST bdev_bounds 00:11:13.017 ************************************ 00:11:13.276 10:53:20 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:11:13.276 10:53:20 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:13.276 10:53:20 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:13.276 10:53:20 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:13.276 ************************************ 00:11:13.276 START TEST bdev_nbd 
00:11:13.276 ************************************ 00:11:13.276 10:53:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:11:13.276 10:53:20 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:11:13.276 10:53:20 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:11:13.276 10:53:20 blockdev_general.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:13.276 10:53:20 blockdev_general.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:11:13.276 10:53:20 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:13.276 10:53:20 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:11:13.276 10:53:20 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=16 00:11:13.276 10:53:20 blockdev_general.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:11:13.276 10:53:20 blockdev_general.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:13.276 10:53:20 blockdev_general.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:11:13.276 10:53:20 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=16 00:11:13.276 10:53:20 blockdev_general.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' 
'/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:13.276 10:53:20 blockdev_general.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:11:13.276 10:53:20 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:13.276 10:53:20 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:11:13.276 10:53:20 blockdev_general.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=3516238 00:11:13.276 10:53:20 blockdev_general.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:11:13.276 10:53:20 blockdev_general.bdev_nbd -- bdev/blockdev.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:11:13.276 10:53:20 blockdev_general.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 3516238 /var/tmp/spdk-nbd.sock 00:11:13.276 10:53:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 3516238 ']' 00:11:13.276 10:53:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:13.276 10:53:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:13.276 10:53:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:13.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:11:13.276 10:53:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:13.276 10:53:20 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:13.276 [2024-07-25 10:53:20.299178] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:13.276 [2024-07-25 10:53:20.299265] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:13.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:13.535 EAL: Requested device 0000:3d:01.0 cannot be used 00:11:13.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:13.535 EAL: Requested device 0000:3d:01.1 cannot be used 00:11:13.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:13.535 EAL: Requested device 0000:3d:01.2 cannot be used 00:11:13.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:13.535 EAL: Requested device 0000:3d:01.3 cannot be used 00:11:13.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:13.535 EAL: Requested device 0000:3d:01.4 cannot be used 00:11:13.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:13.535 EAL: Requested device 0000:3d:01.5 cannot be used 00:11:13.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:13.535 EAL: Requested device 0000:3d:01.6 cannot be used 00:11:13.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:13.535 EAL: Requested device 0000:3d:01.7 cannot be used 00:11:13.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:13.535 EAL: Requested device 0000:3d:02.0 cannot be used 00:11:13.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:13.535 EAL: Requested device 0000:3d:02.1 cannot be 
used 00:11:13.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:13.535 EAL: Requested device 0000:3d:02.2 cannot be used 00:11:13.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:13.535 EAL: Requested device 0000:3d:02.3 cannot be used 00:11:13.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:13.535 EAL: Requested device 0000:3d:02.4 cannot be used 00:11:13.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:13.535 EAL: Requested device 0000:3d:02.5 cannot be used 00:11:13.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:13.535 EAL: Requested device 0000:3d:02.6 cannot be used 00:11:13.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:13.535 EAL: Requested device 0000:3d:02.7 cannot be used 00:11:13.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:13.535 EAL: Requested device 0000:3f:01.0 cannot be used 00:11:13.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:13.535 EAL: Requested device 0000:3f:01.1 cannot be used 00:11:13.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:13.535 EAL: Requested device 0000:3f:01.2 cannot be used 00:11:13.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:13.535 EAL: Requested device 0000:3f:01.3 cannot be used 00:11:13.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:13.535 EAL: Requested device 0000:3f:01.4 cannot be used 00:11:13.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:13.535 EAL: Requested device 0000:3f:01.5 cannot be used 00:11:13.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:13.535 EAL: Requested device 0000:3f:01.6 cannot be used 00:11:13.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:13.535 EAL: Requested device 0000:3f:01.7 cannot be used 00:11:13.535 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:13.535 EAL: Requested device 0000:3f:02.0 cannot be used 00:11:13.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:13.535 EAL: Requested device 0000:3f:02.1 cannot be used 00:11:13.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:13.535 EAL: Requested device 0000:3f:02.2 cannot be used 00:11:13.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:13.535 EAL: Requested device 0000:3f:02.3 cannot be used 00:11:13.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:13.535 EAL: Requested device 0000:3f:02.4 cannot be used 00:11:13.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:13.535 EAL: Requested device 0000:3f:02.5 cannot be used 00:11:13.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:13.535 EAL: Requested device 0000:3f:02.6 cannot be used 00:11:13.535 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:13.535 EAL: Requested device 0000:3f:02.7 cannot be used 00:11:13.535 [2024-07-25 10:53:20.499691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.793 [2024-07-25 10:53:20.787728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.360 [2024-07-25 10:53:21.374851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:14.360 [2024-07-25 10:53:21.374926] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:14.360 [2024-07-25 10:53:21.374946] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:14.360 [2024-07-25 10:53:21.382805] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:14.360 [2024-07-25 10:53:21.382846] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:14.360 
[2024-07-25 10:53:21.390812] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:14.360 [2024-07-25 10:53:21.390850] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:14.617 [2024-07-25 10:53:21.649503] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:14.617 [2024-07-25 10:53:21.649570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.617 [2024-07-25 10:53:21.649591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042980 00:11:14.617 [2024-07-25 10:53:21.649606] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.617 [2024-07-25 10:53:21.652435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.617 [2024-07-25 10:53:21.652472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:15.184 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:15.184 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:11:15.184 10:53:22 blockdev_general.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:11:15.184 10:53:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:15.184 10:53:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:15.184 10:53:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:11:15.184 10:53:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@116 -- # 
nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:11:15.184 10:53:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:15.184 10:53:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:15.184 10:53:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:11:15.184 10:53:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:11:15.184 10:53:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:11:15.184 10:53:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:11:15.184 10:53:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:15.184 10:53:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:11:15.444 10:53:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:11:15.444 10:53:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:11:15.444 10:53:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:11:15.444 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:15.444 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:15.444 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:15.444 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:15.444 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # 
grep -q -w nbd0 /proc/partitions 00:11:15.444 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:15.444 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:15.444 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:15.444 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:15.444 1+0 records in 00:11:15.444 1+0 records out 00:11:15.444 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264199 s, 15.5 MB/s 00:11:15.444 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:15.444 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:15.444 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:15.444 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:15.444 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:15.444 10:53:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:15.444 10:53:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:15.444 10:53:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:11:15.771 10:53:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:11:15.771 10:53:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:11:15.771 10:53:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:11:15.771 10:53:22 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:15.771 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:15.771 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:15.772 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:15.772 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:15.772 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:15.772 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:15.772 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:15.772 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:15.772 1+0 records in 00:11:15.772 1+0 records out 00:11:15.772 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328265 s, 12.5 MB/s 00:11:15.772 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:15.772 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:15.772 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:15.772 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:15.772 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:15.772 10:53:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:15.772 10:53:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:15.772 10:53:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:11:16.030 10:53:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:11:16.030 10:53:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:11:16.030 10:53:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:11:16.030 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:11:16.030 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:16.030 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:16.030 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:16.030 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:11:16.030 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:16.030 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:16.030 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:16.031 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:16.031 1+0 records in 00:11:16.031 1+0 records out 00:11:16.031 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335407 s, 12.2 MB/s 00:11:16.031 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:16.031 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:16.031 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:16.031 10:53:22 
blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:16.031 10:53:22 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:16.031 10:53:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:16.031 10:53:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:16.031 10:53:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:11:16.290 10:53:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:11:16.290 10:53:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:11:16.290 10:53:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:11:16.290 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:11:16.290 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:16.290 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:16.290 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:16.290 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:11:16.290 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:16.290 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:16.290 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:16.290 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:16.290 1+0 records in 00:11:16.290 1+0 records out 00:11:16.290 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322162 s, 12.7 MB/s 00:11:16.290 
10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:16.290 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:16.290 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:16.290 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:16.290 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:16.290 10:53:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:16.290 10:53:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:16.290 10:53:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:11:16.548 10:53:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:11:16.548 10:53:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:11:16.548 10:53:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:11:16.548 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:11:16.548 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:16.548 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:16.548 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:16.548 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:11:16.548 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:16.548 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:16.548 10:53:23 
blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:16.548 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:16.548 1+0 records in 00:11:16.548 1+0 records out 00:11:16.548 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346624 s, 11.8 MB/s 00:11:16.548 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:16.548 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:16.548 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:16.548 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:16.548 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:16.548 10:53:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:16.548 10:53:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:16.548 10:53:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:11:16.806 10:53:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:11:16.806 10:53:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:11:16.806 10:53:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:11:16.806 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:11:16.806 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:16.806 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:16.806 
10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:16.806 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:11:16.806 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:16.806 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:16.806 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:16.806 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:16.806 1+0 records in 00:11:16.806 1+0 records out 00:11:16.806 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042507 s, 9.6 MB/s 00:11:16.806 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:16.806 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:16.806 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:16.806 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:16.806 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:16.806 10:53:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:16.806 10:53:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:16.806 10:53:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:11:17.065 10:53:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:11:17.065 10:53:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
basename /dev/nbd6 00:11:17.065 10:53:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:11:17.065 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:11:17.065 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:17.065 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:17.065 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:17.065 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:11:17.065 10:53:23 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:17.065 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:17.065 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:17.065 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:17.065 1+0 records in 00:11:17.065 1+0 records out 00:11:17.065 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00053079 s, 7.7 MB/s 00:11:17.065 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:17.065 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:17.065 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:17.065 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:17.065 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:17.065 10:53:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:17.065 10:53:24 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:17.065 10:53:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:11:17.324 10:53:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:11:17.324 10:53:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:11:17.324 10:53:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:11:17.324 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd7 00:11:17.324 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:17.324 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:17.324 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:17.324 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd7 /proc/partitions 00:11:17.324 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:17.324 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:17.324 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:17.324 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd7 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:17.324 1+0 records in 00:11:17.324 1+0 records out 00:11:17.324 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000536456 s, 7.6 MB/s 00:11:17.324 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:17.324 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:17.324 10:53:24 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:17.324 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:17.324 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:17.324 10:53:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:17.324 10:53:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:17.324 10:53:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:11:17.582 10:53:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:11:17.583 10:53:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:11:17.583 10:53:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:11:17.583 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd8 00:11:17.583 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:17.583 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:17.583 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:17.583 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd8 /proc/partitions 00:11:17.583 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:17.583 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:17.583 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:17.583 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd8 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:17.583 1+0 
records in 00:11:17.583 1+0 records out 00:11:17.583 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452909 s, 9.0 MB/s 00:11:17.583 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:17.583 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:17.583 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:17.583 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:17.583 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:17.583 10:53:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:17.583 10:53:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:17.583 10:53:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:11:17.842 10:53:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:11:17.842 10:53:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:11:17.842 10:53:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:11:17.842 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd9 00:11:17.842 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:17.842 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:17.842 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:17.842 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd9 /proc/partitions 00:11:17.842 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # 
break 00:11:17.842 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:17.842 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:17.842 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd9 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:17.842 1+0 records in 00:11:17.842 1+0 records out 00:11:17.842 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346982 s, 11.8 MB/s 00:11:17.842 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:17.842 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:17.842 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:17.842 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:17.842 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:17.842 10:53:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:17.842 10:53:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:17.842 10:53:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:11:18.101 10:53:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:11:18.101 10:53:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:11:18.101 10:53:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:11:18.101 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:11:18.101 10:53:24 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@869 -- # local i 00:11:18.101 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:18.101 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:18.101 10:53:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:11:18.101 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:18.101 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:18.101 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:18.101 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:18.101 1+0 records in 00:11:18.101 1+0 records out 00:11:18.101 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000536256 s, 7.6 MB/s 00:11:18.101 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:18.101 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:18.101 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:18.101 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:18.101 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:18.101 10:53:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:18.101 10:53:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:18.101 10:53:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:11:18.360 10:53:25 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:11:18.360 10:53:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:11:18.360 10:53:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:11:18.360 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:11:18.360 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:18.360 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:18.360 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:18.360 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:11:18.360 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:18.360 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:18.360 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:18.360 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:18.360 1+0 records in 00:11:18.360 1+0 records out 00:11:18.360 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000512869 s, 8.0 MB/s 00:11:18.360 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:18.360 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:18.360 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:18.360 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:18.360 10:53:25 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@889 -- # return 0 00:11:18.360 10:53:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:18.360 10:53:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:18.360 10:53:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:11:18.618 10:53:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:11:18.618 10:53:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:11:18.618 10:53:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:11:18.618 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:11:18.618 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:18.618 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:18.618 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:18.618 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:11:18.618 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:18.618 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:18.619 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:18.619 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:18.619 1+0 records in 00:11:18.619 1+0 records out 00:11:18.619 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000736795 s, 5.6 MB/s 00:11:18.619 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:18.619 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:18.619 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:18.619 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:18.619 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:18.619 10:53:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:18.619 10:53:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:18.619 10:53:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:11:18.876 10:53:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:11:18.877 10:53:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:11:18.877 10:53:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:11:18.877 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:11:18.877 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:18.877 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:18.877 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:18.877 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:11:18.877 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:18.877 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:18.877 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 
00:11:18.877 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:18.877 1+0 records in 00:11:18.877 1+0 records out 00:11:18.877 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000732777 s, 5.6 MB/s 00:11:18.877 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:18.877 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:18.877 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:18.877 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:18.877 10:53:25 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:18.877 10:53:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:18.877 10:53:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:18.877 10:53:25 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:11:19.134 10:53:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:11:19.134 10:53:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:11:19.134 10:53:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:11:19.134 10:53:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:11:19.134 10:53:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:19.134 10:53:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:19.134 10:53:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i 
<= 20 )) 00:11:19.134 10:53:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:11:19.135 10:53:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:19.135 10:53:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:19.135 10:53:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:19.135 10:53:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:19.135 1+0 records in 00:11:19.135 1+0 records out 00:11:19.135 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000674908 s, 6.1 MB/s 00:11:19.135 10:53:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:19.135 10:53:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:19.135 10:53:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:19.135 10:53:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:19.135 10:53:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:19.135 10:53:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:19.135 10:53:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:19.135 10:53:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:11:19.392 10:53:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:11:19.392 10:53:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:11:19.392 10:53:26 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:11:19.392 10:53:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd15 00:11:19.392 10:53:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:19.392 10:53:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:19.392 10:53:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:19.392 10:53:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd15 /proc/partitions 00:11:19.392 10:53:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:19.392 10:53:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:19.392 10:53:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:19.392 10:53:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd15 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:19.392 1+0 records in 00:11:19.392 1+0 records out 00:11:19.392 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000578186 s, 7.1 MB/s 00:11:19.392 10:53:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:19.392 10:53:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:19.392 10:53:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:19.392 10:53:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:19.392 10:53:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:19.392 10:53:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:19.392 10:53:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:19.392 
10:53:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:19.651 10:53:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:11:19.651 { 00:11:19.651 "nbd_device": "/dev/nbd0", 00:11:19.651 "bdev_name": "Malloc0" 00:11:19.651 }, 00:11:19.651 { 00:11:19.651 "nbd_device": "/dev/nbd1", 00:11:19.651 "bdev_name": "Malloc1p0" 00:11:19.651 }, 00:11:19.651 { 00:11:19.651 "nbd_device": "/dev/nbd2", 00:11:19.651 "bdev_name": "Malloc1p1" 00:11:19.651 }, 00:11:19.651 { 00:11:19.651 "nbd_device": "/dev/nbd3", 00:11:19.651 "bdev_name": "Malloc2p0" 00:11:19.651 }, 00:11:19.651 { 00:11:19.651 "nbd_device": "/dev/nbd4", 00:11:19.651 "bdev_name": "Malloc2p1" 00:11:19.651 }, 00:11:19.651 { 00:11:19.651 "nbd_device": "/dev/nbd5", 00:11:19.651 "bdev_name": "Malloc2p2" 00:11:19.651 }, 00:11:19.651 { 00:11:19.651 "nbd_device": "/dev/nbd6", 00:11:19.651 "bdev_name": "Malloc2p3" 00:11:19.651 }, 00:11:19.651 { 00:11:19.651 "nbd_device": "/dev/nbd7", 00:11:19.651 "bdev_name": "Malloc2p4" 00:11:19.651 }, 00:11:19.651 { 00:11:19.651 "nbd_device": "/dev/nbd8", 00:11:19.651 "bdev_name": "Malloc2p5" 00:11:19.651 }, 00:11:19.651 { 00:11:19.651 "nbd_device": "/dev/nbd9", 00:11:19.651 "bdev_name": "Malloc2p6" 00:11:19.651 }, 00:11:19.651 { 00:11:19.651 "nbd_device": "/dev/nbd10", 00:11:19.651 "bdev_name": "Malloc2p7" 00:11:19.651 }, 00:11:19.651 { 00:11:19.651 "nbd_device": "/dev/nbd11", 00:11:19.651 "bdev_name": "TestPT" 00:11:19.651 }, 00:11:19.651 { 00:11:19.651 "nbd_device": "/dev/nbd12", 00:11:19.651 "bdev_name": "raid0" 00:11:19.651 }, 00:11:19.651 { 00:11:19.651 "nbd_device": "/dev/nbd13", 00:11:19.651 "bdev_name": "concat0" 00:11:19.651 }, 00:11:19.651 { 00:11:19.651 "nbd_device": "/dev/nbd14", 00:11:19.651 "bdev_name": "raid1" 00:11:19.651 }, 00:11:19.651 { 00:11:19.651 "nbd_device": "/dev/nbd15", 00:11:19.651 "bdev_name": "AIO0" 00:11:19.651 } 
00:11:19.651 ]' 00:11:19.651 10:53:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:11:19.651 10:53:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:11:19.651 { 00:11:19.651 "nbd_device": "/dev/nbd0", 00:11:19.651 "bdev_name": "Malloc0" 00:11:19.651 }, 00:11:19.651 { 00:11:19.651 "nbd_device": "/dev/nbd1", 00:11:19.651 "bdev_name": "Malloc1p0" 00:11:19.651 }, 00:11:19.651 { 00:11:19.651 "nbd_device": "/dev/nbd2", 00:11:19.651 "bdev_name": "Malloc1p1" 00:11:19.651 }, 00:11:19.651 { 00:11:19.651 "nbd_device": "/dev/nbd3", 00:11:19.651 "bdev_name": "Malloc2p0" 00:11:19.651 }, 00:11:19.651 { 00:11:19.651 "nbd_device": "/dev/nbd4", 00:11:19.651 "bdev_name": "Malloc2p1" 00:11:19.651 }, 00:11:19.651 { 00:11:19.651 "nbd_device": "/dev/nbd5", 00:11:19.651 "bdev_name": "Malloc2p2" 00:11:19.651 }, 00:11:19.651 { 00:11:19.651 "nbd_device": "/dev/nbd6", 00:11:19.651 "bdev_name": "Malloc2p3" 00:11:19.651 }, 00:11:19.651 { 00:11:19.651 "nbd_device": "/dev/nbd7", 00:11:19.651 "bdev_name": "Malloc2p4" 00:11:19.651 }, 00:11:19.651 { 00:11:19.651 "nbd_device": "/dev/nbd8", 00:11:19.651 "bdev_name": "Malloc2p5" 00:11:19.651 }, 00:11:19.651 { 00:11:19.651 "nbd_device": "/dev/nbd9", 00:11:19.651 "bdev_name": "Malloc2p6" 00:11:19.651 }, 00:11:19.651 { 00:11:19.651 "nbd_device": "/dev/nbd10", 00:11:19.651 "bdev_name": "Malloc2p7" 00:11:19.651 }, 00:11:19.651 { 00:11:19.651 "nbd_device": "/dev/nbd11", 00:11:19.651 "bdev_name": "TestPT" 00:11:19.651 }, 00:11:19.651 { 00:11:19.651 "nbd_device": "/dev/nbd12", 00:11:19.651 "bdev_name": "raid0" 00:11:19.651 }, 00:11:19.651 { 00:11:19.651 "nbd_device": "/dev/nbd13", 00:11:19.651 "bdev_name": "concat0" 00:11:19.651 }, 00:11:19.651 { 00:11:19.651 "nbd_device": "/dev/nbd14", 00:11:19.651 "bdev_name": "raid1" 00:11:19.651 }, 00:11:19.651 { 00:11:19.651 "nbd_device": "/dev/nbd15", 00:11:19.651 "bdev_name": "AIO0" 00:11:19.651 } 00:11:19.651 ]' 
00:11:19.651 10:53:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:11:19.651 10:53:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:11:19.651 10:53:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:19.651 10:53:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:11:19.651 10:53:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:19.651 10:53:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:19.651 10:53:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:19.651 10:53:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:19.910 10:53:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:19.910 10:53:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:19.910 10:53:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:19.910 10:53:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:19.910 10:53:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:19.910 10:53:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:19.910 10:53:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:19.910 10:53:26 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:11:19.910 10:53:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:19.910 10:53:26 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:20.169 10:53:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:20.169 10:53:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:20.169 10:53:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:20.169 10:53:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:20.169 10:53:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:20.169 10:53:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:20.169 10:53:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:20.169 10:53:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:20.169 10:53:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:20.169 10:53:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:20.427 10:53:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:20.427 10:53:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:20.427 10:53:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:20.427 10:53:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:20.427 10:53:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:20.427 10:53:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:20.427 10:53:27 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:20.427 10:53:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:20.427 10:53:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:20.427 10:53:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:20.685 10:53:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:20.685 10:53:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:20.685 10:53:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:20.685 10:53:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:20.685 10:53:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:20.685 10:53:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:20.685 10:53:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:20.685 10:53:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:20.685 10:53:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:20.685 10:53:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:20.943 10:53:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:20.943 10:53:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:20.943 10:53:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:20.943 10:53:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:20.943 10:53:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:20.943 10:53:27 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:20.943 10:53:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:20.943 10:53:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:20.943 10:53:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:20.943 10:53:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:21.202 10:53:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:21.202 10:53:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:21.202 10:53:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:21.202 10:53:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:21.202 10:53:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:21.202 10:53:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:21.202 10:53:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:21.202 10:53:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:21.202 10:53:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:21.202 10:53:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:11:21.460 10:53:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:11:21.460 10:53:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:11:21.460 10:53:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:11:21.460 10:53:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:11:21.460 10:53:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:21.460 10:53:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:11:21.460 10:53:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:21.460 10:53:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:21.460 10:53:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:21.460 10:53:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:11:21.719 10:53:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:11:21.719 10:53:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:11:21.719 10:53:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:11:21.719 10:53:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:21.719 10:53:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:21.719 10:53:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:11:21.719 10:53:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:21.719 10:53:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:21.719 10:53:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:21.719 10:53:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:11:21.978 10:53:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:11:21.978 10:53:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:11:21.978 10:53:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # 
local nbd_name=nbd8 00:11:21.978 10:53:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:21.978 10:53:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:21.978 10:53:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:11:21.978 10:53:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:21.978 10:53:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:21.978 10:53:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:21.978 10:53:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:11:22.237 10:53:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:11:22.237 10:53:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:11:22.237 10:53:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:11:22.237 10:53:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:22.237 10:53:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:22.237 10:53:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:11:22.237 10:53:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:22.237 10:53:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:22.237 10:53:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:22.237 10:53:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:22.496 10:53:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:22.496 10:53:29 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:22.496 10:53:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:22.496 10:53:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:22.496 10:53:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:22.496 10:53:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:22.496 10:53:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:22.496 10:53:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:22.496 10:53:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:22.496 10:53:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:22.496 10:53:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:22.755 10:53:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:22.755 10:53:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:22.755 10:53:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:22.755 10:53:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:22.755 10:53:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:22.755 10:53:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:22.755 10:53:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:22.755 10:53:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:22.755 10:53:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:22.755 10:53:29 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:11:22.755 10:53:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:22.755 10:53:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:22.755 10:53:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:22.755 10:53:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:22.755 10:53:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:22.755 10:53:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:22.755 10:53:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:22.755 10:53:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:22.755 10:53:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:23.013 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:23.013 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:23.013 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:23.013 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:23.013 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:23.013 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:23.013 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:23.013 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:23.013 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:23.013 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:11:23.272 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:11:23.272 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:11:23.272 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:11:23.272 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:23.272 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:23.272 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:11:23.272 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:23.272 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:23.272 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:23.272 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:11:23.531 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:11:23.531 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:11:23.531 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:11:23.531 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:23.531 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:23.531 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:11:23.531 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:23.531 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:23.531 10:53:30 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:23.531 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:23.531 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:23.790 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:23.790 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:23.790 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:23.790 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:23.790 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:23.790 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:23.790 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:23.790 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:23.790 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:23.790 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:11:23.790 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:11:23.790 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:11:23.790 10:53:30 blockdev_general.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:23.790 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@90 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:11:23.790 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:23.790 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:23.790 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:23.790 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:23.790 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:23.790 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:23.790 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:23.790 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:23.790 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:23.790 10:53:30 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:23.790 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:11:23.790 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:23.790 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:23.790 10:53:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:24.049 /dev/nbd0 00:11:24.049 10:53:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:24.049 10:53:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:24.049 10:53:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:24.049 10:53:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:24.049 10:53:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:24.049 10:53:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:24.049 10:53:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:24.049 10:53:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:24.049 10:53:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:24.049 10:53:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:24.049 10:53:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:24.049 1+0 records in 00:11:24.049 1+0 records out 00:11:24.049 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274696 s, 14.9 MB/s 00:11:24.049 10:53:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:24.049 10:53:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:24.049 10:53:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:24.049 10:53:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:24.049 10:53:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:24.049 10:53:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:24.049 10:53:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:24.049 10:53:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:11:24.617 /dev/nbd1 00:11:24.617 10:53:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:24.617 10:53:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:24.617 10:53:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:24.617 10:53:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:24.617 10:53:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:24.617 10:53:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:24.617 10:53:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:24.617 10:53:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:24.617 10:53:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:24.617 10:53:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:24.617 10:53:31 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:24.617 1+0 records in 00:11:24.617 1+0 records out 00:11:24.617 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291264 s, 14.1 MB/s 00:11:24.617 10:53:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:24.617 10:53:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:24.617 10:53:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:24.617 10:53:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:24.617 10:53:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:24.617 10:53:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:24.617 10:53:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:24.617 10:53:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:11:24.876 /dev/nbd10 00:11:24.876 10:53:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:11:24.876 10:53:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:11:24.876 10:53:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:11:24.876 10:53:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:24.876 10:53:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:24.876 10:53:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:24.876 10:53:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 
/proc/partitions 00:11:24.876 10:53:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:24.876 10:53:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:24.876 10:53:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:24.876 10:53:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:24.876 1+0 records in 00:11:24.876 1+0 records out 00:11:24.876 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037506 s, 10.9 MB/s 00:11:24.876 10:53:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:24.876 10:53:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:24.876 10:53:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:24.876 10:53:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:24.876 10:53:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:24.876 10:53:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:24.876 10:53:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:24.876 10:53:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:11:25.136 /dev/nbd11 00:11:25.136 10:53:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:11:25.136 10:53:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:11:25.136 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:11:25.136 10:53:32 
blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:25.136 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:25.136 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:25.136 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:11:25.136 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:25.136 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:25.136 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:25.136 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:25.136 1+0 records in 00:11:25.136 1+0 records out 00:11:25.136 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349438 s, 11.7 MB/s 00:11:25.136 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:25.136 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:25.136 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:25.136 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:25.136 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:25.136 10:53:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:25.136 10:53:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:25.136 10:53:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Malloc2p1 /dev/nbd12 00:11:25.395 /dev/nbd12 00:11:25.395 10:53:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:11:25.395 10:53:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:11:25.395 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:11:25.395 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:25.395 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:25.395 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:25.395 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:11:25.395 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:25.395 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:25.395 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:25.395 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:25.395 1+0 records in 00:11:25.395 1+0 records out 00:11:25.395 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404394 s, 10.1 MB/s 00:11:25.395 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:25.395 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:25.395 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:25.395 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:25.395 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 
00:11:25.395 10:53:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:25.395 10:53:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:25.395 10:53:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:11:25.654 /dev/nbd13 00:11:25.654 10:53:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:11:25.654 10:53:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:11:25.654 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:11:25.654 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:25.654 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:25.654 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:25.654 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:11:25.654 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:25.654 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:25.654 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:25.654 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:25.654 1+0 records in 00:11:25.654 1+0 records out 00:11:25.654 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439464 s, 9.3 MB/s 00:11:25.654 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:25.654 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 
00:11:25.654 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:25.654 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:25.654 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:25.654 10:53:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:25.654 10:53:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:25.654 10:53:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:11:25.913 /dev/nbd14 00:11:25.913 10:53:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:11:25.913 10:53:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:11:25.913 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:11:25.913 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:25.913 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:25.913 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:25.913 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:11:25.913 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:25.913 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:25.913 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:25.913 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:25.913 1+0 records in 
00:11:25.913 1+0 records out 00:11:25.913 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041928 s, 9.8 MB/s 00:11:25.913 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:25.913 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:25.913 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:25.913 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:25.913 10:53:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:25.913 10:53:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:25.913 10:53:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:25.913 10:53:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:11:26.172 /dev/nbd15 00:11:26.172 10:53:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:11:26.172 10:53:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:11:26.172 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd15 00:11:26.172 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:26.172 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:26.172 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:26.172 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd15 /proc/partitions 00:11:26.172 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:26.172 10:53:33 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:26.172 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:26.173 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd15 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:26.173 1+0 records in 00:11:26.173 1+0 records out 00:11:26.173 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000453779 s, 9.0 MB/s 00:11:26.173 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:26.173 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:26.173 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:26.173 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:26.173 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:26.173 10:53:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:26.173 10:53:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:26.173 10:53:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:11:26.430 /dev/nbd2 00:11:26.430 10:53:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:11:26.430 10:53:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:11:26.430 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:11:26.430 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:26.430 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 
00:11:26.430 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:26.430 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:11:26.430 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:26.430 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:26.430 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:26.430 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:26.430 1+0 records in 00:11:26.430 1+0 records out 00:11:26.430 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426746 s, 9.6 MB/s 00:11:26.430 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:26.430 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:26.430 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:26.430 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:26.430 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:26.430 10:53:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:26.430 10:53:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:26.430 10:53:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:11:26.689 /dev/nbd3 00:11:26.689 10:53:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:11:26.689 10:53:33 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:11:26.689 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:11:26.689 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:26.689 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:26.689 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:26.689 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:11:26.689 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:26.689 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:26.689 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:26.689 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:26.689 1+0 records in 00:11:26.689 1+0 records out 00:11:26.689 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000548585 s, 7.5 MB/s 00:11:26.689 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:26.689 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:26.689 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:26.689 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:26.689 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:26.689 10:53:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:26.689 10:53:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i 
< 16 )) 00:11:26.689 10:53:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:11:26.948 /dev/nbd4 00:11:26.948 10:53:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:11:26.948 10:53:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:11:26.948 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:11:26.949 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:26.949 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:26.949 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:26.949 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:11:26.949 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:26.949 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:26.949 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:26.949 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:26.949 1+0 records in 00:11:26.949 1+0 records out 00:11:26.949 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000577284 s, 7.1 MB/s 00:11:26.949 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:26.949 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:26.949 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:26.949 
10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:26.949 10:53:33 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:26.949 10:53:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:26.949 10:53:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:26.949 10:53:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:11:27.208 /dev/nbd5 00:11:27.208 10:53:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:11:27.208 10:53:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:11:27.208 10:53:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:11:27.208 10:53:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:27.208 10:53:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:27.208 10:53:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:27.208 10:53:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:11:27.208 10:53:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:27.208 10:53:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:27.208 10:53:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:27.208 10:53:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:27.208 1+0 records in 00:11:27.208 1+0 records out 00:11:27.208 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000683322 s, 6.0 MB/s 00:11:27.208 10:53:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 
-- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:27.208 10:53:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:27.208 10:53:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:27.208 10:53:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:27.208 10:53:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:27.208 10:53:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:27.208 10:53:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:27.208 10:53:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:11:27.466 /dev/nbd6 00:11:27.467 10:53:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:11:27.467 10:53:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:11:27.467 10:53:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:11:27.467 10:53:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:27.467 10:53:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:27.467 10:53:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:27.467 10:53:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:11:27.467 10:53:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:27.467 10:53:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:27.467 10:53:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:27.467 10:53:34 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:27.467 1+0 records in 00:11:27.467 1+0 records out 00:11:27.467 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000672078 s, 6.1 MB/s 00:11:27.467 10:53:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:27.467 10:53:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:27.467 10:53:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:27.467 10:53:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:27.467 10:53:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:27.467 10:53:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:27.467 10:53:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:27.467 10:53:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:11:27.726 /dev/nbd7 00:11:27.726 10:53:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:11:27.726 10:53:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:11:27.726 10:53:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd7 00:11:27.726 10:53:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:27.726 10:53:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:27.726 10:53:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:27.726 10:53:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd7 
/proc/partitions 00:11:27.726 10:53:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:27.726 10:53:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:27.726 10:53:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:27.726 10:53:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd7 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:27.726 1+0 records in 00:11:27.726 1+0 records out 00:11:27.726 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000750199 s, 5.5 MB/s 00:11:27.726 10:53:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:27.726 10:53:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:27.726 10:53:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:27.726 10:53:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:27.726 10:53:34 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:27.726 10:53:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:27.726 10:53:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:27.726 10:53:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:11:27.987 /dev/nbd8 00:11:27.987 10:53:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:11:27.987 10:53:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:11:27.987 10:53:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd8 00:11:27.987 10:53:35 
blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:27.987 10:53:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:27.987 10:53:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:27.987 10:53:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd8 /proc/partitions 00:11:27.987 10:53:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:27.987 10:53:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:27.987 10:53:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:27.987 10:53:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd8 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:27.987 1+0 records in 00:11:27.987 1+0 records out 00:11:27.987 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000715238 s, 5.7 MB/s 00:11:27.987 10:53:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:27.987 10:53:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:27.987 10:53:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:27.987 10:53:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:27.987 10:53:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:27.987 10:53:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:27.987 10:53:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:27.987 10:53:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 
/dev/nbd9 00:11:28.329 /dev/nbd9 00:11:28.329 10:53:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:11:28.329 10:53:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:11:28.329 10:53:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd9 00:11:28.329 10:53:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:11:28.329 10:53:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:28.329 10:53:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:28.329 10:53:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd9 /proc/partitions 00:11:28.329 10:53:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:11:28.329 10:53:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:28.329 10:53:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:28.329 10:53:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd9 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:28.329 1+0 records in 00:11:28.329 1+0 records out 00:11:28.329 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000587869 s, 7.0 MB/s 00:11:28.329 10:53:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:28.329 10:53:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:11:28.329 10:53:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:11:28.329 10:53:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:28.329 10:53:35 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:11:28.329 
10:53:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:28.329 10:53:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:28.329 10:53:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:28.329 10:53:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:28.329 10:53:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:28.589 10:53:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:28.589 { 00:11:28.589 "nbd_device": "/dev/nbd0", 00:11:28.589 "bdev_name": "Malloc0" 00:11:28.589 }, 00:11:28.589 { 00:11:28.589 "nbd_device": "/dev/nbd1", 00:11:28.589 "bdev_name": "Malloc1p0" 00:11:28.589 }, 00:11:28.589 { 00:11:28.589 "nbd_device": "/dev/nbd10", 00:11:28.589 "bdev_name": "Malloc1p1" 00:11:28.589 }, 00:11:28.589 { 00:11:28.589 "nbd_device": "/dev/nbd11", 00:11:28.589 "bdev_name": "Malloc2p0" 00:11:28.589 }, 00:11:28.589 { 00:11:28.589 "nbd_device": "/dev/nbd12", 00:11:28.589 "bdev_name": "Malloc2p1" 00:11:28.589 }, 00:11:28.589 { 00:11:28.589 "nbd_device": "/dev/nbd13", 00:11:28.589 "bdev_name": "Malloc2p2" 00:11:28.589 }, 00:11:28.589 { 00:11:28.589 "nbd_device": "/dev/nbd14", 00:11:28.589 "bdev_name": "Malloc2p3" 00:11:28.589 }, 00:11:28.589 { 00:11:28.589 "nbd_device": "/dev/nbd15", 00:11:28.589 "bdev_name": "Malloc2p4" 00:11:28.589 }, 00:11:28.589 { 00:11:28.589 "nbd_device": "/dev/nbd2", 00:11:28.589 "bdev_name": "Malloc2p5" 00:11:28.589 }, 00:11:28.589 { 00:11:28.589 "nbd_device": "/dev/nbd3", 00:11:28.589 "bdev_name": "Malloc2p6" 00:11:28.589 }, 00:11:28.589 { 00:11:28.589 "nbd_device": "/dev/nbd4", 00:11:28.589 "bdev_name": "Malloc2p7" 00:11:28.589 }, 00:11:28.589 { 00:11:28.589 "nbd_device": "/dev/nbd5", 00:11:28.589 "bdev_name": "TestPT" 00:11:28.589 }, 00:11:28.589 { 
00:11:28.589 "nbd_device": "/dev/nbd6", 00:11:28.589 "bdev_name": "raid0" 00:11:28.589 }, 00:11:28.589 { 00:11:28.589 "nbd_device": "/dev/nbd7", 00:11:28.589 "bdev_name": "concat0" 00:11:28.589 }, 00:11:28.589 { 00:11:28.589 "nbd_device": "/dev/nbd8", 00:11:28.589 "bdev_name": "raid1" 00:11:28.589 }, 00:11:28.589 { 00:11:28.589 "nbd_device": "/dev/nbd9", 00:11:28.589 "bdev_name": "AIO0" 00:11:28.589 } 00:11:28.589 ]' 00:11:28.589 10:53:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:28.589 { 00:11:28.589 "nbd_device": "/dev/nbd0", 00:11:28.589 "bdev_name": "Malloc0" 00:11:28.589 }, 00:11:28.589 { 00:11:28.589 "nbd_device": "/dev/nbd1", 00:11:28.589 "bdev_name": "Malloc1p0" 00:11:28.589 }, 00:11:28.589 { 00:11:28.589 "nbd_device": "/dev/nbd10", 00:11:28.589 "bdev_name": "Malloc1p1" 00:11:28.589 }, 00:11:28.589 { 00:11:28.589 "nbd_device": "/dev/nbd11", 00:11:28.589 "bdev_name": "Malloc2p0" 00:11:28.589 }, 00:11:28.589 { 00:11:28.589 "nbd_device": "/dev/nbd12", 00:11:28.589 "bdev_name": "Malloc2p1" 00:11:28.589 }, 00:11:28.589 { 00:11:28.589 "nbd_device": "/dev/nbd13", 00:11:28.589 "bdev_name": "Malloc2p2" 00:11:28.589 }, 00:11:28.589 { 00:11:28.589 "nbd_device": "/dev/nbd14", 00:11:28.589 "bdev_name": "Malloc2p3" 00:11:28.589 }, 00:11:28.589 { 00:11:28.589 "nbd_device": "/dev/nbd15", 00:11:28.589 "bdev_name": "Malloc2p4" 00:11:28.589 }, 00:11:28.589 { 00:11:28.589 "nbd_device": "/dev/nbd2", 00:11:28.589 "bdev_name": "Malloc2p5" 00:11:28.589 }, 00:11:28.589 { 00:11:28.589 "nbd_device": "/dev/nbd3", 00:11:28.589 "bdev_name": "Malloc2p6" 00:11:28.589 }, 00:11:28.589 { 00:11:28.589 "nbd_device": "/dev/nbd4", 00:11:28.589 "bdev_name": "Malloc2p7" 00:11:28.589 }, 00:11:28.589 { 00:11:28.589 "nbd_device": "/dev/nbd5", 00:11:28.589 "bdev_name": "TestPT" 00:11:28.589 }, 00:11:28.589 { 00:11:28.589 "nbd_device": "/dev/nbd6", 00:11:28.589 "bdev_name": "raid0" 00:11:28.589 }, 00:11:28.589 { 00:11:28.589 "nbd_device": "/dev/nbd7", 00:11:28.589 
"bdev_name": "concat0" 00:11:28.589 }, 00:11:28.589 { 00:11:28.589 "nbd_device": "/dev/nbd8", 00:11:28.589 "bdev_name": "raid1" 00:11:28.589 }, 00:11:28.589 { 00:11:28.589 "nbd_device": "/dev/nbd9", 00:11:28.589 "bdev_name": "AIO0" 00:11:28.589 } 00:11:28.589 ]' 00:11:28.589 10:53:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:28.589 10:53:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:28.589 /dev/nbd1 00:11:28.589 /dev/nbd10 00:11:28.589 /dev/nbd11 00:11:28.589 /dev/nbd12 00:11:28.589 /dev/nbd13 00:11:28.589 /dev/nbd14 00:11:28.589 /dev/nbd15 00:11:28.589 /dev/nbd2 00:11:28.589 /dev/nbd3 00:11:28.589 /dev/nbd4 00:11:28.589 /dev/nbd5 00:11:28.589 /dev/nbd6 00:11:28.589 /dev/nbd7 00:11:28.589 /dev/nbd8 00:11:28.589 /dev/nbd9' 00:11:28.589 10:53:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:28.589 /dev/nbd1 00:11:28.589 /dev/nbd10 00:11:28.589 /dev/nbd11 00:11:28.589 /dev/nbd12 00:11:28.589 /dev/nbd13 00:11:28.589 /dev/nbd14 00:11:28.589 /dev/nbd15 00:11:28.589 /dev/nbd2 00:11:28.589 /dev/nbd3 00:11:28.589 /dev/nbd4 00:11:28.589 /dev/nbd5 00:11:28.589 /dev/nbd6 00:11:28.589 /dev/nbd7 00:11:28.589 /dev/nbd8 00:11:28.589 /dev/nbd9' 00:11:28.589 10:53:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:28.589 10:53:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=16 00:11:28.589 10:53:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 16 00:11:28.589 10:53:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=16 00:11:28.589 10:53:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:11:28.589 10:53:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 
00:11:28.589 10:53:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:28.589 10:53:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:28.589 10:53:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:28.589 10:53:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:11:28.589 10:53:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:28.589 10:53:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:11:28.589 256+0 records in 00:11:28.589 256+0 records out 00:11:28.589 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00465779 s, 225 MB/s 00:11:28.589 10:53:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:28.589 10:53:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:28.849 256+0 records in 00:11:28.849 256+0 records out 00:11:28.849 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.171015 s, 6.1 MB/s 00:11:28.849 10:53:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:28.849 10:53:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:29.107 256+0 records in 00:11:29.107 256+0 records out 00:11:29.107 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.178926 s, 5.9 MB/s 00:11:29.107 10:53:35 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:29.107 10:53:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:11:29.107 256+0 records in 00:11:29.107 256+0 records out 00:11:29.108 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.178005 s, 5.9 MB/s 00:11:29.108 10:53:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:29.108 10:53:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:11:29.366 256+0 records in 00:11:29.366 256+0 records out 00:11:29.366 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.178744 s, 5.9 MB/s 00:11:29.366 10:53:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:29.366 10:53:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:11:29.366 256+0 records in 00:11:29.366 256+0 records out 00:11:29.366 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.141638 s, 7.4 MB/s 00:11:29.625 10:53:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:29.625 10:53:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:11:29.625 256+0 records in 00:11:29.625 256+0 records out 00:11:29.625 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.122364 s, 8.6 MB/s 00:11:29.625 10:53:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:29.625 10:53:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 
oflag=direct 00:11:29.884 256+0 records in 00:11:29.884 256+0 records out 00:11:29.884 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15629 s, 6.7 MB/s 00:11:29.884 10:53:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:29.884 10:53:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:11:29.884 256+0 records in 00:11:29.884 256+0 records out 00:11:29.884 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.109392 s, 9.6 MB/s 00:11:29.884 10:53:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:29.884 10:53:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:11:29.884 256+0 records in 00:11:29.884 256+0 records out 00:11:29.884 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.104355 s, 10.0 MB/s 00:11:29.884 10:53:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:29.884 10:53:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:11:30.142 256+0 records in 00:11:30.142 256+0 records out 00:11:30.142 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0983156 s, 10.7 MB/s 00:11:30.142 10:53:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:30.142 10:53:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:11:30.142 256+0 records in 00:11:30.142 256+0 records out 00:11:30.142 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.110029 s, 9.5 MB/s 00:11:30.142 10:53:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in 
"${nbd_list[@]}" 00:11:30.142 10:53:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:11:30.401 256+0 records in 00:11:30.401 256+0 records out 00:11:30.401 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.100262 s, 10.5 MB/s 00:11:30.401 10:53:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:30.401 10:53:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:11:30.401 256+0 records in 00:11:30.401 256+0 records out 00:11:30.401 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.164062 s, 6.4 MB/s 00:11:30.401 10:53:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:30.401 10:53:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:11:30.660 256+0 records in 00:11:30.660 256+0 records out 00:11:30.660 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.180387 s, 5.8 MB/s 00:11:30.660 10:53:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:30.660 10:53:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:11:30.919 256+0 records in 00:11:30.920 256+0 records out 00:11:30.920 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.162907 s, 6.4 MB/s 00:11:30.920 10:53:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:30.920 10:53:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:11:30.920 256+0 records in 
00:11:30.920 256+0 records out 00:11:30.920 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12931 s, 8.1 MB/s 00:11:30.920 10:53:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:11:30.920 10:53:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:30.920 10:53:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:30.920 10:53:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:30.920 10:53:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:11:30.920 10:53:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:30.920 10:53:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:30.920 10:53:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:30.920 10:53:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:11:30.920 10:53:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:30.920 10:53:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:11:30.920 10:53:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:30.920 10:53:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:11:30.920 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:30.920 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:11:30.920 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:30.920 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd12 00:11:30.920 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:30.920 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd13 00:11:30.920 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:30.920 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd14 00:11:31.179 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:31.179 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd15 00:11:31.179 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:31.179 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd2 00:11:31.179 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:31.179 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd3 00:11:31.179 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:31.179 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd4 00:11:31.179 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:31.179 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd5 00:11:31.179 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:31.179 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd6 00:11:31.179 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:31.179 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd7 00:11:31.179 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:31.179 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd8 00:11:31.179 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:31.179 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd9 00:11:31.179 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:11:31.179 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock 
'/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:31.179 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:31.179 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:31.179 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:31.179 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:31.179 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:31.179 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:31.438 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:31.438 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:31.438 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:31.438 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:31.438 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:31.438 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:31.438 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:31.438 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:31.438 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:31.438 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:31.697 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:31.697 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:31.697 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:31.697 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:31.697 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:31.697 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:31.697 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:31.697 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:31.697 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:31.697 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:31.956 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:31.956 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:31.956 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:31.956 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:31.956 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:31.956 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:31.956 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:31.956 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:31.956 10:53:38 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:31.956 10:53:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:32.215 10:53:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:32.215 10:53:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:32.215 10:53:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:32.215 10:53:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:32.215 10:53:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:32.215 10:53:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:32.215 10:53:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:32.215 10:53:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:32.215 10:53:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:32.215 10:53:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:32.473 10:53:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:11:32.473 10:53:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:32.473 10:53:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:32.473 10:53:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:32.473 10:53:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:32.473 10:53:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:32.473 10:53:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:32.473 10:53:39 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:32.473 10:53:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:32.473 10:53:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:32.732 10:53:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:32.732 10:53:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:32.732 10:53:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:32.732 10:53:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:32.732 10:53:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:32.732 10:53:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:32.732 10:53:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:32.732 10:53:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:32.732 10:53:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:32.732 10:53:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:11:32.990 10:53:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:11:32.990 10:53:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:11:32.990 10:53:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:11:32.990 10:53:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:32.990 10:53:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:32.990 10:53:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 
/proc/partitions 00:11:32.990 10:53:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:32.990 10:53:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:32.990 10:53:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:32.990 10:53:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:11:33.249 10:53:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:11:33.249 10:53:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:11:33.249 10:53:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:11:33.249 10:53:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:33.249 10:53:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:33.249 10:53:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:11:33.249 10:53:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:33.249 10:53:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:33.249 10:53:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:33.249 10:53:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:33.507 10:53:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:33.507 10:53:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:33.507 10:53:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:33.507 10:53:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:33.507 10:53:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:11:33.507 10:53:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:33.507 10:53:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:33.507 10:53:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:33.507 10:53:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:33.507 10:53:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:33.766 10:53:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:33.766 10:53:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:33.766 10:53:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:33.766 10:53:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:33.766 10:53:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:33.766 10:53:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:33.766 10:53:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:33.766 10:53:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:33.766 10:53:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:33.766 10:53:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:33.766 10:53:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:33.766 10:53:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:33.766 10:53:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:33.766 10:53:40 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:33.766 10:53:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:33.766 10:53:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:33.766 10:53:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:33.766 10:53:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:33.766 10:53:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:33.766 10:53:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:34.023 10:53:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:34.023 10:53:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:34.023 10:53:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:34.023 10:53:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:34.023 10:53:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:34.023 10:53:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:34.024 10:53:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:34.024 10:53:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:34.024 10:53:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:34.024 10:53:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:11:34.282 10:53:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:11:34.282 10:53:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:11:34.282 10:53:41 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:11:34.282 10:53:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:34.282 10:53:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:34.282 10:53:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:11:34.282 10:53:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:34.282 10:53:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:34.282 10:53:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:34.282 10:53:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:11:34.540 10:53:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:11:34.540 10:53:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:11:34.540 10:53:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:11:34.540 10:53:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:34.540 10:53:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:34.540 10:53:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:11:34.540 10:53:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:34.540 10:53:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:34.540 10:53:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:34.540 10:53:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:11:34.799 10:53:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:11:34.799 
10:53:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:11:34.799 10:53:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:11:34.799 10:53:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:34.799 10:53:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:34.799 10:53:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:11:34.799 10:53:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:34.799 10:53:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:34.799 10:53:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:34.799 10:53:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:11:35.058 10:53:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:11:35.058 10:53:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:11:35.058 10:53:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:11:35.058 10:53:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:35.058 10:53:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:35.058 10:53:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:11:35.058 10:53:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:35.058 10:53:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:35.058 10:53:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:35.058 10:53:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:35.058 10:53:42 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:35.317 10:53:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:35.317 10:53:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:35.317 10:53:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:35.317 10:53:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:35.317 10:53:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:35.317 10:53:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:35.317 10:53:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:35.317 10:53:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:35.317 10:53:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:35.317 10:53:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:11:35.317 10:53:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:35.317 10:53:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:11:35.317 10:53:42 blockdev_general.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:35.317 10:53:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:35.317 10:53:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:35.317 10:53:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # local 
nbd_list 00:11:35.317 10:53:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:11:35.317 10:53:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:11:35.576 malloc_lvol_verify 00:11:35.577 10:53:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:11:35.836 1f841e01-afca-4938-ab22-0e748bbbc937 00:11:35.836 10:53:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:11:36.095 2ebaa954-096c-450a-9878-79796e1baff2 00:11:36.095 10:53:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:11:36.355 /dev/nbd0 00:11:36.355 10:53:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:11:36.355 mke2fs 1.46.5 (30-Dec-2021) 00:11:36.355 Discarding device blocks: 0/4096 done 00:11:36.355 Creating filesystem with 4096 1k blocks and 1024 inodes 00:11:36.355 00:11:36.355 Allocating group tables: 0/1 done 00:11:36.355 Writing inode tables: 0/1 done 00:11:36.355 Creating journal (1024 blocks): done 00:11:36.355 Writing superblocks and filesystem accounting information: 0/1 done 00:11:36.355 00:11:36.355 10:53:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:11:36.355 10:53:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:36.355 10:53:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:36.355 10:53:43 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:36.355 10:53:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:36.355 10:53:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:36.355 10:53:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:36.355 10:53:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:36.614 10:53:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:36.614 10:53:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:36.614 10:53:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:36.614 10:53:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:36.614 10:53:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:36.614 10:53:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:36.614 10:53:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:36.614 10:53:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:36.614 10:53:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:11:36.614 10:53:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:11:36.614 10:53:43 blockdev_general.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 3516238 00:11:36.614 10:53:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 3516238 ']' 00:11:36.614 10:53:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 3516238 00:11:36.614 10:53:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:11:36.614 10:53:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:36.614 10:53:43 
blockdev_general.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3516238 00:11:36.614 10:53:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:36.614 10:53:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:36.614 10:53:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3516238' 00:11:36.614 killing process with pid 3516238 00:11:36.614 10:53:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@969 -- # kill 3516238 00:11:36.614 10:53:43 blockdev_general.bdev_nbd -- common/autotest_common.sh@974 -- # wait 3516238 00:11:40.806 10:53:47 blockdev_general.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:11:40.806 00:11:40.806 real 0m26.921s 00:11:40.806 user 0m32.403s 00:11:40.806 sys 0m12.959s 00:11:40.806 10:53:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:40.806 10:53:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:40.806 ************************************ 00:11:40.806 END TEST bdev_nbd 00:11:40.806 ************************************ 00:11:40.806 10:53:47 blockdev_general -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:11:40.806 10:53:47 blockdev_general -- bdev/blockdev.sh@763 -- # '[' bdev = nvme ']' 00:11:40.806 10:53:47 blockdev_general -- bdev/blockdev.sh@763 -- # '[' bdev = gpt ']' 00:11:40.806 10:53:47 blockdev_general -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:11:40.806 10:53:47 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:40.806 10:53:47 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:40.806 10:53:47 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:40.806 ************************************ 00:11:40.806 START TEST bdev_fio 00:11:40.806 ************************************ 00:11:40.806 10:53:47 
blockdev_general.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:11:40.806 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:11:40.806 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:11:40.806 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:11:40.806 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:11:40.806 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:11:40.806 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:11:40.806 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:11:40.806 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:11:40.806 10:53:47 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:11:40.806 10:53:47 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:11:40.806 10:53:47 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:11:40.806 10:53:47 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:11:40.806 10:53:47 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:11:40.806 10:53:47 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:11:40.806 10:53:47 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:11:40.806 10:53:47 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 
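The nbd teardown traced above repeats the same `waitfornbd_exit` polling loop for every device in `nbd_list`. Reconstructed from the logged commands (`bdev/nbd_common.sh@35`–`@45`): the retry bound of 20 and the `grep -q -w … /proc/partitions` check come straight from the trace, while the sleep interval between retries is an assumed placeholder, since the trace always breaks out on the first probe. A minimal self-contained sketch:

```shell
# Poll /proc/partitions until the named nbd device disappears, mirroring the
# bdev/nbd_common.sh@35-45 loop in the trace above. The retry bound (20) and
# the grep are taken from the log; the 0.1s sleep is an assumption.
waitfornbd_exit() {
    local nbd_name=$1
    local i
    for ((i = 1; i <= 20; i++)); do
        if ! grep -q -w "$nbd_name" /proc/partitions 2>/dev/null; then
            return 0   # device gone from the partition table: stop completed
        fi
        sleep 0.1
    done
    return 1           # still present after 20 probes
}
```

In this run the loop hits `break` on the first probe for every device, i.e. `nbd_stop_disk` over the `/var/tmp/spdk-nbd.sock` RPC socket had already detached each device before the poll began.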
00:11:40.806 10:53:47 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:11:40.806 10:53:47 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:11:40.806 10:53:47 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:11:40.806 10:53:47 blockdev_general.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:11:40.806 10:53:47 blockdev_general.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:11:40.806 10:53:47 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:11:40.806 10:53:47 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:11:40.806 10:53:47 blockdev_general.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:11:40.806 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:40.806 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc0]' 00:11:40.806 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc0 00:11:40.806 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:40.806 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p0]' 00:11:40.806 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc1p0 00:11:40.806 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:40.806 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p1]' 00:11:40.806 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc1p1 00:11:40.806 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:11:40.806 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # 
echo '[job_Malloc2p0]'
00:11:40.806 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p0
00:11:40.806 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:11:40.806 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p1]'
00:11:40.806 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p1
00:11:40.806 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:11:40.806 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p2]'
00:11:40.806 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p2
00:11:40.806 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:11:40.806 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p3]'
00:11:40.806 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p3
00:11:40.806 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:11:40.806 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p4]'
00:11:40.806 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p4
00:11:40.806 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:11:40.806 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p5]'
00:11:40.806 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p5
00:11:40.806 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:11:40.806 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p6]'
00:11:40.806 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p6
00:11:40.806 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:11:40.806 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p7]'
00:11:40.806 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p7
00:11:40.806 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:11:40.807 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_TestPT]'
00:11:40.807 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=TestPT
00:11:40.807 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:11:40.807 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid0]'
00:11:40.807 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid0
00:11:40.807 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:11:40.807 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_concat0]'
00:11:40.807 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=concat0
00:11:40.807 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:11:40.807 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid1]'
00:11:40.807 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid1
00:11:40.807 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:11:40.807 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_AIO0]'
00:11:40.807 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=AIO0
00:11:40.807 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json'
00:11:40.807 10:53:47 blockdev_general.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:11:40.807 10:53:47 blockdev_general.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']'
00:11:40.807 10:53:47 blockdev_general.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable
00:11:40.807 10:53:47 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:11:40.807 ************************************
00:11:40.807 START TEST bdev_fio_rw_verify
00:11:40.807 ************************************
00:11:40.807 10:53:47 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:11:40.807 10:53:47 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:11:40.807 10:53:47 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:11:40.807 10:53:47 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:11:40.807 10:53:47 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers
00:11:40.807 10:53:47 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev
00:11:40.807 10:53:47 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift
00:11:40.807 10:53:47 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib=
00:11:40.807 10:53:47 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:11:40.807 10:53:47 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev
00:11:40.807 10:53:47 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan
00:11:40.807 10:53:47 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:11:40.807 10:53:47 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8
00:11:40.807 10:53:47 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:11:40.807 10:53:47 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break
00:11:40.807 10:53:47 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev'
00:11:40.807 10:53:47 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- #
/usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:11:40.807 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:40.807 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:40.807 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:40.807 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:40.807 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:40.807 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:40.807 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:40.807 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:40.807 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:40.807 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:40.807 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:40.807 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:40.807 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:40.807 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:40.807 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:40.807 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:40.807 fio-3.35
00:11:40.807 Starting 16 threads
00:11:41.067 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:11:41.067 EAL: Requested device 0000:3d:01.0 cannot be used
00:11:41.067 [the two messages above repeat identically for each remaining QAT virtual function, 0000:3d:01.1 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:02.7]
00:11:53.340
00:11:53.340 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=3521950: Thu Jul 25 10:53:59 2024
00:11:53.340 read: IOPS=88.1k, BW=344MiB/s (361MB/s)(3443MiB/10001msec)
00:11:53.340 slat (usec): min=2, max=422, avg=36.81, stdev=15.47
00:11:53.340 clat (usec): min=12, max=1152, avg=295.38, stdev=136.90
00:11:53.340 lat (usec): min=20, max=1267, avg=332.19, stdev=145.01
00:11:53.340 clat percentiles (usec):
00:11:53.340 | 50.000th=[ 289], 99.000th=[ 627], 99.900th=[ 758], 99.990th=[ 840],
00:11:53.340 | 99.999th=[ 979]
00:11:53.340 write: IOPS=139k, BW=544MiB/s (571MB/s)(5364MiB/9853msec); 0 zone resets
00:11:53.340 slat (usec): min=8, max=461, avg=50.43, stdev=16.43
00:11:53.340 clat (usec): min=14, max=1411, avg=347.67, stdev=158.65
00:11:53.340 lat (usec): min=40, max=1487, avg=398.10, stdev=166.57
00:11:53.340 clat percentiles (usec):
00:11:53.340 | 50.000th=[ 334], 99.000th=[ 799], 99.900th=[ 930], 99.990th=[ 1004],
00:11:53.340 | 99.999th=[ 1156]
00:11:53.340 bw ( KiB/s): min=479976, max=689238, per=98.77%, avg=550666.42, stdev=3175.25, samples=304
00:11:53.340 iops : min=119994, max=172309, avg=137666.26, stdev=793.79, samples=304
00:11:53.340 lat (usec) : 20=0.01%, 50=0.45%, 100=4.27%, 250=30.76%, 500=50.88%
00:11:53.340 lat (usec) : 750=12.69%, 1000=0.94%
00:11:53.340 lat (msec) : 2=0.01%
00:11:53.340 cpu : usr=98.54%, sys=0.78%, ctx=610, majf=0, minf=112435
00:11:53.340 IO depths : 1=12.4%, 2=24.7%, 4=50.3%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:53.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:53.340 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:53.340 issued rwts: total=881520,1373291,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:53.340 latency : target=0, window=0, percentile=100.00%, depth=8
00:11:53.340
00:11:53.340 Run status group 0 (all jobs):
00:11:53.340 READ: bw=344MiB/s (361MB/s), 344MiB/s-344MiB/s (361MB/s-361MB/s), io=3443MiB (3611MB), run=10001-10001msec
00:11:53.340 WRITE: bw=544MiB/s (571MB/s), 544MiB/s-544MiB/s (571MB/s-571MB/s), io=5364MiB (5625MB), run=9853-9853msec
00:11:55.879 -----------------------------------------------------
00:11:55.879 Suppressions used:
00:11:55.879 count bytes template
00:11:55.879 16 140 /usr/src/fio/parse.c
00:11:55.879 11916 1143936 /usr/src/fio/iolog.c
00:11:55.879 1 8 libtcmalloc_minimal.so
00:11:55.879 1 904 libcrypto.so
00:11:55.879 -----------------------------------------------------
00:11:55.879
00:11:55.879
00:11:55.879 real 0m15.459s
00:11:55.879 user 2m55.373s
00:11:55.879 sys 0m2.518s
00:11:55.879 10:54:02 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:55.879 10:54:02 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x
00:11:55.879 ************************************
00:11:55.879 END TEST bdev_fio_rw_verify
00:11:55.879 ************************************
00:11:55.879 10:54:02
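For reference, the trace above boils down to one invocation pattern: resolve the ASan runtime linked into the SPDK fio_plugin, preload it ahead of the plugin itself, then run stock fio with the spdk_bdev ioengine. A minimal sketch of that pattern, not the harness's actual code; the paths are illustrative placeholders, and the command is echoed rather than executed so the sketch runs anywhere:

```shell
# Sketch of the fio_plugin invocation seen in the trace above.
# FIO_DIR and PLUGIN are illustrative placeholders, not real paths here.
FIO_DIR=/usr/src/fio
PLUGIN=./build/fio/spdk_bdev
# The harness derives this via `ldd $PLUGIN | grep libasan | awk '{print $3}'`;
# hard-coded here with the value the log resolved.
ASAN_LIB=/usr/lib64/libasan.so.8
# ASan must be first in LD_PRELOAD, followed by the plugin providing the
# spdk_bdev ioengine.
CMD="LD_PRELOAD='$ASAN_LIB $PLUGIN' $FIO_DIR/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 bdev.fio"
echo "$CMD"   # echo instead of exec so the sketch is runnable anywhere
```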
blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:11:55.879 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:11:55.879 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:11:55.879 10:54:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:11:55.879 10:54:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:11:55.879 10:54:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:11:55.879 10:54:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:11:55.879 10:54:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:11:55.879 10:54:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:11:55.879 10:54:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:11:55.879 10:54:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:11:55.879 10:54:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:11:55.879 10:54:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:11:55.879 10:54:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:11:55.879 10:54:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:11:55.879 10:54:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:11:55.879 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 
-- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:11:55.881 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "48c785e8-73be-4058-a5bc-d65927ffc9ab"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "48c785e8-73be-4058-a5bc-d65927ffc9ab",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "4c486317-f141-551d-a7b1-ee46b02c8702"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "4c486317-f141-551d-a7b1-ee46b02c8702",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' 
"abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "29676677-c435-5077-aec0-f1dc507d0bf6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "29676677-c435-5077-aec0-f1dc507d0bf6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "8acff103-4115-5953-a165-7533217418e7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8acff103-4115-5953-a165-7533217418e7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' 
"seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "fd1229df-8770-50ce-986b-501b51bb8807"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "fd1229df-8770-50ce-986b-501b51bb8807",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "f038e4ee-9531-5903-9264-ad813f8c6f45"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f038e4ee-9531-5903-9264-ad813f8c6f45",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": 
false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "1a0f0e80-201f-5b58-a22c-e608ecb25d1e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1a0f0e80-201f-5b58-a22c-e608ecb25d1e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "803b368a-7019-5edb-a011-a3dfbecc528a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "803b368a-7019-5edb-a011-a3dfbecc528a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": 
{' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "874dd9e9-e948-5810-8698-728f69d16eee"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "874dd9e9-e948-5810-8698-728f69d16eee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "53c06951-3d4e-5f0d-8162-a369593f8de2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "53c06951-3d4e-5f0d-8162-a369593f8de2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 
49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "f1bbb12f-d2fc-5d2d-b8bc-cbef6983a658"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f1bbb12f-d2fc-5d2d-b8bc-cbef6983a658",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "15f8b0de-7e5e-5ecb-bf60-91f623791d50"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "15f8b0de-7e5e-5ecb-bf60-91f623791d50",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' 
' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "928e9527-ca4d-4c7c-9fa7-80fe7b36d02c"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "928e9527-ca4d-4c7c-9fa7-80fe7b36d02c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "928e9527-ca4d-4c7c-9fa7-80fe7b36d02c",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "f2319c57-6c51-4f84-ae9d-332b36f69832",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "4e36abdb-0402-4bd2-a443-f6151dcc5e57",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": 
[' ' "65e17940-7cfd-4572-a9b7-6d8e948463de"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "65e17940-7cfd-4572-a9b7-6d8e948463de",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "65e17940-7cfd-4572-a9b7-6d8e948463de",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "97253799-5b6d-4f5b-b721-2d2bb1dfcd92",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "5b9c2b5e-39a4-42e0-afc1-008e1b24906b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "89f0e264-b5dd-4f84-8b33-49429b82ed91"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "89f0e264-b5dd-4f84-8b33-49429b82ed91",' 
' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "89f0e264-b5dd-4f84-8b33-49429b82ed91",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "27c805ed-92cf-4680-8319-70d39a03ea8b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "ecc94859-0111-430d-accc-948d3eac049a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "0e4695f5-1296-4e7b-a0fc-52e48de70e63"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "0e4695f5-1296-4e7b-a0fc-52e48de70e63",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:11:55.881 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n Malloc0 00:11:55.881 Malloc1p0 00:11:55.881 Malloc1p1 00:11:55.881 Malloc2p0 00:11:55.881 Malloc2p1 00:11:55.881 Malloc2p2 00:11:55.881 Malloc2p3 00:11:55.881 Malloc2p4 00:11:55.881 Malloc2p5 00:11:55.881 Malloc2p6 00:11:55.881 Malloc2p7 00:11:55.881 TestPT 00:11:55.881 raid0 00:11:55.881 concat0 ]] 00:11:55.881 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:11:55.882 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "48c785e8-73be-4058-a5bc-d65927ffc9ab"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "48c785e8-73be-4058-a5bc-d65927ffc9ab",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": 
false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "4c486317-f141-551d-a7b1-ee46b02c8702"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "4c486317-f141-551d-a7b1-ee46b02c8702",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "29676677-c435-5077-aec0-f1dc507d0bf6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "29676677-c435-5077-aec0-f1dc507d0bf6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' 
' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "8acff103-4115-5953-a165-7533217418e7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8acff103-4115-5953-a165-7533217418e7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "fd1229df-8770-50ce-986b-501b51bb8807"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "fd1229df-8770-50ce-986b-501b51bb8807",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' 
' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "f038e4ee-9531-5903-9264-ad813f8c6f45"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f038e4ee-9531-5903-9264-ad813f8c6f45",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "1a0f0e80-201f-5b58-a22c-e608ecb25d1e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1a0f0e80-201f-5b58-a22c-e608ecb25d1e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": 
false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "803b368a-7019-5edb-a011-a3dfbecc528a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "803b368a-7019-5edb-a011-a3dfbecc528a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "874dd9e9-e948-5810-8698-728f69d16eee"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "874dd9e9-e948-5810-8698-728f69d16eee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' 
' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "53c06951-3d4e-5f0d-8162-a369593f8de2"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "53c06951-3d4e-5f0d-8162-a369593f8de2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "f1bbb12f-d2fc-5d2d-b8bc-cbef6983a658"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f1bbb12f-d2fc-5d2d-b8bc-cbef6983a658",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' 
"nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "15f8b0de-7e5e-5ecb-bf60-91f623791d50"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "15f8b0de-7e5e-5ecb-bf60-91f623791d50",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "928e9527-ca4d-4c7c-9fa7-80fe7b36d02c"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "928e9527-ca4d-4c7c-9fa7-80fe7b36d02c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": 
false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "928e9527-ca4d-4c7c-9fa7-80fe7b36d02c",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "f2319c57-6c51-4f84-ae9d-332b36f69832",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "4e36abdb-0402-4bd2-a443-f6151dcc5e57",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "65e17940-7cfd-4572-a9b7-6d8e948463de"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "65e17940-7cfd-4572-a9b7-6d8e948463de",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' 
' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "65e17940-7cfd-4572-a9b7-6d8e948463de",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "97253799-5b6d-4f5b-b721-2d2bb1dfcd92",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "5b9c2b5e-39a4-42e0-afc1-008e1b24906b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "89f0e264-b5dd-4f84-8b33-49429b82ed91"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "89f0e264-b5dd-4f84-8b33-49429b82ed91",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' 
"dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "89f0e264-b5dd-4f84-8b33-49429b82ed91",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "27c805ed-92cf-4680-8319-70d39a03ea8b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "ecc94859-0111-430d-accc-948d3eac049a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "0e4695f5-1296-4e7b-a0fc-52e48de70e63"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "0e4695f5-1296-4e7b-a0fc-52e48de70e63",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:11:55.882 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 
00:11:55.882 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc0]' 00:11:55.882 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc0 00:11:55.882 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:55.882 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc1p0]' 00:11:55.882 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc1p0 00:11:55.882 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:55.882 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc1p1]' 00:11:55.882 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc1p1 00:11:55.882 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:55.882 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p0]' 00:11:55.882 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p0 00:11:55.882 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:55.882 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p1]' 00:11:55.882 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p1 00:11:55.882 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:55.882 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p2]' 00:11:55.882 
10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p2 00:11:55.882 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:55.882 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p3]' 00:11:55.882 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p3 00:11:55.882 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:55.882 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p4]' 00:11:55.882 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p4 00:11:55.882 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:55.882 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p5]' 00:11:55.882 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p5 00:11:55.882 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:55.882 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p6]' 00:11:55.882 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p6 00:11:55.882 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:55.883 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p7]' 00:11:55.883 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p7 00:11:55.883 10:54:02 
blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:55.883 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_TestPT]' 00:11:55.883 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=TestPT 00:11:55.883 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:55.883 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_raid0]' 00:11:55.883 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=raid0 00:11:55.883 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:55.883 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_concat0]' 00:11:55.883 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=concat0 00:11:55.883 10:54:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:11:55.883 10:54:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:11:55.883 10:54:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:55.883 10:54:02 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:11:55.883 ************************************ 00:11:55.883 START TEST bdev_fio_trim 00:11:55.883 ************************************ 00:11:55.883 10:54:02 
blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:11:55.883 10:54:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:11:55.883 10:54:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:11:55.883 10:54:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:55.883 10:54:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:11:55.883 10:54:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:11:55.883 10:54:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:11:55.883 10:54:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:11:55.883 10:54:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:11:55.883 10:54:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:11:55.883 10:54:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:11:55.883 10:54:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:11:56.142 10:54:03 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:56.142 10:54:03 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:56.142 10:54:03 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1347 -- # break 00:11:56.142 10:54:03 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:11:56.142 10:54:03 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:11:56.400 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:56.400 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:56.400 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:56.400 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:56.400 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:56.400 
job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:56.400 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:56.400 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:56.400 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:56.400 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:56.400 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:56.400 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:56.400 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:56.400 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:56.400 fio-3.35 00:11:56.400 Starting 14 threads 00:11:56.659 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:11:56.659 EAL: Requested device 0000:3d:01.0 cannot be used
[... identical qat_pci_device_allocate()/EAL "cannot be used" message pair repeated for devices 0000:3d:01.1 through 0000:3f:02.7 ...]
00:12:08.866 00:12:08.866 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=3525183: Thu Jul 25 10:54:15 2024 00:12:08.866 write: IOPS=132k, BW=516MiB/s (541MB/s)(5158MiB/10001msec); 0 zone resets 00:12:08.866 slat (usec): min=4, max=410,
avg=36.98, stdev= 9.08 00:12:08.866 clat (usec): min=26, max=1085, avg=266.66, stdev=83.68 00:12:08.866 lat (usec): min=33, max=1187, avg=303.63, stdev=85.77 00:12:08.866 clat percentiles (usec): 00:12:08.866 | 50.000th=[ 262], 99.000th=[ 449], 99.900th=[ 523], 99.990th=[ 644], 00:12:08.866 | 99.999th=[ 857] 00:12:08.866 bw ( KiB/s): min=484108, max=658335, per=100.00%, avg=529161.95, stdev=3343.81, samples=266 00:12:08.866 iops : min=121026, max=164583, avg=132288.53, stdev=835.96, samples=266 00:12:08.866 trim: IOPS=132k, BW=516MiB/s (541MB/s)(5158MiB/10001msec); 0 zone resets 00:12:08.866 slat (usec): min=4, max=421, avg=25.78, stdev= 6.51 00:12:08.866 clat (usec): min=7, max=1188, avg=299.28, stdev=89.15 00:12:08.866 lat (usec): min=17, max=1245, avg=325.05, stdev=91.08 00:12:08.866 clat percentiles (usec): 00:12:08.866 | 50.000th=[ 293], 99.000th=[ 494], 99.900th=[ 570], 99.990th=[ 701], 00:12:08.866 | 99.999th=[ 906] 00:12:08.866 bw ( KiB/s): min=484108, max=658351, per=100.00%, avg=529162.37, stdev=3344.05, samples=266 00:12:08.866 iops : min=121026, max=164587, avg=132288.74, stdev=836.03, samples=266 00:12:08.866 lat (usec) : 10=0.01%, 20=0.01%, 50=0.01%, 100=0.60%, 250=38.70% 00:12:08.866 lat (usec) : 500=60.22%, 750=0.47%, 1000=0.01% 00:12:08.866 lat (msec) : 2=0.01% 00:12:08.866 cpu : usr=99.58%, sys=0.04%, ctx=502, majf=0, minf=15673 00:12:08.866 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:08.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:08.866 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:08.866 issued rwts: total=0,1320427,1320430,0 short=0,0,0,0 dropped=0,0,0,0 00:12:08.866 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:08.866 00:12:08.866 Run status group 0 (all jobs): 00:12:08.866 WRITE: bw=516MiB/s (541MB/s), 516MiB/s-516MiB/s (541MB/s-541MB/s), io=5158MiB (5408MB), run=10001-10001msec 00:12:08.866 TRIM: bw=516MiB/s 
(541MB/s), 516MiB/s-516MiB/s (541MB/s-541MB/s), io=5158MiB (5408MB), run=10001-10001msec 00:12:11.397 ----------------------------------------------------- 00:12:11.397 Suppressions used: 00:12:11.397 count bytes template 00:12:11.397 14 129 /usr/src/fio/parse.c 00:12:11.397 1 8 libtcmalloc_minimal.so 00:12:11.397 1 904 libcrypto.so 00:12:11.397 ----------------------------------------------------- 00:12:11.397 00:12:11.397 00:12:11.397 real 0m15.364s 00:12:11.397 user 2m37.622s 00:12:11.397 sys 0m1.344s 00:12:11.397 10:54:18 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:11.397 10:54:18 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:12:11.397 ************************************ 00:12:11.397 END TEST bdev_fio_trim 00:12:11.397 ************************************ 00:12:11.397 10:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:12:11.397 10:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:12:11.397 10:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:12:11.397 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:12:11.397 10:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:12:11.397 00:12:11.397 real 0m31.195s 00:12:11.397 user 5m33.193s 00:12:11.397 sys 0m4.067s 00:12:11.397 10:54:18 blockdev_general.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:11.397 10:54:18 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:12:11.397 ************************************ 00:12:11.397 END TEST bdev_fio 00:12:11.397 ************************************ 00:12:11.397 10:54:18 blockdev_general -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:11.398 10:54:18 blockdev_general -- bdev/blockdev.sh@776 -- # run_test bdev_verify 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:11.398 10:54:18 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:12:11.398 10:54:18 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:11.398 10:54:18 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:11.398 ************************************ 00:12:11.398 START TEST bdev_verify 00:12:11.398 ************************************ 00:12:11.398 10:54:18 blockdev_general.bdev_verify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:11.656 [2024-07-25 10:54:18.568498] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:11.656 [2024-07-25 10:54:18.568613] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3527575 ] 00:12:11.656 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:11.656 EAL: Requested device 0000:3d:01.0 cannot be used
[... identical qat_pci_device_allocate()/EAL "cannot be used" message pair repeated for devices 0000:3d:01.1 through 0000:3f:02.7 ...]
00:12:11.915 [2024-07-25 10:54:18.792779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:12.174 [2024-07-25 10:54:19.077891] reactor.c: 941:reactor_run:
*NOTICE*: Reactor started on core 0 00:12:12.174 [2024-07-25 10:54:19.077898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.742 [2024-07-25 10:54:19.646723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:12.742 [2024-07-25 10:54:19.646795] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:12.742 [2024-07-25 10:54:19.646818] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:12.742 [2024-07-25 10:54:19.654721] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:12.742 [2024-07-25 10:54:19.654765] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:12.742 [2024-07-25 10:54:19.662724] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:12.742 [2024-07-25 10:54:19.662761] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:13.000 [2024-07-25 10:54:19.926509] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:13.000 [2024-07-25 10:54:19.926570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.000 [2024-07-25 10:54:19.926591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042980 00:12:13.000 [2024-07-25 10:54:19.926606] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.000 [2024-07-25 10:54:19.929378] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.000 [2024-07-25 10:54:19.929427] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:13.566 Running I/O for 5 seconds... 
00:12:18.869 00:12:18.869 Latency(us) 00:12:18.869 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:18.869 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:18.869 Verification LBA range: start 0x0 length 0x1000 00:12:18.869 Malloc0 : 5.23 1052.78 4.11 0.00 0.00 121409.34 534.12 239914.19 00:12:18.869 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:18.869 Verification LBA range: start 0x1000 length 0x1000 00:12:18.869 Malloc0 : 5.28 1066.84 4.17 0.00 0.00 114569.66 557.06 186227.10 00:12:18.869 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:18.869 Verification LBA range: start 0x0 length 0x800 00:12:18.869 Malloc1p0 : 5.23 538.40 2.10 0.00 0.00 236791.39 3381.66 234881.02 00:12:18.869 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:18.869 Verification LBA range: start 0x800 length 0x800 00:12:18.869 Malloc1p0 : 5.22 538.98 2.11 0.00 0.00 237166.51 2451.05 239914.19 00:12:18.869 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:18.869 Verification LBA range: start 0x0 length 0x800 00:12:18.869 Malloc1p1 : 5.23 538.17 2.10 0.00 0.00 236101.99 3355.44 231525.58 00:12:18.869 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:18.869 Verification LBA range: start 0x800 length 0x800 00:12:18.869 Malloc1p1 : 5.23 538.72 2.10 0.00 0.00 236574.91 3342.34 233203.30 00:12:18.869 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:18.869 Verification LBA range: start 0x0 length 0x200 00:12:18.869 Malloc2p0 : 5.23 537.95 2.10 0.00 0.00 235448.29 3434.09 229847.86 00:12:18.869 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:18.869 Verification LBA range: start 0x200 length 0x200 00:12:18.869 Malloc2p0 : 5.23 538.49 2.10 0.00 0.00 235897.93 3355.44 229847.86 00:12:18.869 Job: Malloc2p1 (Core Mask 
0x1, workload: verify, depth: 128, IO size: 4096) 00:12:18.869 Verification LBA range: start 0x0 length 0x200 00:12:18.869 Malloc2p1 : 5.24 537.73 2.10 0.00 0.00 234767.53 3381.66 226492.42 00:12:18.869 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:18.869 Verification LBA range: start 0x200 length 0x200 00:12:18.869 Malloc2p1 : 5.23 538.26 2.10 0.00 0.00 235263.36 3407.87 228170.14 00:12:18.869 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:18.869 Verification LBA range: start 0x0 length 0x200 00:12:18.869 Malloc2p2 : 5.24 537.52 2.10 0.00 0.00 234078.99 3512.73 221459.25 00:12:18.869 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:18.869 Verification LBA range: start 0x200 length 0x200 00:12:18.869 Malloc2p2 : 5.23 538.04 2.10 0.00 0.00 234574.68 3381.66 224814.69 00:12:18.869 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:18.869 Verification LBA range: start 0x0 length 0x200 00:12:18.869 Malloc2p3 : 5.24 537.29 2.10 0.00 0.00 233413.08 3381.66 216426.09 00:12:18.869 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:18.869 Verification LBA range: start 0x200 length 0x200 00:12:18.869 Malloc2p3 : 5.24 537.82 2.10 0.00 0.00 233891.97 3486.52 219781.53 00:12:18.869 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:18.869 Verification LBA range: start 0x0 length 0x200 00:12:18.869 Malloc2p4 : 5.24 537.08 2.10 0.00 0.00 232746.44 3460.30 213909.50 00:12:18.869 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:18.869 Verification LBA range: start 0x200 length 0x200 00:12:18.869 Malloc2p4 : 5.24 537.61 2.10 0.00 0.00 233232.49 3407.87 216426.09 00:12:18.869 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:18.869 Verification LBA range: start 0x0 length 0x200 00:12:18.869 Malloc2p5 : 5.25 536.87 2.10 0.00 
0.00 232063.79 3486.52 210554.06 00:12:18.869 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:18.869 Verification LBA range: start 0x200 length 0x200 00:12:18.869 Malloc2p5 : 5.24 537.40 2.10 0.00 0.00 232560.04 3407.87 212231.78 00:12:18.869 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:18.869 Verification LBA range: start 0x0 length 0x200 00:12:18.869 Malloc2p6 : 5.25 536.60 2.10 0.00 0.00 231385.28 3512.73 206359.76 00:12:18.869 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:18.869 Verification LBA range: start 0x200 length 0x200 00:12:18.870 Malloc2p6 : 5.24 537.18 2.10 0.00 0.00 231900.83 3460.30 208876.34 00:12:18.870 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:18.870 Verification LBA range: start 0x0 length 0x200 00:12:18.870 Malloc2p7 : 5.25 536.15 2.09 0.00 0.00 230839.93 2280.65 203004.31 00:12:18.870 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:18.870 Verification LBA range: start 0x200 length 0x200 00:12:18.870 Malloc2p7 : 5.24 536.97 2.10 0.00 0.00 231186.16 3512.73 205520.90 00:12:18.870 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:18.870 Verification LBA range: start 0x0 length 0x1000 00:12:18.870 TestPT : 5.26 535.70 2.09 0.00 0.00 230487.63 3158.84 200487.73 00:12:18.870 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:18.870 Verification LBA range: start 0x1000 length 0x1000 00:12:18.870 TestPT : 5.28 514.49 2.01 0.00 0.00 239487.61 11377.05 286890.39 00:12:18.870 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:18.870 Verification LBA range: start 0x0 length 0x2000 00:12:18.870 raid0 : 5.26 535.33 2.09 0.00 0.00 230030.10 3145.73 195454.57 00:12:18.870 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:18.870 Verification LBA range: start 0x2000 length 
0x2000 00:12:18.870 raid0 : 5.25 536.56 2.10 0.00 0.00 229967.23 3211.26 186227.10 00:12:18.870 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:18.870 Verification LBA range: start 0x0 length 0x2000 00:12:18.870 concat0 : 5.26 535.11 2.09 0.00 0.00 229505.64 2857.37 194615.71 00:12:18.870 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:18.870 Verification LBA range: start 0x2000 length 0x2000 00:12:18.870 concat0 : 5.25 536.12 2.09 0.00 0.00 229520.08 3132.62 184549.38 00:12:18.870 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:18.870 Verification LBA range: start 0x0 length 0x1000 00:12:18.870 raid1 : 5.26 534.88 2.09 0.00 0.00 229073.44 3276.80 192099.12 00:12:18.870 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:18.870 Verification LBA range: start 0x1000 length 0x1000 00:12:18.870 raid1 : 5.26 535.66 2.09 0.00 0.00 229136.06 3303.01 182032.79 00:12:18.870 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:18.870 Verification LBA range: start 0x0 length 0x4e2 00:12:18.870 AIO0 : 5.28 557.80 2.18 0.00 0.00 219133.35 1140.33 191260.26 00:12:18.870 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:18.870 Verification LBA range: start 0x4e2 length 0x4e2 00:12:18.870 AIO0 : 5.28 557.49 2.18 0.00 0.00 219700.40 1566.31 187065.96 00:12:18.870 =================================================================================================================== 00:12:18.870 Total : 18252.01 71.30 0.00 0.00 218871.12 534.12 286890.39 00:12:22.156 00:12:22.156 real 0m10.493s 00:12:22.156 user 0m19.101s 00:12:22.156 sys 0m0.540s 00:12:22.156 10:54:28 blockdev_general.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:22.156 10:54:28 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:12:22.156 ************************************ 00:12:22.156 END TEST 
bdev_verify 00:12:22.156 ************************************ 00:12:22.156 10:54:29 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:22.156 10:54:29 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:12:22.156 10:54:29 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:22.156 10:54:29 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:22.156 ************************************ 00:12:22.156 START TEST bdev_verify_big_io 00:12:22.156 ************************************ 00:12:22.156 10:54:29 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:22.156 [2024-07-25 10:54:29.141674] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:12:22.156 [2024-07-25 10:54:29.141787] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3529303 ] 00:12:22.156 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:22.156 EAL: Requested device 0000:3d:01.0 cannot be used
[... identical qat_pci_device_allocate()/EAL "cannot be used" message pair repeated for devices 0000:3d:01.1 through 0000:3f:02.7 ...]
00:12:22.415 [2024-07-25 10:54:29.364352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:22.673 [2024-07-25 10:54:29.652684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.673 [2024-07-25 10:54:29.652684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.240 [2024-07-25 10:54:30.243025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:23.240 [2024-07-25 10:54:30.243100] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:23.240 [2024-07-25 10:54:30.243119] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:23.240 [2024-07-25 10:54:30.251015] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:23.240 [2024-07-25 10:54:30.251060] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:23.240 [2024-07-25 10:54:30.259020] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:23.240 [2024-07-25 10:54:30.259058] bdev.c:8190:bdev_open_ext:
*NOTICE*: Currently unable to find bdev with name: Malloc2
00:12:23.498 [2024-07-25 10:54:30.512050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:12:23.498 [2024-07-25 10:54:30.512110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:23.498 [2024-07-25 10:54:30.512131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042980
00:12:23.498 [2024-07-25 10:54:30.512150] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:23.499 [2024-07-25 10:54:30.514898] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:23.499 [2024-07-25 10:54:30.514931] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:12:24.066 [2024-07-25 10:54:31.025847] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32
00:12:24.066 [2024-07-25 10:54:31.031101] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32
00:12:24.066 [2024-07-25 10:54:31.036974] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32
00:12:24.066 [2024-07-25 10:54:31.042236] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32
00:12:24.066 [2024-07-25 10:54:31.048011] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32
00:12:24.066 [2024-07-25 10:54:31.053737] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32
00:12:24.066 [2024-07-25 10:54:31.059055] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32
00:12:24.066 [2024-07-25 10:54:31.064775] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32
00:12:24.066 [2024-07-25 10:54:31.069992] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32
00:12:24.066 [2024-07-25 10:54:31.075777] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32
00:12:24.066 [2024-07-25 10:54:31.081036] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32
00:12:24.066 [2024-07-25 10:54:31.086690] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32
00:12:24.066 [2024-07-25 10:54:31.091961] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32
00:12:24.066 [2024-07-25 10:54:31.097751] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32
00:12:24.066 [2024-07-25 10:54:31.102887] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32
00:12:24.066 [2024-07-25 10:54:31.108597] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32
00:12:24.325 [2024-07-25 10:54:31.238694] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78
00:12:24.325 [2024-07-25 10:54:31.249219] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78).
Queue depth is limited to 78
00:12:24.325 Running I/O for 5 seconds...
00:12:32.444
00:12:32.444 Latency(us)
00:12:32.444 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:32.444 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:32.444 Verification LBA range: start 0x0 length 0x100
00:12:32.444 Malloc0 : 5.75 155.76 9.73 0.00 0.00 806030.98 871.63 2228014.28
00:12:32.444 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:32.444 Verification LBA range: start 0x100 length 0x100
00:12:32.444 Malloc0 : 5.91 151.72 9.48 0.00 0.00 827731.61 865.08 2550136.83
00:12:32.444 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:32.444 Verification LBA range: start 0x0 length 0x80
00:12:32.444 Malloc1p0 : 6.18 75.08 4.69 0.00 0.00 1563536.86 3001.55 2657511.01
00:12:32.444 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:32.444 Verification LBA range: start 0x80 length 0x80
00:12:32.444 Malloc1p0 : 6.49 54.24 3.39 0.00 0.00 2151332.38 2555.90 3516504.47
00:12:32.444 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:32.444 Verification LBA range: start 0x0 length 0x80
00:12:32.444 Malloc1p1 : 6.55 36.65 2.29 0.00 0.00 3068315.59 1900.54 5180804.30
00:12:32.444 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:32.444 Verification LBA range: start 0x80 length 0x80
00:12:32.444 Malloc1p1 : 6.80 37.66 2.35 0.00 0.00 2950662.89 1887.44 5019743.03
00:12:32.444 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:32.444 Verification LBA range: start 0x0 length 0x20
00:12:32.444 Malloc2p0 : 6.18 25.88 1.62 0.00 0.00 1097962.61 645.53 1905891.74
00:12:32.444 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:32.444 Verification LBA range: start 0x20 length 0x20
00:12:32.444 Malloc2p0 : 6.17 25.91 1.62 0.00 0.00 1082162.94 629.15 1657588.94
00:12:32.444 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:32.444 Verification LBA range: start 0x0 length 0x20
00:12:32.444 Malloc2p1 : 6.18 25.87 1.62 0.00 0.00 1088832.95 645.53 1879048.19
00:12:32.444 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:32.444 Verification LBA range: start 0x20 length 0x20
00:12:32.444 Malloc2p1 : 6.25 28.14 1.76 0.00 0.00 1006065.36 638.98 1630745.40
00:12:32.444 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:32.444 Verification LBA range: start 0x0 length 0x20
00:12:32.444 Malloc2p2 : 6.19 25.87 1.62 0.00 0.00 1079106.24 642.25 1852204.65
00:12:32.444 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:32.444 Verification LBA range: start 0x20 length 0x20
00:12:32.444 Malloc2p2 : 6.26 28.14 1.76 0.00 0.00 998122.78 642.25 1610612.74
00:12:32.444 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:32.444 Verification LBA range: start 0x0 length 0x20
00:12:32.444 Malloc2p3 : 6.19 25.86 1.62 0.00 0.00 1069685.52 642.25 1825361.10
00:12:32.444 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:32.444 Verification LBA range: start 0x20 length 0x20
00:12:32.444 Malloc2p3 : 6.26 28.13 1.76 0.00 0.00 988947.04 635.70 1583769.19
00:12:32.444 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:32.444 Verification LBA range: start 0x0 length 0x20
00:12:32.444 Malloc2p4 : 6.19 25.85 1.62 0.00 0.00 1060556.20 642.25 1798517.56
00:12:32.444 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:32.444 Verification LBA range: start 0x20 length 0x20
00:12:32.445 Malloc2p4 : 6.26 28.12 1.76 0.00 0.00 980458.90 635.70 1563636.53
00:12:32.445 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:32.445 Verification LBA range: start 0x0 length 0x20
00:12:32.445 Malloc2p5 : 6.19 25.85 1.62 0.00 0.00 1050544.92 642.25 1771674.01
00:12:32.445 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:32.445 Verification LBA range: start 0x20 length 0x20
00:12:32.445 Malloc2p5 : 6.26 28.12 1.76 0.00 0.00 971387.62 645.53 1536792.99
00:12:32.445 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:32.445 Verification LBA range: start 0x0 length 0x20
00:12:32.445 Malloc2p6 : 6.19 25.84 1.62 0.00 0.00 1041272.88 642.25 1758252.24
00:12:32.445 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:32.445 Verification LBA range: start 0x20 length 0x20
00:12:32.445 Malloc2p6 : 6.26 28.11 1.76 0.00 0.00 963338.17 648.81 1516660.33
00:12:32.445 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:12:32.445 Verification LBA range: start 0x0 length 0x20
00:12:32.445 Malloc2p7 : 6.26 28.10 1.76 0.00 0.00 956807.90 638.98 1731408.69
00:12:32.445 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:12:32.445 Verification LBA range: start 0x20 length 0x20
00:12:32.445 Malloc2p7 : 6.26 28.11 1.76 0.00 0.00 954765.38 632.42 1489816.78
00:12:32.445 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:32.445 Verification LBA range: start 0x0 length 0x100
00:12:32.445 TestPT : 6.79 38.27 2.39 0.00 0.00 2648911.08 88919.24 3946001.20
00:12:32.445 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:32.445 Verification LBA range: start 0x100 length 0x100
00:12:32.445 TestPT : 6.55 36.63 2.29 0.00 0.00 2823186.90 81369.50 3731252.84
00:12:32.445 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:32.445 Verification LBA range: start 0x0 length 0x200
00:12:32.445 raid0 : 6.86 42.00 2.63 0.00 0.00 2342100.96 1592.52 4617089.84
00:12:32.445 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:32.445 Verification LBA range: start 0x200 length 0x200
00:12:32.445 raid0 : 6.55 48.82 3.05 0.00 0.00 2076107.07 1585.97 4456028.57
00:12:32.445 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:32.445 Verification LBA range: start 0x0 length 0x200
00:12:32.445 concat0 : 6.86 48.99 3.06 0.00 0.00 1981098.62 1572.86 4456028.57
00:12:32.445 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:32.445 Verification LBA range: start 0x200 length 0x200
00:12:32.445 concat0 : 6.67 52.75 3.30 0.00 0.00 1851072.69 1572.86 4294967.30
00:12:32.445 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:12:32.445 Verification LBA range: start 0x0 length 0x100
00:12:32.445 raid1 : 6.86 62.96 3.94 0.00 0.00 1527617.23 2044.72 4268123.75
00:12:32.445 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:12:32.445 Verification LBA range: start 0x100 length 0x100
00:12:32.445 raid1 : 6.86 60.63 3.79 0.00 0.00 1584303.20 2018.51 4133906.02
00:12:32.445 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536)
00:12:32.445 Verification LBA range: start 0x0 length 0x4e
00:12:32.445 AIO0 : 6.87 65.67 4.10 0.00 0.00 871959.00 760.22 2711198.11
00:12:32.445 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536)
00:12:32.445 Verification LBA range: start 0x4e length 0x4e
00:12:32.445 AIO0 : 6.87 59.99 3.75 0.00 0.00 953212.10 760.22 2362232.01
00:12:32.445 ===================================================================================================================
00:12:32.445 Total : 1459.74 91.23 0.00 0.00 1431686.56 629.15 5180804.30
00:12:34.983
00:12:34.983 real 0m12.744s
00:12:34.983 user 0m23.516s
00:12:34.983 sys 0m0.591s
00:12:34.983 10:54:41 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:34.983 10:54:41 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:12:34.983
************************************ 00:12:34.983 END TEST bdev_verify_big_io 00:12:34.983 ************************************ 00:12:34.983 10:54:41 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:34.983 10:54:41 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:12:34.983 10:54:41 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:34.983 10:54:41 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:34.983 ************************************ 00:12:34.983 START TEST bdev_write_zeroes 00:12:34.983 ************************************ 00:12:34.983 10:54:41 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:34.983 [2024-07-25 10:54:41.978011] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:12:34.983 [2024-07-25 10:54:41.978130] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3531518 ] 00:12:35.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:35.242 EAL: Requested device 0000:3d:01.0 cannot be used 00:12:35.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:35.242 EAL: Requested device 0000:3d:01.1 cannot be used 00:12:35.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:35.242 EAL: Requested device 0000:3d:01.2 cannot be used 00:12:35.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:35.242 EAL: Requested device 0000:3d:01.3 cannot be used 00:12:35.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:35.242 EAL: Requested device 0000:3d:01.4 cannot be used 00:12:35.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:35.242 EAL: Requested device 0000:3d:01.5 cannot be used 00:12:35.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:35.242 EAL: Requested device 0000:3d:01.6 cannot be used 00:12:35.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:35.242 EAL: Requested device 0000:3d:01.7 cannot be used 00:12:35.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:35.242 EAL: Requested device 0000:3d:02.0 cannot be used 00:12:35.242 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:35.243 EAL: Requested device 0000:3d:02.1 cannot be used 00:12:35.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:35.243 EAL: Requested device 0000:3d:02.2 cannot be used 00:12:35.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:35.243 EAL: Requested device 0000:3d:02.3 cannot be used 
00:12:35.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:35.243 EAL: Requested device 0000:3d:02.4 cannot be used 00:12:35.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:35.243 EAL: Requested device 0000:3d:02.5 cannot be used 00:12:35.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:35.243 EAL: Requested device 0000:3d:02.6 cannot be used 00:12:35.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:35.243 EAL: Requested device 0000:3d:02.7 cannot be used 00:12:35.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:35.243 EAL: Requested device 0000:3f:01.0 cannot be used 00:12:35.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:35.243 EAL: Requested device 0000:3f:01.1 cannot be used 00:12:35.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:35.243 EAL: Requested device 0000:3f:01.2 cannot be used 00:12:35.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:35.243 EAL: Requested device 0000:3f:01.3 cannot be used 00:12:35.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:35.243 EAL: Requested device 0000:3f:01.4 cannot be used 00:12:35.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:35.243 EAL: Requested device 0000:3f:01.5 cannot be used 00:12:35.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:35.243 EAL: Requested device 0000:3f:01.6 cannot be used 00:12:35.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:35.243 EAL: Requested device 0000:3f:01.7 cannot be used 00:12:35.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:35.243 EAL: Requested device 0000:3f:02.0 cannot be used 00:12:35.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:35.243 EAL: Requested device 0000:3f:02.1 cannot be used 00:12:35.243 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:35.243 EAL: Requested device 0000:3f:02.2 cannot be used 00:12:35.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:35.243 EAL: Requested device 0000:3f:02.3 cannot be used 00:12:35.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:35.243 EAL: Requested device 0000:3f:02.4 cannot be used 00:12:35.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:35.243 EAL: Requested device 0000:3f:02.5 cannot be used 00:12:35.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:35.243 EAL: Requested device 0000:3f:02.6 cannot be used 00:12:35.243 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:12:35.243 EAL: Requested device 0000:3f:02.7 cannot be used 00:12:35.243 [2024-07-25 10:54:42.204270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.502 [2024-07-25 10:54:42.469663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.069 [2024-07-25 10:54:43.049896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:36.070 [2024-07-25 10:54:43.049974] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:36.070 [2024-07-25 10:54:43.049994] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:36.070 [2024-07-25 10:54:43.057874] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:36.070 [2024-07-25 10:54:43.057916] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:36.070 [2024-07-25 10:54:43.065876] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:36.070 [2024-07-25 10:54:43.065914] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:36.328 [2024-07-25 10:54:43.313438] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:12:36.328 [2024-07-25 10:54:43.313499] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:36.328 [2024-07-25 10:54:43.313520] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042680
00:12:36.328 [2024-07-25 10:54:43.313535] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:36.328 [2024-07-25 10:54:43.316263] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:36.328 [2024-07-25 10:54:43.316298] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:12:36.895 Running I/O for 1 seconds...
00:12:37.831
00:12:37.831 Latency(us)
00:12:37.831 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:37.831 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:37.831 Malloc0 : 1.05 5010.12 19.57 0.00 0.00 25533.20 642.25 42781.90
00:12:37.831 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:37.831 Malloc1p0 : 1.05 5003.13 19.54 0.00 0.00 25525.69 884.74 41943.04
00:12:37.831 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:37.831 Malloc1p1 : 1.05 4996.17 19.52 0.00 0.00 25504.58 884.74 41104.18
00:12:37.831 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:37.831 Malloc2p0 : 1.05 4989.28 19.49 0.00 0.00 25484.37 891.29 40265.32
00:12:37.831 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:37.831 Malloc2p1 : 1.05 4982.38 19.46 0.00 0.00 25456.52 884.74 39216.74
00:12:37.831 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:37.831 Malloc2p2 : 1.05 4975.53 19.44 0.00 0.00 25436.24 884.74 38377.88
00:12:37.831 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:37.831 Malloc2p3 : 1.06 4968.68 19.41 0.00 0.00 25420.32 884.74 37539.02
00:12:37.831 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:37.831 Malloc2p4 : 1.06 4961.87 19.38 0.00 0.00 25402.28 884.74 36700.16
00:12:37.831 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:37.831 Malloc2p5 : 1.06 4955.05 19.36 0.00 0.00 25379.11 878.18 35651.58
00:12:37.831 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:37.831 Malloc2p6 : 1.06 4948.27 19.33 0.00 0.00 25357.01 884.74 34812.72
00:12:37.831 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:37.831 Malloc2p7 : 1.06 4941.52 19.30 0.00 0.00 25336.52 884.74 33973.86
00:12:37.831 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:37.831 TestPT : 1.06 4934.75 19.28 0.00 0.00 25316.05 917.50 33135.00
00:12:37.831 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:37.831 raid0 : 1.07 4926.83 19.25 0.00 0.00 25279.68 1671.17 31247.56
00:12:37.831 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:37.831 concat0 : 1.07 4919.08 19.22 0.00 0.00 25224.76 1664.61 29569.84
00:12:37.831 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:37.831 raid1 : 1.07 4909.33 19.18 0.00 0.00 25165.37 2673.87 26843.55
00:12:37.831 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:12:37.831 AIO0 : 1.07 4903.55 19.15 0.00 0.00 25077.51 1015.81 26109.54
00:12:37.831 ===================================================================================================================
00:12:37.831 Total : 79325.55 309.87 0.00 0.00 25368.70 642.25 42781.90
00:12:41.161
00:12:41.161 real 0m6.121s
00:12:41.161 user 0m5.482s
00:12:41.161 sys 0m0.535s
00:12:41.161 10:54:47 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1126 -- #
xtrace_disable 00:12:41.161 10:54:47 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:12:41.161 ************************************ 00:12:41.161 END TEST bdev_write_zeroes 00:12:41.161 ************************************ 00:12:41.161 10:54:48 blockdev_general -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:41.162 10:54:48 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:12:41.162 10:54:48 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:41.162 10:54:48 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:41.162 ************************************ 00:12:41.162 START TEST bdev_json_nonenclosed 00:12:41.162 ************************************ 00:12:41.162 10:54:48 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:41.162 [2024-07-25 10:54:48.169284] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:12:41.162 [2024-07-25 10:54:48.169398] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3532596 ]
[2024-07-25 10:54:48.395457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:41.679 [2024-07-25 10:54:48.665438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:12:41.679 [2024-07-25 10:54:48.665533] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:12:41.679 [2024-07-25 10:54:48.665560] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:41.679 [2024-07-25 10:54:48.665576] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:42.247 00:12:42.247 real 0m1.161s 00:12:42.247 user 0m0.900s 00:12:42.247 sys 0m0.254s 00:12:42.247 10:54:49 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:42.247 10:54:49 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:12:42.247 ************************************ 00:12:42.247 END TEST bdev_json_nonenclosed 00:12:42.247 ************************************ 00:12:42.247 10:54:49 blockdev_general -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:42.247 10:54:49 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:12:42.247 10:54:49 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:42.247 10:54:49 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:42.247 ************************************ 00:12:42.247 START TEST bdev_json_nonarray 00:12:42.247 ************************************ 00:12:42.247 10:54:49 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:42.506 [2024-07-25 10:54:49.407210] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:12:42.506 [2024-07-25 10:54:49.407324] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3532779 ]
00:12:42.765 [2024-07-25 10:54:49.635086] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:43.024 [2024-07-25 10:54:49.903491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:12:43.024 [2024-07-25 10:54:49.903581] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:12:43.024 [2024-07-25 10:54:49.903609] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:12:43.024 [2024-07-25 10:54:49.903624] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:12:43.591
00:12:43.591 real 0m1.177s
00:12:43.591 user 0m0.904s
00:12:43.591 sys 0m0.265s
00:12:43.591 10:54:50 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable
00:12:43.591 10:54:50 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:12:43.591 ************************************
00:12:43.591 END TEST bdev_json_nonarray
00:12:43.591 ************************************
00:12:43.591 10:54:50 blockdev_general -- bdev/blockdev.sh@786 -- # [[ bdev == bdev ]]
00:12:43.591 10:54:50 blockdev_general -- bdev/blockdev.sh@787 -- # run_test bdev_qos qos_test_suite ''
00:12:43.591 10:54:50 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:12:43.591 10:54:50 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable
00:12:43.591 10:54:50 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:12:43.591 ************************************
00:12:43.591 START TEST bdev_qos
00:12:43.591 ************************************
00:12:43.591 10:54:50 blockdev_general.bdev_qos -- common/autotest_common.sh@1125 -- # qos_test_suite ''
00:12:43.591 10:54:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # QOS_PID=3533057
00:12:43.591 10:54:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # echo 'Process qos testing pid: 3533057'
00:12:43.591 Process qos testing pid: 3533057
00:12:43.591 10:54:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT
00:12:43.591 10:54:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # waitforlisten 3533057
00:12:43.591 10:54:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@444 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 ''
00:12:43.591 10:54:50 blockdev_general.bdev_qos -- common/autotest_common.sh@831 -- # '[' -z 3533057 ']'
00:12:43.591 10:54:50 blockdev_general.bdev_qos -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:43.591 10:54:50 blockdev_general.bdev_qos -- common/autotest_common.sh@836 -- # local max_retries=100
00:12:43.591 10:54:50 blockdev_general.bdev_qos -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:43.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:43.591 10:54:50 blockdev_general.bdev_qos -- common/autotest_common.sh@840 -- # xtrace_disable
00:12:43.591 10:54:50 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:12:43.591 [2024-07-25 10:54:50.650641] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:12:43.591 [2024-07-25 10:54:50.650761] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3533057 ]
00:12:43.850 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:12:43.850 EAL: Requested device 0000:3d:01.0 cannot be used
[the preceding two messages repeat for each remaining QAT VF, 0000:3d:01.1 through 0000:3f:02.7]
00:12:43.851 [2024-07-25 10:54:50.863471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:44.109 [2024-07-25 10:54:51.140087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:12:44.678 10:54:51 blockdev_general.bdev_qos -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:12:44.678 10:54:51 blockdev_general.bdev_qos -- common/autotest_common.sh@864 -- # return 0
00:12:44.678 10:54:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@450 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512
00:12:44.678 10:54:51 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:44.678 10:54:51 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:12:44.938 Malloc_0
00:12:44.938 10:54:51 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:44.938 10:54:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # waitforbdev Malloc_0
00:12:44.938 10:54:51 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local bdev_name=Malloc_0
00:12:44.938 10:54:51 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:12:44.938 10:54:51 blockdev_general.bdev_qos -- common/autotest_common.sh@901 -- # local i 00:12:44.938 10:54:51 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:44.938 10:54:51 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:44.938 10:54:51 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:44.938 10:54:51 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.938 10:54:51 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:44.938 10:54:52 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.938 10:54:52 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:12:44.938 10:54:52 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.938 10:54:52 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:44.938 [ 00:12:44.938 { 00:12:44.938 "name": "Malloc_0", 00:12:44.938 "aliases": [ 00:12:44.938 "1f5b30e8-515e-45d1-acb9-e86352997174" 00:12:44.938 ], 00:12:44.938 "product_name": "Malloc disk", 00:12:44.938 "block_size": 512, 00:12:44.938 "num_blocks": 262144, 00:12:44.938 "uuid": "1f5b30e8-515e-45d1-acb9-e86352997174", 00:12:44.938 "assigned_rate_limits": { 00:12:44.938 "rw_ios_per_sec": 0, 00:12:44.938 "rw_mbytes_per_sec": 0, 00:12:44.938 "r_mbytes_per_sec": 0, 00:12:44.938 "w_mbytes_per_sec": 0 00:12:44.938 }, 00:12:44.938 "claimed": false, 00:12:44.938 "zoned": false, 00:12:44.938 "supported_io_types": { 00:12:44.938 "read": true, 00:12:44.938 "write": true, 00:12:44.938 "unmap": true, 00:12:44.938 "flush": true, 00:12:44.938 "reset": true, 00:12:44.938 "nvme_admin": false, 00:12:44.938 "nvme_io": false, 00:12:44.938 "nvme_io_md": false, 00:12:44.938 "write_zeroes": true, 00:12:44.938 "zcopy": true, 00:12:44.938 "get_zone_info": false, 00:12:44.938 
"zone_management": false, 00:12:44.938 "zone_append": false, 00:12:44.938 "compare": false, 00:12:44.938 "compare_and_write": false, 00:12:44.938 "abort": true, 00:12:44.938 "seek_hole": false, 00:12:44.938 "seek_data": false, 00:12:44.938 "copy": true, 00:12:44.938 "nvme_iov_md": false 00:12:44.938 }, 00:12:44.938 "memory_domains": [ 00:12:44.938 { 00:12:44.938 "dma_device_id": "system", 00:12:44.938 "dma_device_type": 1 00:12:44.938 }, 00:12:44.938 { 00:12:44.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.938 "dma_device_type": 2 00:12:44.938 } 00:12:44.938 ], 00:12:44.938 "driver_specific": {} 00:12:44.938 } 00:12:44.938 ] 00:12:44.938 10:54:52 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.938 10:54:52 blockdev_general.bdev_qos -- common/autotest_common.sh@907 -- # return 0 00:12:44.938 10:54:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # rpc_cmd bdev_null_create Null_1 128 512 00:12:44.938 10:54:52 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.938 10:54:52 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:44.938 Null_1 00:12:44.938 10:54:52 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.938 10:54:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # waitforbdev Null_1 00:12:44.938 10:54:52 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local bdev_name=Null_1 00:12:44.938 10:54:52 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:44.938 10:54:52 blockdev_general.bdev_qos -- common/autotest_common.sh@901 -- # local i 00:12:44.938 10:54:52 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:44.938 10:54:52 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:44.938 10:54:52 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 
00:12:44.938 10:54:52 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.938 10:54:52 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:44.938 10:54:52 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.938 10:54:52 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:12:44.938 10:54:52 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.938 10:54:52 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:44.938 [ 00:12:44.938 { 00:12:44.938 "name": "Null_1", 00:12:44.938 "aliases": [ 00:12:44.938 "800d56c6-ce42-4ce4-9567-aa28c48cdaed" 00:12:44.938 ], 00:12:44.938 "product_name": "Null disk", 00:12:44.938 "block_size": 512, 00:12:44.938 "num_blocks": 262144, 00:12:44.938 "uuid": "800d56c6-ce42-4ce4-9567-aa28c48cdaed", 00:12:44.938 "assigned_rate_limits": { 00:12:44.938 "rw_ios_per_sec": 0, 00:12:44.938 "rw_mbytes_per_sec": 0, 00:12:44.938 "r_mbytes_per_sec": 0, 00:12:44.938 "w_mbytes_per_sec": 0 00:12:44.938 }, 00:12:44.938 "claimed": false, 00:12:44.938 "zoned": false, 00:12:44.938 "supported_io_types": { 00:12:44.938 "read": true, 00:12:44.938 "write": true, 00:12:44.938 "unmap": false, 00:12:44.938 "flush": false, 00:12:44.938 "reset": true, 00:12:44.938 "nvme_admin": false, 00:12:44.938 "nvme_io": false, 00:12:44.938 "nvme_io_md": false, 00:12:44.938 "write_zeroes": true, 00:12:44.938 "zcopy": false, 00:12:44.938 "get_zone_info": false, 00:12:44.938 "zone_management": false, 00:12:44.938 "zone_append": false, 00:12:44.938 "compare": false, 00:12:44.938 "compare_and_write": false, 00:12:44.938 "abort": true, 00:12:44.938 "seek_hole": false, 00:12:44.938 "seek_data": false, 00:12:44.938 "copy": false, 00:12:44.938 "nvme_iov_md": false 00:12:44.938 }, 00:12:44.938 "driver_specific": {} 00:12:44.938 } 00:12:44.938 ] 00:12:44.938 10:54:52 
blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.938 10:54:52 blockdev_general.bdev_qos -- common/autotest_common.sh@907 -- # return 0 00:12:44.938 10:54:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # qos_function_test 00:12:44.938 10:54:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@409 -- # local qos_lower_iops_limit=1000 00:12:44.938 10:54:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_bw_limit=2 00:12:44.938 10:54:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local io_result=0 00:12:44.938 10:54:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local iops_limit=0 00:12:44.938 10:54:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@455 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:44.938 10:54:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local bw_limit=0 00:12:44.938 10:54:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # get_io_result IOPS Malloc_0 00:12:44.938 10:54:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=IOPS 00:12:44.938 10:54:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:12:44.938 10:54:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result 00:12:44.938 10:54:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:44.938 10:54:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:12:44.938 10:54:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1 00:12:45.197 Running I/O for 60 seconds... 
00:12:50.467 10:54:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 61321.65 245286.62 0.00 0.00 246784.00 0.00 0.00 ' 00:12:50.467 10:54:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']' 00:12:50.467 10:54:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # awk '{print $2}' 00:12:50.467 10:54:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # iostat_result=61321.65 00:12:50.467 10:54:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 61321 00:12:50.467 10:54:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # io_result=61321 00:12:50.467 10:54:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@417 -- # iops_limit=15000 00:12:50.467 10:54:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # '[' 15000 -gt 1000 ']' 00:12:50.467 10:54:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@421 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 15000 Malloc_0 00:12:50.467 10:54:57 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.467 10:54:57 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:50.467 10:54:57 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.467 10:54:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # run_test bdev_qos_iops run_qos_test 15000 IOPS Malloc_0 00:12:50.468 10:54:57 blockdev_general.bdev_qos -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:50.468 10:54:57 blockdev_general.bdev_qos -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:50.468 10:54:57 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:50.468 ************************************ 00:12:50.468 START TEST bdev_qos_iops 00:12:50.468 ************************************ 00:12:50.468 10:54:57 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1125 -- # run_qos_test 15000 IOPS Malloc_0 00:12:50.468 10:54:57 
blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@388 -- # local qos_limit=15000 00:12:50.468 10:54:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_result=0 00:12:50.468 10:54:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # get_io_result IOPS Malloc_0 00:12:50.468 10:54:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@374 -- # local limit_type=IOPS 00:12:50.468 10:54:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:12:50.468 10:54:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local iostat_result 00:12:50.468 10:54:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:50.468 10:54:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:12:50.468 10:54:57 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # tail -1 00:12:55.737 10:55:02 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 15001.06 60004.25 0.00 0.00 61200.00 0.00 0.00 ' 00:12:55.737 10:55:02 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']' 00:12:55.737 10:55:02 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # awk '{print $2}' 00:12:55.737 10:55:02 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # iostat_result=15001.06 00:12:55.737 10:55:02 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@384 -- # echo 15001 00:12:55.737 10:55:02 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # qos_result=15001 00:12:55.737 10:55:02 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # '[' IOPS = BANDWIDTH ']' 00:12:55.737 10:55:02 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@395 -- # lower_limit=13500 00:12:55.737 10:55:02 
blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # upper_limit=16500 00:12:55.737 10:55:02 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 15001 -lt 13500 ']' 00:12:55.737 10:55:02 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 15001 -gt 16500 ']' 00:12:55.737 00:12:55.737 real 0m5.254s 00:12:55.737 user 0m0.118s 00:12:55.737 sys 0m0.039s 00:12:55.737 10:55:02 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:55.737 10:55:02 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x 00:12:55.737 ************************************ 00:12:55.737 END TEST bdev_qos_iops 00:12:55.737 ************************************ 00:12:55.737 10:55:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # get_io_result BANDWIDTH Null_1 00:12:55.737 10:55:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:12:55.737 10:55:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1 00:12:55.737 10:55:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result 00:12:55.737 10:55:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:55.737 10:55:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Null_1 00:12:55.737 10:55:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1 00:13:01.007 10:55:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Null_1 21106.65 84426.59 0.00 0.00 86016.00 0.00 0.00 ' 00:13:01.007 10:55:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:13:01.007 10:55:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:01.007 10:55:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:13:01.007 10:55:07 
blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # iostat_result=86016.00 00:13:01.007 10:55:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 86016 00:13:01.007 10:55:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # bw_limit=86016 00:13:01.007 10:55:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=8 00:13:01.007 10:55:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # '[' 8 -lt 2 ']' 00:13:01.007 10:55:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@431 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 8 Null_1 00:13:01.007 10:55:07 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.007 10:55:07 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:01.007 10:55:07 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.007 10:55:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # run_test bdev_qos_bw run_qos_test 8 BANDWIDTH Null_1 00:13:01.007 10:55:07 blockdev_general.bdev_qos -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:01.007 10:55:07 blockdev_general.bdev_qos -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:01.007 10:55:07 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:01.007 ************************************ 00:13:01.007 START TEST bdev_qos_bw 00:13:01.007 ************************************ 00:13:01.007 10:55:07 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1125 -- # run_qos_test 8 BANDWIDTH Null_1 00:13:01.007 10:55:07 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@388 -- # local qos_limit=8 00:13:01.007 10:55:07 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_result=0 00:13:01.007 10:55:07 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # get_io_result BANDWIDTH Null_1 00:13:01.007 10:55:07 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@374 -- # local 
limit_type=BANDWIDTH 00:13:01.007 10:55:07 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1 00:13:01.007 10:55:07 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local iostat_result 00:13:01.007 10:55:07 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:01.007 10:55:07 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # grep Null_1 00:13:01.007 10:55:07 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # tail -1 00:13:06.319 10:55:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # iostat_result='Null_1 2048.05 8192.19 0.00 0.00 8320.00 0.00 0.00 ' 00:13:06.319 10:55:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:13:06.319 10:55:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:06.319 10:55:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:13:06.319 10:55:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # iostat_result=8320.00 00:13:06.319 10:55:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@384 -- # echo 8320 00:13:06.319 10:55:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # qos_result=8320 00:13:06.319 10:55:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:06.319 10:55:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # qos_limit=8192 00:13:06.319 10:55:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@395 -- # lower_limit=7372 00:13:06.319 10:55:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # upper_limit=9011 00:13:06.319 10:55:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@399 -- # '[' 8320 -lt 7372 ']' 00:13:06.319 10:55:13 
blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@399 -- # '[' 8320 -gt 9011 ']' 00:13:06.319 00:13:06.319 real 0m5.260s 00:13:06.319 user 0m0.117s 00:13:06.319 sys 0m0.036s 00:13:06.319 10:55:13 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:06.319 10:55:13 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x 00:13:06.319 ************************************ 00:13:06.319 END TEST bdev_qos_bw 00:13:06.319 ************************************ 00:13:06.319 10:55:13 blockdev_general.bdev_qos -- bdev/blockdev.sh@435 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:13:06.319 10:55:13 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.319 10:55:13 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:06.319 10:55:13 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.320 10:55:13 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:13:06.320 10:55:13 blockdev_general.bdev_qos -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:06.320 10:55:13 blockdev_general.bdev_qos -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:06.320 10:55:13 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:06.320 ************************************ 00:13:06.320 START TEST bdev_qos_ro_bw 00:13:06.320 ************************************ 00:13:06.320 10:55:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1125 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:13:06.320 10:55:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@388 -- # local qos_limit=2 00:13:06.320 10:55:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_result=0 00:13:06.320 10:55:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # get_io_result 
BANDWIDTH Malloc_0 00:13:06.320 10:55:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH 00:13:06.320 10:55:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0 00:13:06.320 10:55:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local iostat_result 00:13:06.320 10:55:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:06.320 10:55:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # grep Malloc_0 00:13:06.320 10:55:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # tail -1 00:13:11.589 10:55:18 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0 511.43 2045.71 0.00 0.00 2052.00 0.00 0.00 ' 00:13:11.589 10:55:18 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']' 00:13:11.589 10:55:18 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:11.589 10:55:18 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}' 00:13:11.589 10:55:18 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # iostat_result=2052.00 00:13:11.590 10:55:18 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@384 -- # echo 2052 00:13:11.590 10:55:18 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # qos_result=2052 00:13:11.590 10:55:18 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:11.590 10:55:18 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # qos_limit=2048 00:13:11.590 10:55:18 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@395 -- # lower_limit=1843 00:13:11.590 10:55:18 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 
-- # upper_limit=2252
00:13:11.590 10:55:18 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@399 -- # '[' 2052 -lt 1843 ']'
00:13:11.590 10:55:18 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@399 -- # '[' 2052 -gt 2252 ']'
00:13:11.590
00:13:11.590 real 0m5.170s
00:13:11.590 user 0m0.108s
00:13:11.590 sys 0m0.045s
00:13:11.590 10:55:18 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:11.590 10:55:18 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x
00:13:11.590 ************************************
00:13:11.590 END TEST bdev_qos_ro_bw
00:13:11.590 ************************************
00:13:11.590 10:55:18 blockdev_general.bdev_qos -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_malloc_delete Malloc_0
00:13:11.590 10:55:18 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.590 10:55:18 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:13:12.157 10:55:19 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:12.157 10:55:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_null_delete Null_1
00:13:12.157 10:55:19 blockdev_general.bdev_qos -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:12.157 10:55:19 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:13:12.157
00:13:12.157 Latency(us)
00:13:12.157 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:12.157 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:13:12.157 Malloc_0 : 26.78 20593.34 80.44 0.00 0.00 12311.09 2175.80 503316.48
00:13:12.157 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:13:12.157 Null_1 : 27.05 20547.77 80.26 0.00 0.00 12425.60 779.88 273468.62
===================================================================================================================
00:13:12.157 Total : 41141.12 160.71 0.00 0.00 12368.58 779.88 503316.48
00:13:12.416 0
00:13:12.416 10:55:19 blockdev_general.bdev_qos -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:12.416 10:55:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # killprocess 3533057
00:13:12.416 10:55:19 blockdev_general.bdev_qos -- common/autotest_common.sh@950 -- # '[' -z 3533057 ']'
00:13:12.416 10:55:19 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # kill -0 3533057
00:13:12.416 10:55:19 blockdev_general.bdev_qos -- common/autotest_common.sh@955 -- # uname
00:13:12.416 10:55:19 blockdev_general.bdev_qos -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:13:12.416 10:55:19 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3533057
00:13:12.416 10:55:19 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:13:12.416 10:55:19 blockdev_general.bdev_qos -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:13:12.416 10:55:19 blockdev_general.bdev_qos -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3533057'
00:13:12.416 killing process with pid 3533057
10:55:19 blockdev_general.bdev_qos -- common/autotest_common.sh@969 -- # kill 3533057
00:13:12.416 Received shutdown signal, test time was about 27.123681 seconds
00:13:12.416
00:13:12.416 Latency(us)
00:13:12.416 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:12.416 ===================================================================================================================
00:13:12.416 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:13:12.416 10:55:19 blockdev_general.bdev_qos -- common/autotest_common.sh@974 -- # wait 3533057
00:13:14.320 10:55:21 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # trap - SIGINT SIGTERM
EXIT 00:13:14.320 00:13:14.320 real 0m30.486s 00:13:14.320 user 0m31.248s 00:13:14.320 sys 0m0.999s 00:13:14.320 10:55:21 blockdev_general.bdev_qos -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:14.320 10:55:21 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:14.320 ************************************ 00:13:14.320 END TEST bdev_qos 00:13:14.320 ************************************ 00:13:14.320 10:55:21 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:13:14.320 10:55:21 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:14.320 10:55:21 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:14.320 10:55:21 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:14.320 ************************************ 00:13:14.320 START TEST bdev_qd_sampling 00:13:14.320 ************************************ 00:13:14.320 10:55:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1125 -- # qd_sampling_test_suite '' 00:13:14.320 10:55:21 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@537 -- # QD_DEV=Malloc_QD 00:13:14.320 10:55:21 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # QD_PID=3538312 00:13:14.320 10:55:21 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # echo 'Process bdev QD sampling period testing pid: 3538312' 00:13:14.320 Process bdev QD sampling period testing pid: 3538312 00:13:14.320 10:55:21 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@539 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:13:14.320 10:55:21 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:13:14.320 10:55:21 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # waitforlisten 3538312 00:13:14.320 10:55:21 
blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@831 -- # '[' -z 3538312 ']'
00:13:14.320 10:55:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:14.320 10:55:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@836 -- # local max_retries=100
00:13:14.320 10:55:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:14.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:14.320 10:55:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@840 -- # xtrace_disable
00:13:14.320 10:55:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:13:14.320 [2024-07-25 10:55:21.220905] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:13:14.320 [2024-07-25 10:55:21.221002] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3538312 ]
00:13:14.320 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:14.320 EAL: Requested device 0000:3d:01.0 cannot be used
[identical qat_pci_device_allocate()/EAL "cannot be used" message pairs repeated for devices 0000:3d:01.1 through 0000:3f:02.7]
00:13:14.321 [2024-07-25 10:55:21.421469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:13:14.887 [2024-07-25 10:55:21.704446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.887
[2024-07-25 10:55:21.704452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.146 10:55:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:15.146 10:55:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@864 -- # return 0 00:13:15.146 10:55:22 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@545 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:13:15.146 10:55:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.146 10:55:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:15.405 Malloc_QD 00:13:15.405 10:55:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.405 10:55:22 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # waitforbdev Malloc_QD 00:13:15.405 10:55:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@899 -- # local bdev_name=Malloc_QD 00:13:15.405 10:55:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:15.405 10:55:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@901 -- # local i 00:13:15.405 10:55:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:15.405 10:55:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:15.405 10:55:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:15.405 10:55:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.405 10:55:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:15.405 10:55:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.405 10:55:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
Malloc_QD -t 2000 00:13:15.405 10:55:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.405 10:55:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:15.405 [ 00:13:15.405 { 00:13:15.405 "name": "Malloc_QD", 00:13:15.405 "aliases": [ 00:13:15.405 "10639ebc-9c39-4d45-938f-fd2544983eef" 00:13:15.405 ], 00:13:15.405 "product_name": "Malloc disk", 00:13:15.405 "block_size": 512, 00:13:15.405 "num_blocks": 262144, 00:13:15.405 "uuid": "10639ebc-9c39-4d45-938f-fd2544983eef", 00:13:15.405 "assigned_rate_limits": { 00:13:15.405 "rw_ios_per_sec": 0, 00:13:15.405 "rw_mbytes_per_sec": 0, 00:13:15.405 "r_mbytes_per_sec": 0, 00:13:15.405 "w_mbytes_per_sec": 0 00:13:15.405 }, 00:13:15.405 "claimed": false, 00:13:15.405 "zoned": false, 00:13:15.405 "supported_io_types": { 00:13:15.405 "read": true, 00:13:15.405 "write": true, 00:13:15.405 "unmap": true, 00:13:15.405 "flush": true, 00:13:15.405 "reset": true, 00:13:15.405 "nvme_admin": false, 00:13:15.405 "nvme_io": false, 00:13:15.405 "nvme_io_md": false, 00:13:15.405 "write_zeroes": true, 00:13:15.405 "zcopy": true, 00:13:15.405 "get_zone_info": false, 00:13:15.405 "zone_management": false, 00:13:15.405 "zone_append": false, 00:13:15.405 "compare": false, 00:13:15.405 "compare_and_write": false, 00:13:15.405 "abort": true, 00:13:15.405 "seek_hole": false, 00:13:15.405 "seek_data": false, 00:13:15.405 "copy": true, 00:13:15.405 "nvme_iov_md": false 00:13:15.405 }, 00:13:15.405 "memory_domains": [ 00:13:15.405 { 00:13:15.405 "dma_device_id": "system", 00:13:15.405 "dma_device_type": 1 00:13:15.405 }, 00:13:15.405 { 00:13:15.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.405 "dma_device_type": 2 00:13:15.405 } 00:13:15.405 ], 00:13:15.405 "driver_specific": {} 00:13:15.405 } 00:13:15.405 ] 00:13:15.405 10:55:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.405 10:55:22 
blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@907 -- # return 0 00:13:15.405 10:55:22 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # sleep 2 00:13:15.405 10:55:22 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@548 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:15.664 Running I/O for 5 seconds... 00:13:17.566 10:55:24 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # qd_sampling_function_test Malloc_QD 00:13:17.566 10:55:24 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@518 -- # local bdev_name=Malloc_QD 00:13:17.566 10:55:24 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local sampling_period=10 00:13:17.566 10:55:24 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local iostats 00:13:17.566 10:55:24 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@522 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:13:17.566 10:55:24 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.566 10:55:24 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:17.566 10:55:24 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.566 10:55:24 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:13:17.566 10:55:24 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.566 10:55:24 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:17.566 10:55:24 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.566 10:55:24 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # iostats='{ 00:13:17.566 "tick_rate": 2500000000, 00:13:17.566 "ticks": 14243689930726662, 00:13:17.566 "bdevs": [ 00:13:17.566 { 00:13:17.566 "name": "Malloc_QD", 00:13:17.566 
"bytes_read": 759214592, 00:13:17.566 "num_read_ops": 185348, 00:13:17.566 "bytes_written": 0, 00:13:17.566 "num_write_ops": 0, 00:13:17.566 "bytes_unmapped": 0, 00:13:17.566 "num_unmap_ops": 0, 00:13:17.566 "bytes_copied": 0, 00:13:17.566 "num_copy_ops": 0, 00:13:17.566 "read_latency_ticks": 2438220308568, 00:13:17.566 "max_read_latency_ticks": 13923368, 00:13:17.566 "min_read_latency_ticks": 534448, 00:13:17.566 "write_latency_ticks": 0, 00:13:17.566 "max_write_latency_ticks": 0, 00:13:17.566 "min_write_latency_ticks": 0, 00:13:17.566 "unmap_latency_ticks": 0, 00:13:17.566 "max_unmap_latency_ticks": 0, 00:13:17.566 "min_unmap_latency_ticks": 0, 00:13:17.566 "copy_latency_ticks": 0, 00:13:17.566 "max_copy_latency_ticks": 0, 00:13:17.566 "min_copy_latency_ticks": 0, 00:13:17.566 "io_error": {}, 00:13:17.566 "queue_depth_polling_period": 10, 00:13:17.566 "queue_depth": 512, 00:13:17.566 "io_time": 30, 00:13:17.566 "weighted_io_time": 15360 00:13:17.566 } 00:13:17.566 ] 00:13:17.566 }' 00:13:17.566 10:55:24 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:13:17.566 10:55:24 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # qd_sampling_period=10 00:13:17.566 10:55:24 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 == null ']' 00:13:17.567 10:55:24 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 -ne 10 ']' 00:13:17.567 10:55:24 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@552 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:13:17.567 10:55:24 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.567 10:55:24 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:17.567 00:13:17.567 Latency(us) 00:13:17.567 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.567 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:13:17.567 
Malloc_QD : 1.98 48354.41 188.88 0.00 0.00 5280.33 1310.72 5583.67 00:13:17.567 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:17.567 Malloc_QD : 1.98 48818.63 190.70 0.00 0.00 5230.66 963.38 5426.38 00:13:17.567 =================================================================================================================== 00:13:17.567 Total : 97173.04 379.58 0.00 0.00 5255.36 963.38 5583.67 00:13:17.825 0 00:13:17.825 10:55:24 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.825 10:55:24 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # killprocess 3538312 00:13:17.825 10:55:24 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@950 -- # '[' -z 3538312 ']' 00:13:17.825 10:55:24 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # kill -0 3538312 00:13:17.825 10:55:24 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@955 -- # uname 00:13:17.825 10:55:24 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:17.825 10:55:24 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3538312 00:13:17.825 10:55:24 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:17.825 10:55:24 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:17.825 10:55:24 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3538312' 00:13:17.825 killing process with pid 3538312 00:13:17.825 10:55:24 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@969 -- # kill 3538312 00:13:17.825 Received shutdown signal, test time was about 2.205373 seconds 00:13:17.825 00:13:17.825 Latency(us) 00:13:17.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.825 
=================================================================================================================== 00:13:17.825 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:17.825 10:55:24 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@974 -- # wait 3538312 00:13:19.728 10:55:26 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # trap - SIGINT SIGTERM EXIT 00:13:19.728 00:13:19.728 real 0m5.366s 00:13:19.728 user 0m9.554s 00:13:19.728 sys 0m0.552s 00:13:19.728 10:55:26 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:19.728 10:55:26 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:19.728 ************************************ 00:13:19.728 END TEST bdev_qd_sampling 00:13:19.728 ************************************ 00:13:19.728 10:55:26 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_error error_test_suite '' 00:13:19.728 10:55:26 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:19.728 10:55:26 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:19.728 10:55:26 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:19.728 ************************************ 00:13:19.728 START TEST bdev_error 00:13:19.728 ************************************ 00:13:19.728 10:55:26 blockdev_general.bdev_error -- common/autotest_common.sh@1125 -- # error_test_suite '' 00:13:19.728 10:55:26 blockdev_general.bdev_error -- bdev/blockdev.sh@465 -- # DEV_1=Dev_1 00:13:19.728 10:55:26 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_2=Dev_2 00:13:19.728 10:55:26 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # ERR_DEV=EE_Dev_1 00:13:19.728 10:55:26 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # ERR_PID=3539248 00:13:19.728 10:55:26 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # echo 'Process error testing pid: 3539248' 00:13:19.728 Process error testing pid: 3539248 
00:13:19.728 10:55:26 blockdev_general.bdev_error -- bdev/blockdev.sh@470 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:13:19.728 10:55:26 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # waitforlisten 3539248 00:13:19.728 10:55:26 blockdev_general.bdev_error -- common/autotest_common.sh@831 -- # '[' -z 3539248 ']' 00:13:19.728 10:55:26 blockdev_general.bdev_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.728 10:55:26 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:19.728 10:55:26 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.728 10:55:26 blockdev_general.bdev_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:19.728 10:55:26 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:19.728 [2024-07-25 10:55:26.689747] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:13:19.728 [2024-07-25 10:55:26.689864] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3539248 ]
00:13:19.728 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:13:19.728 EAL: Requested device 0000:3d:01.0 cannot be used
[identical qat_pci_device_allocate()/EAL "cannot be used" message pairs repeated for devices 0000:3d:01.1 through 0000:3f:02.7]
00:13:19.987 [2024-07-25 10:55:26.903448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:20.246 [2024-07-25 10:55:27.187046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:13:20.814 10:55:27 blockdev_general.bdev_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:13:20.814 10:55:27 blockdev_general.bdev_error -- common/autotest_common.sh@864 -- # return 0
00:13:20.814 10:55:27 blockdev_general.bdev_error -- bdev/blockdev.sh@475 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512
00:13:20.814 10:55:27 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:20.814 10:55:27 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:13:20.814 Dev_1
00:13:20.814 10:55:27 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:20.814 10:55:27 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # waitforbdev Dev_1
00:13:20.814 10:55:27 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_1
00:13:20.814 10:55:27 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:13:20.814 10:55:27 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:13:20.814 10:55:27 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:20.814 10:55:27 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:20.814 10:55:27 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:20.814 10:55:27 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.814 10:55:27 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:20.814 10:55:27 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.814 10:55:27 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:13:20.814 10:55:27 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.814 10:55:27 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:20.814 [ 00:13:20.814 { 00:13:20.814 "name": "Dev_1", 00:13:20.814 "aliases": [ 00:13:20.814 "812f7c34-cdea-4e01-a0ab-69e9420c6f20" 00:13:20.814 ], 00:13:20.814 "product_name": "Malloc disk", 00:13:20.814 "block_size": 512, 00:13:20.814 "num_blocks": 262144, 00:13:20.814 "uuid": "812f7c34-cdea-4e01-a0ab-69e9420c6f20", 00:13:20.814 "assigned_rate_limits": { 00:13:20.814 "rw_ios_per_sec": 0, 00:13:20.814 "rw_mbytes_per_sec": 0, 00:13:20.814 "r_mbytes_per_sec": 0, 00:13:20.814 "w_mbytes_per_sec": 0 00:13:20.814 }, 00:13:20.814 "claimed": false, 00:13:20.814 "zoned": false, 00:13:20.814 "supported_io_types": { 00:13:20.814 "read": true, 00:13:21.073 "write": true, 00:13:21.073 "unmap": true, 00:13:21.073 "flush": true, 00:13:21.073 "reset": true, 00:13:21.073 "nvme_admin": false, 00:13:21.073 "nvme_io": false, 00:13:21.073 "nvme_io_md": false, 00:13:21.073 "write_zeroes": true, 00:13:21.073 "zcopy": true, 00:13:21.073 "get_zone_info": 
false, 00:13:21.073 "zone_management": false, 00:13:21.073 "zone_append": false, 00:13:21.073 "compare": false, 00:13:21.073 "compare_and_write": false, 00:13:21.073 "abort": true, 00:13:21.073 "seek_hole": false, 00:13:21.073 "seek_data": false, 00:13:21.073 "copy": true, 00:13:21.073 "nvme_iov_md": false 00:13:21.073 }, 00:13:21.073 "memory_domains": [ 00:13:21.073 { 00:13:21.073 "dma_device_id": "system", 00:13:21.073 "dma_device_type": 1 00:13:21.073 }, 00:13:21.073 { 00:13:21.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.073 "dma_device_type": 2 00:13:21.073 } 00:13:21.073 ], 00:13:21.073 "driver_specific": {} 00:13:21.073 } 00:13:21.073 ] 00:13:21.073 10:55:27 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.073 10:55:27 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:13:21.073 10:55:27 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_error_create Dev_1 00:13:21.073 10:55:27 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.073 10:55:27 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:21.073 true 00:13:21.073 10:55:27 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.073 10:55:27 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:13:21.073 10:55:27 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.073 10:55:27 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:21.073 Dev_2 00:13:21.073 10:55:28 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.073 10:55:28 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # waitforbdev Dev_2 00:13:21.073 10:55:28 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_2 00:13:21.073 10:55:28 blockdev_general.bdev_error -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:21.073 10:55:28 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:13:21.073 10:55:28 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:21.073 10:55:28 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:21.073 10:55:28 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:21.073 10:55:28 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.073 10:55:28 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:21.074 10:55:28 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.074 10:55:28 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:13:21.074 10:55:28 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.074 10:55:28 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:21.074 [ 00:13:21.074 { 00:13:21.074 "name": "Dev_2", 00:13:21.074 "aliases": [ 00:13:21.074 "3a9d468a-5bce-4f57-9271-bdd337ab7c7c" 00:13:21.074 ], 00:13:21.074 "product_name": "Malloc disk", 00:13:21.333 "block_size": 512, 00:13:21.333 "num_blocks": 262144, 00:13:21.333 "uuid": "3a9d468a-5bce-4f57-9271-bdd337ab7c7c", 00:13:21.333 "assigned_rate_limits": { 00:13:21.333 "rw_ios_per_sec": 0, 00:13:21.333 "rw_mbytes_per_sec": 0, 00:13:21.333 "r_mbytes_per_sec": 0, 00:13:21.333 "w_mbytes_per_sec": 0 00:13:21.333 }, 00:13:21.333 "claimed": false, 00:13:21.333 "zoned": false, 00:13:21.333 "supported_io_types": { 00:13:21.333 "read": true, 00:13:21.333 "write": true, 00:13:21.333 "unmap": true, 00:13:21.333 "flush": true, 00:13:21.333 "reset": true, 00:13:21.333 "nvme_admin": false, 00:13:21.333 "nvme_io": false, 00:13:21.333 "nvme_io_md": false, 00:13:21.333 "write_zeroes": true, 
00:13:21.333 "zcopy": true, 00:13:21.333 "get_zone_info": false, 00:13:21.333 "zone_management": false, 00:13:21.333 "zone_append": false, 00:13:21.333 "compare": false, 00:13:21.333 "compare_and_write": false, 00:13:21.333 "abort": true, 00:13:21.333 "seek_hole": false, 00:13:21.333 "seek_data": false, 00:13:21.333 "copy": true, 00:13:21.333 "nvme_iov_md": false 00:13:21.333 }, 00:13:21.333 "memory_domains": [ 00:13:21.333 { 00:13:21.333 "dma_device_id": "system", 00:13:21.333 "dma_device_type": 1 00:13:21.333 }, 00:13:21.333 { 00:13:21.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.333 "dma_device_type": 2 00:13:21.333 } 00:13:21.333 ], 00:13:21.333 "driver_specific": {} 00:13:21.333 } 00:13:21.333 ] 00:13:21.333 10:55:28 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.333 10:55:28 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:13:21.333 10:55:28 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:13:21.333 10:55:28 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.333 10:55:28 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:21.333 10:55:28 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.333 10:55:28 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # sleep 1 00:13:21.333 10:55:28 blockdev_general.bdev_error -- bdev/blockdev.sh@482 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:13:21.333 Running I/O for 5 seconds... 00:13:22.269 10:55:29 blockdev_general.bdev_error -- bdev/blockdev.sh@486 -- # kill -0 3539248 00:13:22.269 10:55:29 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # echo 'Process is existed as continue on error is set. Pid: 3539248' 00:13:22.269 Process is existed as continue on error is set. 
Pid: 3539248 00:13:22.269 10:55:29 blockdev_general.bdev_error -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:13:22.269 10:55:29 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.269 10:55:29 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:22.269 10:55:29 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.269 10:55:29 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_malloc_delete Dev_1 00:13:22.269 10:55:29 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.269 10:55:29 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:22.269 Timeout while waiting for response: 00:13:22.269 00:13:22.269 00:13:22.269 10:55:29 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.269 10:55:29 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # sleep 5 00:13:26.525 00:13:26.525 Latency(us) 00:13:26.525 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:26.525 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:26.525 EE_Dev_1 : 0.90 36207.84 141.44 5.54 0.00 438.56 140.90 730.73 00:13:26.525 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:26.525 Dev_2 : 5.00 75723.86 295.80 0.00 0.00 207.87 68.40 151833.80 00:13:26.525 =================================================================================================================== 00:13:26.525 Total : 111931.70 437.23 5.54 0.00 226.18 68.40 151833.80 00:13:27.463 10:55:34 blockdev_general.bdev_error -- bdev/blockdev.sh@498 -- # killprocess 3539248 00:13:27.463 10:55:34 blockdev_general.bdev_error -- common/autotest_common.sh@950 -- # '[' -z 3539248 ']' 00:13:27.463 10:55:34 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # kill -0 3539248 00:13:27.463 10:55:34 
blockdev_general.bdev_error -- common/autotest_common.sh@955 -- # uname 00:13:27.463 10:55:34 blockdev_general.bdev_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:27.463 10:55:34 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3539248 00:13:27.463 10:55:34 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:27.463 10:55:34 blockdev_general.bdev_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:27.463 10:55:34 blockdev_general.bdev_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3539248' 00:13:27.463 killing process with pid 3539248 00:13:27.463 10:55:34 blockdev_general.bdev_error -- common/autotest_common.sh@969 -- # kill 3539248 00:13:27.463 Received shutdown signal, test time was about 5.000000 seconds 00:13:27.463 00:13:27.463 Latency(us) 00:13:27.463 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:27.463 =================================================================================================================== 00:13:27.463 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:27.463 10:55:34 blockdev_general.bdev_error -- common/autotest_common.sh@974 -- # wait 3539248 00:13:29.998 10:55:36 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # ERR_PID=3540872 00:13:29.998 10:55:36 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # echo 'Process error testing pid: 3540872' 00:13:29.998 Process error testing pid: 3540872 00:13:29.998 10:55:36 blockdev_general.bdev_error -- bdev/blockdev.sh@501 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:13:29.998 10:55:36 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # waitforlisten 3540872 00:13:29.998 10:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@831 -- # '[' -z 3540872 ']' 00:13:29.998 10:55:36 
blockdev_general.bdev_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.998 10:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:29.998 10:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.998 10:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:29.998 10:55:36 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:29.998 [2024-07-25 10:55:36.738106] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:29.998 [2024-07-25 10:55:36.738238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3540872 ] 00:13:29.998 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.998 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:29.998 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.998 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:29.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.999 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:29.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.999 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:29.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.999 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:29.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.999 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:29.999 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:13:29.999 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:29.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.999 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:29.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.999 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:29.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.999 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:29.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.999 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:29.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.999 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:29.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.999 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:29.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.999 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:29.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.999 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:29.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.999 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:29.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.999 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:29.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.999 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:29.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.999 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:29.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.999 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:29.999 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:13:29.999 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:29.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.999 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:29.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.999 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:29.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.999 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:29.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.999 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:29.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.999 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:29.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.999 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:29.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.999 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:29.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.999 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:29.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.999 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:29.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.999 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:29.999 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:29.999 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:29.999 [2024-07-25 10:55:36.955168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.258 [2024-07-25 10:55:37.237764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:30.827 10:55:37 blockdev_general.bdev_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:30.827 10:55:37 blockdev_general.bdev_error -- 
common/autotest_common.sh@864 -- # return 0 00:13:30.827 10:55:37 blockdev_general.bdev_error -- bdev/blockdev.sh@506 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:13:30.827 10:55:37 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.827 10:55:37 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:30.827 Dev_1 00:13:30.827 10:55:37 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.827 10:55:37 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # waitforbdev Dev_1 00:13:30.827 10:55:37 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_1 00:13:30.827 10:55:37 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:30.827 10:55:37 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:13:30.827 10:55:37 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:30.827 10:55:37 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:30.827 10:55:37 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:30.827 10:55:37 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.827 10:55:37 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:31.086 10:55:37 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.086 10:55:37 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:13:31.086 10:55:37 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.086 10:55:37 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:31.086 [ 00:13:31.086 { 00:13:31.086 "name": "Dev_1", 00:13:31.086 "aliases": [ 00:13:31.086 "b77c8fe9-7e9c-4b09-a266-cf0b40e5a105" 00:13:31.086 ], 
00:13:31.086 "product_name": "Malloc disk", 00:13:31.086 "block_size": 512, 00:13:31.086 "num_blocks": 262144, 00:13:31.086 "uuid": "b77c8fe9-7e9c-4b09-a266-cf0b40e5a105", 00:13:31.086 "assigned_rate_limits": { 00:13:31.086 "rw_ios_per_sec": 0, 00:13:31.086 "rw_mbytes_per_sec": 0, 00:13:31.086 "r_mbytes_per_sec": 0, 00:13:31.086 "w_mbytes_per_sec": 0 00:13:31.086 }, 00:13:31.086 "claimed": false, 00:13:31.086 "zoned": false, 00:13:31.086 "supported_io_types": { 00:13:31.086 "read": true, 00:13:31.086 "write": true, 00:13:31.086 "unmap": true, 00:13:31.086 "flush": true, 00:13:31.086 "reset": true, 00:13:31.086 "nvme_admin": false, 00:13:31.086 "nvme_io": false, 00:13:31.086 "nvme_io_md": false, 00:13:31.086 "write_zeroes": true, 00:13:31.086 "zcopy": true, 00:13:31.086 "get_zone_info": false, 00:13:31.086 "zone_management": false, 00:13:31.086 "zone_append": false, 00:13:31.086 "compare": false, 00:13:31.086 "compare_and_write": false, 00:13:31.086 "abort": true, 00:13:31.086 "seek_hole": false, 00:13:31.086 "seek_data": false, 00:13:31.086 "copy": true, 00:13:31.086 "nvme_iov_md": false 00:13:31.086 }, 00:13:31.086 "memory_domains": [ 00:13:31.086 { 00:13:31.086 "dma_device_id": "system", 00:13:31.086 "dma_device_type": 1 00:13:31.086 }, 00:13:31.086 { 00:13:31.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.086 "dma_device_type": 2 00:13:31.086 } 00:13:31.086 ], 00:13:31.086 "driver_specific": {} 00:13:31.086 } 00:13:31.086 ] 00:13:31.086 10:55:37 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.086 10:55:37 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:13:31.086 10:55:37 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_error_create Dev_1 00:13:31.086 10:55:37 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.086 10:55:37 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:31.086 true 00:13:31.086 
10:55:37 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.086 10:55:37 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:13:31.087 10:55:37 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.087 10:55:37 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:31.087 Dev_2 00:13:31.087 10:55:38 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.087 10:55:38 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # waitforbdev Dev_2 00:13:31.087 10:55:38 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local bdev_name=Dev_2 00:13:31.087 10:55:38 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:31.087 10:55:38 blockdev_general.bdev_error -- common/autotest_common.sh@901 -- # local i 00:13:31.087 10:55:38 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:31.087 10:55:38 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:31.087 10:55:38 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:31.087 10:55:38 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.087 10:55:38 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:31.346 10:55:38 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.346 10:55:38 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:13:31.346 10:55:38 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.346 10:55:38 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:31.346 [ 00:13:31.346 { 00:13:31.346 "name": "Dev_2", 00:13:31.346 "aliases": [ 00:13:31.346 
"d894375a-60ef-4d1a-a49b-d118d9e9de4b" 00:13:31.346 ], 00:13:31.346 "product_name": "Malloc disk", 00:13:31.346 "block_size": 512, 00:13:31.346 "num_blocks": 262144, 00:13:31.346 "uuid": "d894375a-60ef-4d1a-a49b-d118d9e9de4b", 00:13:31.346 "assigned_rate_limits": { 00:13:31.346 "rw_ios_per_sec": 0, 00:13:31.346 "rw_mbytes_per_sec": 0, 00:13:31.346 "r_mbytes_per_sec": 0, 00:13:31.346 "w_mbytes_per_sec": 0 00:13:31.346 }, 00:13:31.346 "claimed": false, 00:13:31.346 "zoned": false, 00:13:31.346 "supported_io_types": { 00:13:31.346 "read": true, 00:13:31.346 "write": true, 00:13:31.346 "unmap": true, 00:13:31.346 "flush": true, 00:13:31.346 "reset": true, 00:13:31.346 "nvme_admin": false, 00:13:31.346 "nvme_io": false, 00:13:31.346 "nvme_io_md": false, 00:13:31.346 "write_zeroes": true, 00:13:31.346 "zcopy": true, 00:13:31.346 "get_zone_info": false, 00:13:31.346 "zone_management": false, 00:13:31.346 "zone_append": false, 00:13:31.346 "compare": false, 00:13:31.346 "compare_and_write": false, 00:13:31.346 "abort": true, 00:13:31.346 "seek_hole": false, 00:13:31.346 "seek_data": false, 00:13:31.346 "copy": true, 00:13:31.346 "nvme_iov_md": false 00:13:31.346 }, 00:13:31.346 "memory_domains": [ 00:13:31.346 { 00:13:31.346 "dma_device_id": "system", 00:13:31.346 "dma_device_type": 1 00:13:31.346 }, 00:13:31.346 { 00:13:31.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.346 "dma_device_type": 2 00:13:31.346 } 00:13:31.346 ], 00:13:31.346 "driver_specific": {} 00:13:31.346 } 00:13:31.346 ] 00:13:31.346 10:55:38 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.346 10:55:38 blockdev_general.bdev_error -- common/autotest_common.sh@907 -- # return 0 00:13:31.346 10:55:38 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:13:31.346 10:55:38 blockdev_general.bdev_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.346 10:55:38 
blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:31.346 10:55:38 blockdev_general.bdev_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.346 10:55:38 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # NOT wait 3540872 00:13:31.346 10:55:38 blockdev_general.bdev_error -- common/autotest_common.sh@650 -- # local es=0 00:13:31.346 10:55:38 blockdev_general.bdev_error -- bdev/blockdev.sh@513 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:13:31.346 10:55:38 blockdev_general.bdev_error -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3540872 00:13:31.346 10:55:38 blockdev_general.bdev_error -- common/autotest_common.sh@638 -- # local arg=wait 00:13:31.346 10:55:38 blockdev_general.bdev_error -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:31.346 10:55:38 blockdev_general.bdev_error -- common/autotest_common.sh@642 -- # type -t wait 00:13:31.346 10:55:38 blockdev_general.bdev_error -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:31.346 10:55:38 blockdev_general.bdev_error -- common/autotest_common.sh@653 -- # wait 3540872 00:13:31.346 Running I/O for 5 seconds... 
00:13:31.346 task offset: 252016 on job bdev=EE_Dev_1 fails 00:13:31.346 00:13:31.346 Latency(us) 00:13:31.346 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:31.346 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:31.346 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:13:31.346 EE_Dev_1 : 0.00 26993.87 105.44 6134.97 0.00 398.42 140.90 714.34 00:13:31.346 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:31.346 Dev_2 : 0.00 17738.36 69.29 0.00 0.00 656.69 152.37 1205.86 00:13:31.346 =================================================================================================================== 00:13:31.346 Total : 44732.22 174.74 6134.97 0.00 538.50 140.90 1205.86 00:13:31.346 [2024-07-25 10:55:38.361374] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:31.346 request: 00:13:31.346 { 00:13:31.346 "method": "perform_tests", 00:13:31.346 "req_id": 1 00:13:31.346 } 00:13:31.346 Got JSON-RPC error response 00:13:31.346 response: 00:13:31.346 { 00:13:31.346 "code": -32603, 00:13:31.346 "message": "bdevperf failed with error Operation not permitted" 00:13:31.346 } 00:13:33.878 10:55:40 blockdev_general.bdev_error -- common/autotest_common.sh@653 -- # es=255 00:13:33.878 10:55:40 blockdev_general.bdev_error -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:33.878 10:55:40 blockdev_general.bdev_error -- common/autotest_common.sh@662 -- # es=127 00:13:33.878 10:55:40 blockdev_general.bdev_error -- common/autotest_common.sh@663 -- # case "$es" in 00:13:33.878 10:55:40 blockdev_general.bdev_error -- common/autotest_common.sh@670 -- # es=1 00:13:33.878 10:55:40 blockdev_general.bdev_error -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:33.878 00:13:33.878 real 0m14.145s 00:13:33.878 user 0m14.156s 00:13:33.878 sys 0m1.183s 00:13:33.878 10:55:40 blockdev_general.bdev_error -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:13:33.878 10:55:40 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:33.878 ************************************ 00:13:33.878 END TEST bdev_error 00:13:33.878 ************************************ 00:13:33.878 10:55:40 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_stat stat_test_suite '' 00:13:33.878 10:55:40 blockdev_general -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:33.878 10:55:40 blockdev_general -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:33.878 10:55:40 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:33.878 ************************************ 00:13:33.878 START TEST bdev_stat 00:13:33.878 ************************************ 00:13:33.878 10:55:40 blockdev_general.bdev_stat -- common/autotest_common.sh@1125 -- # stat_test_suite '' 00:13:33.878 10:55:40 blockdev_general.bdev_stat -- bdev/blockdev.sh@591 -- # STAT_DEV=Malloc_STAT 00:13:33.878 10:55:40 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # STAT_PID=3541650 00:13:33.878 10:55:40 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # echo 'Process Bdev IO statistics testing pid: 3541650' 00:13:33.878 Process Bdev IO statistics testing pid: 3541650 00:13:33.878 10:55:40 blockdev_general.bdev_stat -- bdev/blockdev.sh@594 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:13:33.878 10:55:40 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:13:33.878 10:55:40 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # waitforlisten 3541650 00:13:33.878 10:55:40 blockdev_general.bdev_stat -- common/autotest_common.sh@831 -- # '[' -z 3541650 ']' 00:13:33.878 10:55:40 blockdev_general.bdev_stat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.878 10:55:40 blockdev_general.bdev_stat -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:13:33.878 10:55:40 blockdev_general.bdev_stat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.878 10:55:40 blockdev_general.bdev_stat -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:33.878 10:55:40 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:33.878 [2024-07-25 10:55:40.919967] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:33.878 [2024-07-25 10:55:40.920089] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3541650 ] 00:13:34.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.137 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:34.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.137 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:34.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.137 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:34.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.137 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:34.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.137 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:34.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.137 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:34.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.137 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:34.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.137 EAL: 
Requested device 0000:3d:01.7 cannot be used 00:13:34.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.137 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:34.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.137 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:34.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.137 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:34.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.137 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:34.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.137 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:34.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.137 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:34.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.137 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:34.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.137 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:34.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.137 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:34.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.137 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:34.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.137 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:34.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.137 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:34.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.137 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:34.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.137 EAL: Requested device 
0000:3f:01.5 cannot be used 00:13:34.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.137 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:34.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.137 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:34.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.137 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:34.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.137 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:34.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.137 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:34.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.137 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:34.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.137 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:34.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.137 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:34.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.137 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:34.137 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:34.137 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:34.137 [2024-07-25 10:55:41.148133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:34.396 [2024-07-25 10:55:41.427994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.396 [2024-07-25 10:55:41.428000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:34.963 10:55:41 blockdev_general.bdev_stat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:34.964 10:55:41 blockdev_general.bdev_stat -- common/autotest_common.sh@864 -- # return 0 00:13:34.964 10:55:41 blockdev_general.bdev_stat 
-- bdev/blockdev.sh@600 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:13:34.964 10:55:41 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.964 10:55:41 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:35.223 Malloc_STAT 00:13:35.223 10:55:42 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.223 10:55:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # waitforbdev Malloc_STAT 00:13:35.223 10:55:42 blockdev_general.bdev_stat -- common/autotest_common.sh@899 -- # local bdev_name=Malloc_STAT 00:13:35.223 10:55:42 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:35.223 10:55:42 blockdev_general.bdev_stat -- common/autotest_common.sh@901 -- # local i 00:13:35.223 10:55:42 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:35.223 10:55:42 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:35.223 10:55:42 blockdev_general.bdev_stat -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:35.223 10:55:42 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.223 10:55:42 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:35.223 10:55:42 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.223 10:55:42 blockdev_general.bdev_stat -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:13:35.223 10:55:42 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.223 10:55:42 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:35.223 [ 00:13:35.223 { 00:13:35.223 "name": "Malloc_STAT", 00:13:35.223 "aliases": [ 00:13:35.223 "a2c2d067-1cf7-445d-8043-1662dd0a04ca" 00:13:35.223 ], 00:13:35.223 "product_name": "Malloc disk", 00:13:35.223 "block_size": 512, 
00:13:35.223 "num_blocks": 262144, 00:13:35.223 "uuid": "a2c2d067-1cf7-445d-8043-1662dd0a04ca", 00:13:35.223 "assigned_rate_limits": { 00:13:35.223 "rw_ios_per_sec": 0, 00:13:35.223 "rw_mbytes_per_sec": 0, 00:13:35.223 "r_mbytes_per_sec": 0, 00:13:35.223 "w_mbytes_per_sec": 0 00:13:35.223 }, 00:13:35.223 "claimed": false, 00:13:35.223 "zoned": false, 00:13:35.223 "supported_io_types": { 00:13:35.223 "read": true, 00:13:35.223 "write": true, 00:13:35.223 "unmap": true, 00:13:35.223 "flush": true, 00:13:35.223 "reset": true, 00:13:35.223 "nvme_admin": false, 00:13:35.223 "nvme_io": false, 00:13:35.223 "nvme_io_md": false, 00:13:35.223 "write_zeroes": true, 00:13:35.223 "zcopy": true, 00:13:35.223 "get_zone_info": false, 00:13:35.223 "zone_management": false, 00:13:35.223 "zone_append": false, 00:13:35.223 "compare": false, 00:13:35.223 "compare_and_write": false, 00:13:35.223 "abort": true, 00:13:35.223 "seek_hole": false, 00:13:35.223 "seek_data": false, 00:13:35.223 "copy": true, 00:13:35.223 "nvme_iov_md": false 00:13:35.223 }, 00:13:35.223 "memory_domains": [ 00:13:35.223 { 00:13:35.223 "dma_device_id": "system", 00:13:35.223 "dma_device_type": 1 00:13:35.223 }, 00:13:35.223 { 00:13:35.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.223 "dma_device_type": 2 00:13:35.223 } 00:13:35.223 ], 00:13:35.223 "driver_specific": {} 00:13:35.223 } 00:13:35.223 ] 00:13:35.223 10:55:42 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.223 10:55:42 blockdev_general.bdev_stat -- common/autotest_common.sh@907 -- # return 0 00:13:35.223 10:55:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # sleep 2 00:13:35.223 10:55:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@603 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:35.223 Running I/O for 10 seconds... 
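The trace above creates `Malloc_STAT` and then calls `waitforbdev`, which polls `rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000` until the bdev descriptor (the JSON object shown) is returned. As a rough sketch of that wait loop, with the transport abstracted away behind a caller-supplied `get_bdevs` callable (an assumption for illustration, not the actual helper from `autotest_common.sh`):

```python
import time

def wait_for_bdev(get_bdevs, name, timeout=2.0, interval=0.1):
    """Poll get_bdevs() until a bdev with the given name appears.

    get_bdevs is any callable returning a list of bdev dicts, e.g. the
    parsed JSON output of `rpc.py bdev_get_bdevs`. Returns the matching
    bdev dict, or raises TimeoutError after the deadline, mirroring the
    retry-then-fail shape of the shell helper in the trace.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        for bdev in get_bdevs():
            if bdev.get("name") == name:
                return bdev
        time.sleep(interval)
    raise TimeoutError(f"bdev {name} did not appear within {timeout}s")
```

The real test passes `-t 2000` (milliseconds) to the RPC itself; this sketch keeps the timeout client-side only.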
00:13:37.126 10:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # stat_function_test Malloc_STAT 00:13:37.126 10:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@558 -- # local bdev_name=Malloc_STAT 00:13:37.126 10:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local iostats 00:13:37.126 10:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local io_count1 00:13:37.126 10:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count2 00:13:37.126 10:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local iostats_per_channel 00:13:37.126 10:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local io_count_per_channel1 00:13:37.126 10:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel2 00:13:37.126 10:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel_all=0 00:13:37.126 10:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:13:37.126 10:55:44 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.126 10:55:44 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:37.126 10:55:44 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.126 10:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # iostats='{ 00:13:37.126 "tick_rate": 2500000000, 00:13:37.126 "ticks": 14243739099198788, 00:13:37.126 "bdevs": [ 00:13:37.126 { 00:13:37.126 "name": "Malloc_STAT", 00:13:37.126 "bytes_read": 743485952, 00:13:37.126 "num_read_ops": 181508, 00:13:37.126 "bytes_written": 0, 00:13:37.126 "num_write_ops": 0, 00:13:37.126 "bytes_unmapped": 0, 00:13:37.126 "num_unmap_ops": 0, 00:13:37.126 "bytes_copied": 0, 00:13:37.126 "num_copy_ops": 0, 00:13:37.126 "read_latency_ticks": 2419728971306, 00:13:37.126 "max_read_latency_ticks": 14138280, 00:13:37.126 "min_read_latency_ticks": 455432, 
00:13:37.126 "write_latency_ticks": 0, 00:13:37.126 "max_write_latency_ticks": 0, 00:13:37.126 "min_write_latency_ticks": 0, 00:13:37.126 "unmap_latency_ticks": 0, 00:13:37.126 "max_unmap_latency_ticks": 0, 00:13:37.126 "min_unmap_latency_ticks": 0, 00:13:37.126 "copy_latency_ticks": 0, 00:13:37.126 "max_copy_latency_ticks": 0, 00:13:37.126 "min_copy_latency_ticks": 0, 00:13:37.126 "io_error": {} 00:13:37.126 } 00:13:37.126 ] 00:13:37.126 }' 00:13:37.126 10:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # jq -r '.bdevs[0].num_read_ops' 00:13:37.126 10:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # io_count1=181508 00:13:37.126 10:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:13:37.126 10:55:44 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.126 10:55:44 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:37.385 10:55:44 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.385 10:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # iostats_per_channel='{ 00:13:37.385 "tick_rate": 2500000000, 00:13:37.385 "ticks": 14243739261098302, 00:13:37.385 "name": "Malloc_STAT", 00:13:37.385 "channels": [ 00:13:37.385 { 00:13:37.385 "thread_id": 2, 00:13:37.385 "bytes_read": 382730240, 00:13:37.385 "num_read_ops": 93440, 00:13:37.385 "bytes_written": 0, 00:13:37.385 "num_write_ops": 0, 00:13:37.385 "bytes_unmapped": 0, 00:13:37.385 "num_unmap_ops": 0, 00:13:37.385 "bytes_copied": 0, 00:13:37.385 "num_copy_ops": 0, 00:13:37.385 "read_latency_ticks": 1250412112936, 00:13:37.385 "max_read_latency_ticks": 14138280, 00:13:37.385 "min_read_latency_ticks": 9943118, 00:13:37.385 "write_latency_ticks": 0, 00:13:37.385 "max_write_latency_ticks": 0, 00:13:37.385 "min_write_latency_ticks": 0, 00:13:37.385 "unmap_latency_ticks": 0, 00:13:37.385 "max_unmap_latency_ticks": 0, 00:13:37.385 
"min_unmap_latency_ticks": 0, 00:13:37.385 "copy_latency_ticks": 0, 00:13:37.385 "max_copy_latency_ticks": 0, 00:13:37.385 "min_copy_latency_ticks": 0 00:13:37.385 }, 00:13:37.385 { 00:13:37.385 "thread_id": 3, 00:13:37.385 "bytes_read": 385875968, 00:13:37.385 "num_read_ops": 94208, 00:13:37.385 "bytes_written": 0, 00:13:37.385 "num_write_ops": 0, 00:13:37.385 "bytes_unmapped": 0, 00:13:37.385 "num_unmap_ops": 0, 00:13:37.385 "bytes_copied": 0, 00:13:37.385 "num_copy_ops": 0, 00:13:37.385 "read_latency_ticks": 1251379309866, 00:13:37.385 "max_read_latency_ticks": 13863636, 00:13:37.385 "min_read_latency_ticks": 10096360, 00:13:37.385 "write_latency_ticks": 0, 00:13:37.385 "max_write_latency_ticks": 0, 00:13:37.385 "min_write_latency_ticks": 0, 00:13:37.385 "unmap_latency_ticks": 0, 00:13:37.385 "max_unmap_latency_ticks": 0, 00:13:37.385 "min_unmap_latency_ticks": 0, 00:13:37.385 "copy_latency_ticks": 0, 00:13:37.385 "max_copy_latency_ticks": 0, 00:13:37.385 "min_copy_latency_ticks": 0 00:13:37.385 } 00:13:37.385 ] 00:13:37.385 }' 00:13:37.385 10:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # jq -r '.channels[0].num_read_ops' 00:13:37.385 10:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # io_count_per_channel1=93440 00:13:37.385 10:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel_all=93440 00:13:37.385 10:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # jq -r '.channels[1].num_read_ops' 00:13:37.385 10:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel2=94208 00:13:37.385 10:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel_all=187648 00:13:37.385 10:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:13:37.385 10:55:44 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.385 10:55:44 blockdev_general.bdev_stat -- 
common/autotest_common.sh@10 -- # set +x 00:13:37.385 10:55:44 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.385 10:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # iostats='{ 00:13:37.385 "tick_rate": 2500000000, 00:13:37.385 "ticks": 14243739557535930, 00:13:37.385 "bdevs": [ 00:13:37.385 { 00:13:37.385 "name": "Malloc_STAT", 00:13:37.385 "bytes_read": 815837696, 00:13:37.385 "num_read_ops": 199172, 00:13:37.385 "bytes_written": 0, 00:13:37.385 "num_write_ops": 0, 00:13:37.385 "bytes_unmapped": 0, 00:13:37.385 "num_unmap_ops": 0, 00:13:37.385 "bytes_copied": 0, 00:13:37.385 "num_copy_ops": 0, 00:13:37.385 "read_latency_ticks": 2655750665092, 00:13:37.385 "max_read_latency_ticks": 14255554, 00:13:37.385 "min_read_latency_ticks": 455432, 00:13:37.385 "write_latency_ticks": 0, 00:13:37.385 "max_write_latency_ticks": 0, 00:13:37.385 "min_write_latency_ticks": 0, 00:13:37.385 "unmap_latency_ticks": 0, 00:13:37.385 "max_unmap_latency_ticks": 0, 00:13:37.385 "min_unmap_latency_ticks": 0, 00:13:37.385 "copy_latency_ticks": 0, 00:13:37.385 "max_copy_latency_ticks": 0, 00:13:37.385 "min_copy_latency_ticks": 0, 00:13:37.385 "io_error": {} 00:13:37.385 } 00:13:37.385 ] 00:13:37.385 }' 00:13:37.385 10:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # jq -r '.bdevs[0].num_read_ops' 00:13:37.385 10:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # io_count2=199172 00:13:37.385 10:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 187648 -lt 181508 ']' 00:13:37.385 10:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 187648 -gt 199172 ']' 00:13:37.385 10:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@607 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:13:37.385 10:55:44 blockdev_general.bdev_stat -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.385 10:55:44 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:37.385 00:13:37.385 
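The two shell tests above (`'[' 187648 -lt 181508 ']'` and `'[' 187648 -gt 199172 ']'`) encode the core invariant of `stat_function_test`: the sum of per-channel `num_read_ops` (93440 + 94208 = 187648), sampled between two whole-device iostat snapshots, must fall between the two totals, since I/O keeps running and the counters only grow. A minimal restatement of that check:

```python
def check_channel_counts(io_count1, per_channel_ops, io_count2):
    """bdev_stat invariant from the trace: per-channel read counts,
    sampled between two device-level snapshots, must sum to a value
    bounded by the earlier and later totals (counters are monotonic)."""
    total = sum(per_channel_ops)
    return io_count1 <= total <= io_count2
```

With the values from this run, the check passes: 181508 <= 187648 <= 199172.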
Latency(us) 00:13:37.385 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.385 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:13:37.385 Malloc_STAT : 2.15 47722.54 186.42 0.00 0.00 5350.96 1310.72 5714.74 00:13:37.385 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:37.385 Malloc_STAT : 2.15 48147.23 188.08 0.00 0.00 5304.00 976.49 5557.45 00:13:37.385 =================================================================================================================== 00:13:37.385 Total : 95869.77 374.49 0.00 0.00 5327.36 976.49 5714.74 00:13:37.644 0 00:13:37.644 10:55:44 blockdev_general.bdev_stat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.644 10:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # killprocess 3541650 00:13:37.644 10:55:44 blockdev_general.bdev_stat -- common/autotest_common.sh@950 -- # '[' -z 3541650 ']' 00:13:37.644 10:55:44 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # kill -0 3541650 00:13:37.644 10:55:44 blockdev_general.bdev_stat -- common/autotest_common.sh@955 -- # uname 00:13:37.644 10:55:44 blockdev_general.bdev_stat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:37.644 10:55:44 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3541650 00:13:37.644 10:55:44 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:37.644 10:55:44 blockdev_general.bdev_stat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:37.644 10:55:44 blockdev_general.bdev_stat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3541650' 00:13:37.644 killing process with pid 3541650 00:13:37.644 10:55:44 blockdev_general.bdev_stat -- common/autotest_common.sh@969 -- # kill 3541650 00:13:37.644 Received shutdown signal, test time was about 2.371926 seconds 00:13:37.644 00:13:37.644 Latency(us) 
00:13:37.644 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.644 =================================================================================================================== 00:13:37.644 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:37.644 10:55:44 blockdev_general.bdev_stat -- common/autotest_common.sh@974 -- # wait 3541650 00:13:39.551 10:55:46 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # trap - SIGINT SIGTERM EXIT 00:13:39.551 00:13:39.551 real 0m5.493s 00:13:39.551 user 0m10.015s 00:13:39.551 sys 0m0.607s 00:13:39.551 10:55:46 blockdev_general.bdev_stat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:39.551 10:55:46 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:39.551 ************************************ 00:13:39.551 END TEST bdev_stat 00:13:39.551 ************************************ 00:13:39.551 10:55:46 blockdev_general -- bdev/blockdev.sh@793 -- # [[ bdev == gpt ]] 00:13:39.551 10:55:46 blockdev_general -- bdev/blockdev.sh@797 -- # [[ bdev == crypto_sw ]] 00:13:39.551 10:55:46 blockdev_general -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:39.551 10:55:46 blockdev_general -- bdev/blockdev.sh@810 -- # cleanup 00:13:39.551 10:55:46 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:13:39.551 10:55:46 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:13:39.551 10:55:46 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:13:39.551 10:55:46 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:13:39.551 10:55:46 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:13:39.551 10:55:46 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:13:39.551 00:13:39.551 real 2m45.244s 00:13:39.551 user 8m28.312s 00:13:39.551 sys 0m25.445s 00:13:39.551 10:55:46 
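The latency columns in the tables above are derived from the raw tick counters in the iostat JSON earlier in the trace (`read_latency_ticks`, `num_read_ops`) together with the reported `tick_rate` of 2500000000 (2.5 GHz). A sketch of that conversion, using the first snapshot's numbers (the exact averaging window the reporter uses is an assumption; values land near, not exactly on, the table's 5350.96 µs because the table covers a different time slice):

```python
def ticks_to_us(ticks, tick_rate):
    """Convert CPU timestamp ticks to microseconds."""
    return ticks / tick_rate * 1e6

def avg_read_latency_us(iostat):
    """Average per-I/O read latency from a bdev_get_iostat snapshot:
    total read latency ticks divided by read ops, scaled by tick_rate."""
    bdev = iostat["bdevs"][0]
    return ticks_to_us(bdev["read_latency_ticks"] / bdev["num_read_ops"],
                       iostat["tick_rate"])
```

At queue depth 256 (as the job lines state), a ~5.3 ms average per-I/O latency is consistent with the ~95 kIOPS total.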
blockdev_general -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:39.551 10:55:46 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:39.551 ************************************ 00:13:39.551 END TEST blockdev_general 00:13:39.551 ************************************ 00:13:39.551 10:55:46 -- spdk/autotest.sh@194 -- # run_test bdev_raid /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh 00:13:39.551 10:55:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:39.551 10:55:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:39.551 10:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:39.551 ************************************ 00:13:39.551 START TEST bdev_raid 00:13:39.551 ************************************ 00:13:39.551 10:55:46 bdev_raid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh 00:13:39.551 * Looking for test storage... 00:13:39.551 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:13:39.551 10:55:46 bdev_raid -- bdev/bdev_raid.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:13:39.551 10:55:46 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:13:39.551 10:55:46 bdev_raid -- bdev/bdev_raid.sh@15 -- # rpc_py='/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:13:39.551 10:55:46 bdev_raid -- bdev/bdev_raid.sh@927 -- # mkdir -p /raidtest 00:13:39.551 10:55:46 bdev_raid -- bdev/bdev_raid.sh@928 -- # trap 'cleanup; exit 1' EXIT 00:13:39.551 10:55:46 bdev_raid -- bdev/bdev_raid.sh@930 -- # base_blocklen=512 00:13:39.551 10:55:46 bdev_raid -- bdev/bdev_raid.sh@932 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:13:39.551 10:55:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:39.551 10:55:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:13:39.551 10:55:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:39.551 ************************************ 00:13:39.551 START TEST raid0_resize_superblock_test 00:13:39.551 ************************************ 00:13:39.551 10:55:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0 00:13:39.551 10:55:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@868 -- # local raid_level=0 00:13:39.551 10:55:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # raid_pid=3542783 00:13:39.552 10:55:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@872 -- # echo 'Process raid pid: 3542783' 00:13:39.552 Process raid pid: 3542783 00:13:39.552 10:55:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@873 -- # waitforlisten 3542783 /var/tmp/spdk-raid.sock 00:13:39.552 10:55:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 3542783 ']' 00:13:39.552 10:55:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:39.552 10:55:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:39.552 10:55:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:39.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:13:39.552 10:55:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:39.552 10:55:46 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.552 10:55:46 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:39.811 [2024-07-25 10:55:46.714306] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:39.811 [2024-07-25 10:55:46.714418] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:39.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:39.811 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:39.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:39.811 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:39.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:39.811 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:39.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:39.811 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:39.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:39.811 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:39.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:39.811 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:39.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:39.811 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:39.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:39.811 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:39.811 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:13:39.811 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:39.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:39.811 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:39.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:39.811 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:39.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:39.811 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:39.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:39.811 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:39.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:39.811 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:39.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:39.811 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:39.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:39.811 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:39.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:39.811 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:39.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:39.811 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:39.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:39.811 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:39.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:39.811 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:39.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:39.811 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:39.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:39.811 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:39.811 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:13:39.811 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:39.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:39.811 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:39.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:39.811 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:39.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:39.811 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:39.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:39.811 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:39.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:39.811 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:39.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:39.811 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:39.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:39.811 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:39.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:39.811 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:39.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:39.811 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:40.070 [2024-07-25 10:55:46.943952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.328 [2024-07-25 10:55:47.222686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.587 [2024-07-25 10:55:47.554516] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:40.587 [2024-07-25 10:55:47.554550] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:40.846 10:55:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:40.846 10:55:47 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # 
return 0 00:13:40.846 10:55:47 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create -b malloc0 512 512 00:13:41.782 malloc0 00:13:41.782 10:55:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@877 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0 00:13:42.073 [2024-07-25 10:55:48.971348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:13:42.073 [2024-07-25 10:55:48.971415] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.073 [2024-07-25 10:55:48.971449] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:13:42.073 [2024-07-25 10:55:48.971468] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.073 [2024-07-25 10:55:48.974225] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.073 [2024-07-25 10:55:48.974264] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:13:42.073 pt0 00:13:42.073 10:55:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@878 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create_lvstore pt0 lvs0 00:13:42.332 4c31d413-dbbe-4f83-b443-45d341a104b5 00:13:42.332 10:55:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol0 64 00:13:42.590 01b87254-8876-462e-980e-247386c46760 00:13:42.590 10:55:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol1 64 00:13:42.849 
f96746e8-e2e3-4d5d-96c9-d9c0a16e7397 00:13:42.849 10:55:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@883 -- # case $raid_level in 00:13:42.849 10:55:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@884 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -n Raid -r 0 -z 64 -b 'lvs0/lvol0 lvs0/lvol1' -s 00:13:43.107 [2024-07-25 10:55:50.211711] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 01b87254-8876-462e-980e-247386c46760 is claimed 00:13:43.107 [2024-07-25 10:55:50.211857] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev f96746e8-e2e3-4d5d-96c9-d9c0a16e7397 is claimed 00:13:43.107 [2024-07-25 10:55:50.212035] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:43.107 [2024-07-25 10:55:50.212055] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:13:43.107 [2024-07-25 10:55:50.212414] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:13:43.107 [2024-07-25 10:55:50.212693] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:43.107 [2024-07-25 10:55:50.212710] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:13:43.107 [2024-07-25 10:55:50.212954] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.365 10:55:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0 00:13:43.365 10:55:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:13:43.931 10:55:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 64 == 64 )) 00:13:43.931 10:55:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1 00:13:43.931 10:55:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:13:44.189 10:55:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 64 == 64 )) 00:13:44.189 10:55:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:44.189 10:55:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:44.189 10:55:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:44.189 10:55:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:13:44.757 [2024-07-25 10:55:51.748093] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:44.757 10:55:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:44.757 10:55:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:44.757 10:55:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 245760 == 245760 )) 00:13:44.757 10:55:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol0 100 00:13:45.016 [2024-07-25 10:55:51.976649] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:45.016 [2024-07-25 10:55:51.976687] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '01b87254-8876-462e-980e-247386c46760' was resized: old size 131072, new size 204800 00:13:45.016 10:55:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol1 100 00:13:45.016 [2024-07-25 10:55:52.133019] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:45.016 [2024-07-25 10:55:52.133051] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'f96746e8-e2e3-4d5d-96c9-d9c0a16e7397' was resized: old size 131072, new size 204800 00:13:45.016 [2024-07-25 10:55:52.133084] bdev_raid.c:2331:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:13:45.276 10:55:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0 00:13:45.276 10:55:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # jq '.[].num_blocks' 00:13:45.535 10:55:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # (( 100 == 100 )) 00:13:45.794 10:55:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1 00:13:45.794 10:55:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # jq '.[].num_blocks' 00:13:46.052 10:55:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # (( 100 == 100 )) 00:13:46.052 10:55:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:13:46.052 10:55:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@908 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:46.052 10:55:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:13:46.052 10:55:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@908 -- # jq '.[].num_blocks' 
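The resize messages above tie together cleanly: each lvol is created at 64 (MiB), i.e. 131072 512-byte blocks, resized to 100 MiB (204800 blocks), and the RAID0 block count moves from 245760 to 393216. Those raid totals work out if each of the two base bdevs reserves 8192 blocks; that reservation size is inferred purely from the numbers in this log (2 × (131072 − 8192) = 245760), not taken from SPDK documentation:

```python
def lvol_blocks(size_mib, block_size=512):
    """Logical volume size in blocks: 64 MiB -> 131072 512-byte blocks."""
    return size_mib * 1024 * 1024 // block_size

def raid0_blocks(base_blocks, n_bases, reserved_blocks=8192):
    """RAID0 capacity over n equal base bdevs. The 8192-block per-base
    reservation (presumably for the on-disk superblock, since this is the
    -s / superblock variant of the test) is inferred from this log's
    figures, not from SPDK docs."""
    return n_bases * (base_blocks - reserved_blocks)
```

Both before and after the resize, the arithmetic reproduces the trace's `blockcnt` values exactly.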
00:13:46.311 [2024-07-25 10:55:53.376507] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:46.311 10:55:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:13:46.311 10:55:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:13:46.311 10:55:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@908 -- # (( 393216 == 393216 )) 00:13:46.311 10:55:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@912 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt0 00:13:46.878 [2024-07-25 10:55:53.873590] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:13:46.878 [2024-07-25 10:55:53.873671] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:13:46.878 [2024-07-25 10:55:53.873688] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:46.878 [2024-07-25 10:55:53.873705] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:13:46.878 [2024-07-25 10:55:53.873813] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:46.878 [2024-07-25 10:55:53.873856] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:46.878 [2024-07-25 10:55:53.873882] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:13:46.878 10:55:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@913 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0 00:13:47.137 [2024-07-25 10:55:54.045953] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:13:47.137 [2024-07-25 10:55:54.046015] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:13:47.137 [2024-07-25 10:55:54.046041] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:13:47.137 [2024-07-25 10:55:54.046059] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.137 [2024-07-25 10:55:54.048842] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.137 [2024-07-25 10:55:54.048888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:13:47.137 [2024-07-25 10:55:54.051062] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 01b87254-8876-462e-980e-247386c46760 00:13:47.137 [2024-07-25 10:55:54.051154] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 01b87254-8876-462e-980e-247386c46760 is claimed 00:13:47.137 [2024-07-25 10:55:54.051314] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev f96746e8-e2e3-4d5d-96c9-d9c0a16e7397 00:13:47.137 [2024-07-25 10:55:54.051343] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev f96746e8-e2e3-4d5d-96c9-d9c0a16e7397 is claimed 00:13:47.137 pt0 00:13:47.137 [2024-07-25 10:55:54.051523] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev f96746e8-e2e3-4d5d-96c9-d9c0a16e7397 (2) smaller than existing raid bdev Raid (3) 00:13:47.137 [2024-07-25 10:55:54.051565] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:47.137 [2024-07-25 10:55:54.051577] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:13:47.137 [2024-07-25 10:55:54.051873] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:13:47.137 [2024-07-25 10:55:54.052113] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:47.137 [2024-07-25 10:55:54.052134] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, 
raid_bdev 0x617000007b00 00:13:47.137 [2024-07-25 10:55:54.052345] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.137 10:55:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:13:47.137 10:55:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@918 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:47.137 10:55:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:13:47.137 10:55:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@918 -- # jq '.[].num_blocks' 00:13:47.397 [2024-07-25 10:55:54.270954] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:47.397 10:55:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:13:47.397 10:55:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:13:47.397 10:55:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@918 -- # (( 393216 == 393216 )) 00:13:47.397 10:55:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@922 -- # killprocess 3542783 00:13:47.397 10:55:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 3542783 ']' 00:13:47.397 10:55:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 3542783 00:13:47.397 10:55:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:13:47.397 10:55:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:47.397 10:55:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3542783 00:13:47.397 10:55:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:47.397 10:55:54 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:47.397 10:55:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3542783' 00:13:47.397 killing process with pid 3542783 00:13:47.397 10:55:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 3542783 00:13:47.397 [2024-07-25 10:55:54.345757] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:47.397 10:55:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 3542783 00:13:47.397 [2024-07-25 10:55:54.345851] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:47.397 [2024-07-25 10:55:54.345905] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:47.397 [2024-07-25 10:55:54.345924] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:13:47.963 [2024-07-25 10:55:54.977950] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:49.866 10:55:56 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@924 -- # return 0 00:13:49.866 00:13:49.866 real 0m10.115s 00:13:49.866 user 0m14.700s 00:13:49.866 sys 0m1.522s 00:13:49.866 10:55:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:49.866 10:55:56 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.866 ************************************ 00:13:49.866 END TEST raid0_resize_superblock_test 00:13:49.866 ************************************ 00:13:49.866 10:55:56 bdev_raid -- bdev/bdev_raid.sh@933 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:13:49.866 10:55:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:49.866 10:55:56 bdev_raid -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:13:49.866 10:55:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:49.866 ************************************ 00:13:49.866 START TEST raid1_resize_superblock_test 00:13:49.866 ************************************ 00:13:49.866 10:55:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1 00:13:49.866 10:55:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@868 -- # local raid_level=1 00:13:49.866 10:55:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # raid_pid=3544430 00:13:49.867 10:55:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@872 -- # echo 'Process raid pid: 3544430' 00:13:49.867 Process raid pid: 3544430 00:13:49.867 10:55:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@873 -- # waitforlisten 3544430 /var/tmp/spdk-raid.sock 00:13:49.867 10:55:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 3544430 ']' 00:13:49.867 10:55:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:49.867 10:55:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:49.867 10:55:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:49.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:13:49.867 10:55:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:49.867 10:55:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.867 10:55:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:49.867 [2024-07-25 10:55:56.907097] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:49.867 [2024-07-25 10:55:56.907222] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:50.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:50.126 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:50.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:50.126 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:50.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:50.126 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:50.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:50.126 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:50.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:50.126 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:50.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:50.126 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:50.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:50.126 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:50.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:50.126 EAL: Requested device 0000:3d:01.7 cannot be used 00:13:50.126 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:13:50.126 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:50.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:50.126 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:50.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:50.126 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:50.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:50.126 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:50.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:50.126 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:50.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:50.126 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:50.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:50.126 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:50.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:50.126 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:50.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:50.126 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:50.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:50.126 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:50.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:50.126 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:50.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:50.126 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:50.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:50.126 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:50.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:50.126 EAL: Requested device 0000:3f:01.5 cannot be used 00:13:50.126 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:13:50.126 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:50.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:50.126 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:50.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:50.126 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:50.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:50.126 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:50.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:50.126 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:50.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:50.126 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:50.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:50.126 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:50.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:50.126 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:50.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:50.126 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:50.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:50.126 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:50.126 [2024-07-25 10:55:57.136004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.385 [2024-07-25 10:55:57.404186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.644 [2024-07-25 10:55:57.743317] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:50.644 [2024-07-25 10:55:57.743354] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:51.212 10:55:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:51.212 10:55:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # 
return 0 00:13:51.212 10:55:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create -b malloc0 512 512 00:13:52.149 malloc0 00:13:52.149 10:55:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@877 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0 00:13:52.408 [2024-07-25 10:55:59.305006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:13:52.408 [2024-07-25 10:55:59.305074] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.408 [2024-07-25 10:55:59.305106] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:13:52.408 [2024-07-25 10:55:59.305126] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.408 [2024-07-25 10:55:59.307894] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.408 [2024-07-25 10:55:59.307932] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:13:52.408 pt0 00:13:52.408 10:55:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@878 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create_lvstore pt0 lvs0 00:13:52.667 80b59dc0-e43a-4226-a7f9-f239f214001b 00:13:52.667 10:55:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol0 64 00:13:53.234 aef7ce40-11c3-4c54-9381-3d125995f8e7 00:13:53.234 10:56:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_create -l lvs0 lvol1 64 00:13:53.234 
428cd74f-52ea-46de-ba0a-1df5da7c0d5f 00:13:53.493 10:56:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@883 -- # case $raid_level in 00:13:53.493 10:56:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -n Raid -r 1 -b 'lvs0/lvol0 lvs0/lvol1' -s 00:13:53.493 [2024-07-25 10:56:00.502209] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev aef7ce40-11c3-4c54-9381-3d125995f8e7 is claimed 00:13:53.493 [2024-07-25 10:56:00.502343] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 428cd74f-52ea-46de-ba0a-1df5da7c0d5f is claimed 00:13:53.493 [2024-07-25 10:56:00.502528] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:53.493 [2024-07-25 10:56:00.502549] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:13:53.493 [2024-07-25 10:56:00.502895] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:13:53.493 [2024-07-25 10:56:00.503191] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:53.493 [2024-07-25 10:56:00.503216] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:13:53.493 [2024-07-25 10:56:00.503463] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.493 10:56:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0 00:13:53.493 10:56:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:13:53.751 10:56:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 64 == 64 )) 00:13:53.751 10:56:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1 00:13:53.751 10:56:00 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:13:54.319 10:56:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 64 == 64 )) 00:13:54.319 10:56:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:54.319 10:56:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:54.319 10:56:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:54.319 10:56:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:13:54.319 [2024-07-25 10:56:01.392871] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:54.319 10:56:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:54.319 10:56:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:13:54.319 10:56:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 122880 == 122880 )) 00:13:54.319 10:56:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol0 100 00:13:54.577 [2024-07-25 10:56:01.609403] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:54.577 [2024-07-25 10:56:01.609439] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'aef7ce40-11c3-4c54-9381-3d125995f8e7' was resized: old size 131072, new size 204800 00:13:54.577 10:56:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_lvol_resize lvs0/lvol1 100 00:13:54.836 [2024-07-25 10:56:01.817883] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:54.836 [2024-07-25 10:56:01.817916] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '428cd74f-52ea-46de-ba0a-1df5da7c0d5f' was resized: old size 131072, new size 204800 00:13:54.836 [2024-07-25 10:56:01.817949] bdev_raid.c:2331:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:13:54.836 10:56:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol0 00:13:54.836 10:56:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # jq '.[].num_blocks' 00:13:55.404 10:56:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@903 -- # (( 100 == 100 )) 00:13:55.404 10:56:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b lvs0/lvol1 00:13:55.404 10:56:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # jq '.[].num_blocks' 00:13:55.692 10:56:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # (( 100 == 100 )) 00:13:55.692 10:56:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:13:55.692 10:56:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:55.692 10:56:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:13:55.692 10:56:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # jq '.[].num_blocks' 
00:13:55.692 [2024-07-25 10:56:02.784638] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:55.951 10:56:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:13:55.951 10:56:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@907 -- # case $raid_level in 00:13:55.951 10:56:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # (( 196608 == 196608 )) 00:13:55.951 10:56:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@912 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt0 00:13:55.951 [2024-07-25 10:56:02.996956] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:13:55.951 [2024-07-25 10:56:02.997036] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:13:55.951 [2024-07-25 10:56:02.997069] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:13:55.951 [2024-07-25 10:56:02.997284] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:55.951 [2024-07-25 10:56:02.997500] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:55.951 [2024-07-25 10:56:02.997586] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:55.951 [2024-07-25 10:56:02.997611] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:13:55.951 10:56:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@913 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc0 -p pt0 00:13:56.516 [2024-07-25 10:56:03.490181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:13:56.516 [2024-07-25 10:56:03.490247] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:13:56.516 [2024-07-25 10:56:03.490274] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:13:56.516 [2024-07-25 10:56:03.490292] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.516 [2024-07-25 10:56:03.493074] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.516 [2024-07-25 10:56:03.493111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:13:56.516 pt0 00:13:56.516 [2024-07-25 10:56:03.495353] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev aef7ce40-11c3-4c54-9381-3d125995f8e7 00:13:56.516 [2024-07-25 10:56:03.495437] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev aef7ce40-11c3-4c54-9381-3d125995f8e7 is claimed 00:13:56.516 [2024-07-25 10:56:03.495597] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 428cd74f-52ea-46de-ba0a-1df5da7c0d5f 00:13:56.516 [2024-07-25 10:56:03.495625] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev 428cd74f-52ea-46de-ba0a-1df5da7c0d5f is claimed 00:13:56.516 [2024-07-25 10:56:03.495809] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 428cd74f-52ea-46de-ba0a-1df5da7c0d5f (2) smaller than existing raid bdev Raid (3) 00:13:56.516 [2024-07-25 10:56:03.495852] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:56.516 [2024-07-25 10:56:03.495863] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:56.516 [2024-07-25 10:56:03.496173] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:13:56.516 [2024-07-25 10:56:03.496462] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:56.516 [2024-07-25 10:56:03.496484] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, 
raid_bdev 0x617000007b00 00:13:56.516 [2024-07-25 10:56:03.496668] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.516 10:56:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:13:56.516 10:56:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@919 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:56.516 10:56:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:13:56.516 10:56:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@919 -- # jq '.[].num_blocks' 00:13:56.775 [2024-07-25 10:56:03.731220] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:56.775 10:56:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:13:56.775 10:56:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@917 -- # case $raid_level in 00:13:56.775 10:56:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@919 -- # (( 196608 == 196608 )) 00:13:56.775 10:56:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@922 -- # killprocess 3544430 00:13:56.775 10:56:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 3544430 ']' 00:13:56.775 10:56:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 3544430 00:13:56.775 10:56:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:13:56.775 10:56:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:56.775 10:56:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3544430 00:13:56.775 10:56:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:56.775 10:56:03 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:56.775 10:56:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3544430' 00:13:56.775 killing process with pid 3544430 00:13:56.775 10:56:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 3544430 00:13:56.775 10:56:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 3544430 00:13:56.775 [2024-07-25 10:56:03.806806] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:56.775 [2024-07-25 10:56:03.806892] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:56.775 [2024-07-25 10:56:03.806952] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:56.775 [2024-07-25 10:56:03.806970] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:13:57.711 [2024-07-25 10:56:04.479417] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:59.087 10:56:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@924 -- # return 0 00:13:59.087 00:13:59.087 real 0m9.344s 00:13:59.087 user 0m13.515s 00:13:59.087 sys 0m1.492s 00:13:59.087 10:56:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:59.087 10:56:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.087 ************************************ 00:13:59.087 END TEST raid1_resize_superblock_test 00:13:59.087 ************************************ 00:13:59.087 10:56:06 bdev_raid -- bdev/bdev_raid.sh@935 -- # uname -s 00:13:59.087 10:56:06 bdev_raid -- bdev/bdev_raid.sh@935 -- # '[' Linux = Linux ']' 00:13:59.087 10:56:06 bdev_raid -- bdev/bdev_raid.sh@935 -- # modprobe -n nbd 00:13:59.087 10:56:06 bdev_raid -- bdev/bdev_raid.sh@936 -- # 
has_nbd=true 00:13:59.087 10:56:06 bdev_raid -- bdev/bdev_raid.sh@937 -- # modprobe nbd 00:13:59.346 10:56:06 bdev_raid -- bdev/bdev_raid.sh@938 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:13:59.346 10:56:06 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:59.346 10:56:06 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:59.346 10:56:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:59.346 ************************************ 00:13:59.346 START TEST raid_function_test_raid0 00:13:59.346 ************************************ 00:13:59.346 10:56:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0 00:13:59.346 10:56:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@80 -- # local raid_level=raid0 00:13:59.346 10:56:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:13:59.346 10:56:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:13:59.346 10:56:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # raid_pid=3546065 00:13:59.346 10:56:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 3546065' 00:13:59.346 Process raid pid: 3546065 00:13:59.346 10:56:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@87 -- # waitforlisten 3546065 /var/tmp/spdk-raid.sock 00:13:59.346 10:56:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 3546065 ']' 00:13:59.346 10:56:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:59.346 10:56:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:59.346 10:56:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:13:59.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:59.346 10:56:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:59.346 10:56:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:13:59.346 10:56:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:59.346 [2024-07-25 10:56:06.340130] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:59.346 [2024-07-25 10:56:06.340258] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:59.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.604 EAL: Requested device 0000:3d:01.0 cannot be used 00:13:59.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.604 EAL: Requested device 0000:3d:01.1 cannot be used 00:13:59.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.604 EAL: Requested device 0000:3d:01.2 cannot be used 00:13:59.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.604 EAL: Requested device 0000:3d:01.3 cannot be used 00:13:59.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.604 EAL: Requested device 0000:3d:01.4 cannot be used 00:13:59.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.604 EAL: Requested device 0000:3d:01.5 cannot be used 00:13:59.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.604 EAL: Requested device 0000:3d:01.6 cannot be used 00:13:59.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.604 
EAL: Requested device 0000:3d:01.7 cannot be used 00:13:59.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.604 EAL: Requested device 0000:3d:02.0 cannot be used 00:13:59.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.604 EAL: Requested device 0000:3d:02.1 cannot be used 00:13:59.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.604 EAL: Requested device 0000:3d:02.2 cannot be used 00:13:59.604 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.604 EAL: Requested device 0000:3d:02.3 cannot be used 00:13:59.605 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.605 EAL: Requested device 0000:3d:02.4 cannot be used 00:13:59.605 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.605 EAL: Requested device 0000:3d:02.5 cannot be used 00:13:59.605 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.605 EAL: Requested device 0000:3d:02.6 cannot be used 00:13:59.605 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.605 EAL: Requested device 0000:3d:02.7 cannot be used 00:13:59.605 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.605 EAL: Requested device 0000:3f:01.0 cannot be used 00:13:59.605 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.605 EAL: Requested device 0000:3f:01.1 cannot be used 00:13:59.605 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.605 EAL: Requested device 0000:3f:01.2 cannot be used 00:13:59.605 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.605 EAL: Requested device 0000:3f:01.3 cannot be used 00:13:59.605 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.605 EAL: Requested device 0000:3f:01.4 cannot be used 00:13:59.605 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.605 EAL: Requested device 
0000:3f:01.5 cannot be used 00:13:59.605 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.605 EAL: Requested device 0000:3f:01.6 cannot be used 00:13:59.605 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.605 EAL: Requested device 0000:3f:01.7 cannot be used 00:13:59.605 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.605 EAL: Requested device 0000:3f:02.0 cannot be used 00:13:59.605 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.605 EAL: Requested device 0000:3f:02.1 cannot be used 00:13:59.605 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.605 EAL: Requested device 0000:3f:02.2 cannot be used 00:13:59.605 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.605 EAL: Requested device 0000:3f:02.3 cannot be used 00:13:59.605 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.605 EAL: Requested device 0000:3f:02.4 cannot be used 00:13:59.605 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.605 EAL: Requested device 0000:3f:02.5 cannot be used 00:13:59.605 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.605 EAL: Requested device 0000:3f:02.6 cannot be used 00:13:59.605 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:13:59.605 EAL: Requested device 0000:3f:02.7 cannot be used 00:13:59.605 [2024-07-25 10:56:06.573165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.863 [2024-07-25 10:56:06.863826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.122 [2024-07-25 10:56:07.216072] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:00.122 [2024-07-25 10:56:07.216114] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:00.380 10:56:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:00.380 
10:56:07 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:14:00.380 10:56:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev raid0 00:14:00.380 10:56:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_level=raid0 00:14:00.380 10:56:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@67 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:14:00.380 10:56:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # cat 00:14:00.380 10:56:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:14:00.639 [2024-07-25 10:56:07.745600] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:00.639 [2024-07-25 10:56:07.747927] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:00.639 [2024-07-25 10:56:07.747998] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:00.639 [2024-07-25 10:56:07.748021] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:00.639 [2024-07-25 10:56:07.748360] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:14:00.639 [2024-07-25 10:56:07.748565] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:00.639 [2024-07-25 10:56:07.748580] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:14:00.639 [2024-07-25 10:56:07.748765] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.639 Base_1 00:14:00.639 Base_2 00:14:00.897 10:56:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@76 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:14:00.897 10:56:07 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:00.897 10:56:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:14:00.897 10:56:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:14:00.897 10:56:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:14:00.897 10:56:07 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:14:00.897 10:56:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:00.897 10:56:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:14:00.897 10:56:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:00.897 10:56:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:00.897 10:56:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:00.897 10:56:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:14:00.897 10:56:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:00.897 10:56:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:00.897 10:56:07 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:14:01.156 [2024-07-25 10:56:08.194845] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:14:01.156 /dev/nbd0 00:14:01.156 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:01.156 10:56:08 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:01.156 10:56:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:01.156 10:56:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:14:01.156 10:56:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:01.156 10:56:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:01.156 10:56:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:01.156 10:56:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:14:01.156 10:56:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:01.156 10:56:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:01.156 10:56:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:01.156 1+0 records in 00:14:01.156 1+0 records out 00:14:01.156 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217019 s, 18.9 MB/s 00:14:01.156 10:56:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:14:01.156 10:56:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096 00:14:01.156 10:56:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:14:01.156 10:56:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:01.156 10:56:08 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:14:01.156 10:56:08 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:01.156 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:01.156 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:01.156 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:01.156 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:01.415 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:01.415 { 00:14:01.415 "nbd_device": "/dev/nbd0", 00:14:01.415 "bdev_name": "raid" 00:14:01.415 } 00:14:01.415 ]' 00:14:01.415 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:01.415 { 00:14:01.415 "nbd_device": "/dev/nbd0", 00:14:01.415 "bdev_name": "raid" 00:14:01.415 } 00:14:01.415 ]' 00:14:01.415 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:01.415 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:14:01.415 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:14:01.415 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:01.415 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:14:01.415 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:14:01.415 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # count=1 00:14:01.415 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:14:01.415 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 
/var/tmp/spdk-raid.sock 00:14:01.415 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:14:01.415 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:14:01.415 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:01.415 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local blksize 00:14:01.673 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:14:01.673 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:14:01.673 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:14:01.673 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # blksize=512 00:14:01.673 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:14:01.673 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:14:01.673 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:14:01.673 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:14:01.673 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:14:01.673 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:14:01.673 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:14:01.673 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:14:01.673 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:14:01.673 4096+0 records in 00:14:01.673 4096+0 records out 00:14:01.673 2097152 bytes (2.1 MB, 
2.0 MiB) copied, 0.0458308 s, 45.8 MB/s 00:14:01.673 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:14:01.932 4096+0 records in 00:14:01.932 4096+0 records out 00:14:01.932 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.209472 s, 10.0 MB/s 00:14:01.932 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:14:01.932 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:01.932 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:14:01.932 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:01.932 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:14:01.932 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:14:01.932 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:14:01.932 128+0 records in 00:14:01.932 128+0 records out 00:14:01.932 65536 bytes (66 kB, 64 KiB) copied, 0.000818541 s, 80.1 MB/s 00:14:01.932 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:14:01.932 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:14:01.932 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:01.932 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:14:01.932 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:01.932 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:14:01.932 10:56:08 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:14:01.932 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:14:01.932 2035+0 records in 00:14:01.932 2035+0 records out 00:14:01.932 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0106797 s, 97.6 MB/s 00:14:01.932 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:14:01.932 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:14:01.932 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:01.932 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:14:01.932 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:01.932 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:14:01.932 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:14:01.932 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:14:01.932 456+0 records in 00:14:01.932 456+0 records out 00:14:01.932 233472 bytes (233 kB, 228 KiB) copied, 0.00275803 s, 84.7 MB/s 00:14:01.932 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:14:01.932 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:14:01.932 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:01.932 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:14:01.932 10:56:08 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:01.932 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@54 -- # return 0 00:14:01.932 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:14:01.932 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:01.932 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:01.932 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:01.932 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:14:01.932 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:01.932 10:56:08 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:14:02.191 10:56:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:02.191 [2024-07-25 10:56:09.146958] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.191 10:56:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:02.191 10:56:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:02.191 10:56:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:02.191 10:56:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:02.191 10:56:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:02.191 10:56:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:14:02.191 10:56:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 
00:14:02.191 10:56:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:02.191 10:56:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:02.191 10:56:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:02.449 10:56:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:02.449 10:56:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:02.449 10:56:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:02.449 10:56:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:02.449 10:56:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:02.449 10:56:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:02.449 10:56:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:14:02.449 10:56:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:14:02.449 10:56:09 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:02.449 10:56:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # count=0 00:14:02.449 10:56:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:14:02.449 10:56:09 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@110 -- # killprocess 3546065 00:14:02.449 10:56:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 3546065 ']' 00:14:02.449 10:56:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 3546065 00:14:02.449 10:56:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:14:02.449 10:56:09 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:02.449 10:56:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3546065 00:14:02.449 10:56:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:02.449 10:56:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:02.449 10:56:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3546065' 00:14:02.449 killing process with pid 3546065 00:14:02.449 10:56:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 3546065 00:14:02.449 [2024-07-25 10:56:09.496264] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:02.449 10:56:09 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 3546065 00:14:02.449 [2024-07-25 10:56:09.496374] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:02.449 [2024-07-25 10:56:09.496435] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:02.449 [2024-07-25 10:56:09.496454] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:14:02.708 [2024-07-25 10:56:09.694010] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:04.611 10:56:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@112 -- # return 0 00:14:04.611 00:14:04.611 real 0m5.181s 00:14:04.611 user 0m6.056s 00:14:04.611 sys 0m1.283s 00:14:04.611 10:56:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:04.611 10:56:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:14:04.611 ************************************ 00:14:04.611 END TEST raid_function_test_raid0 00:14:04.611 
************************************ 00:14:04.611 10:56:11 bdev_raid -- bdev/bdev_raid.sh@939 -- # run_test raid_function_test_concat raid_function_test concat 00:14:04.611 10:56:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:04.611 10:56:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:04.611 10:56:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:04.611 ************************************ 00:14:04.611 START TEST raid_function_test_concat 00:14:04.611 ************************************ 00:14:04.611 10:56:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:14:04.611 10:56:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@80 -- # local raid_level=concat 00:14:04.611 10:56:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:14:04.611 10:56:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:14:04.611 10:56:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # raid_pid=3547069 00:14:04.611 10:56:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 3547069' 00:14:04.611 Process raid pid: 3547069 00:14:04.611 10:56:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@87 -- # waitforlisten 3547069 /var/tmp/spdk-raid.sock 00:14:04.611 10:56:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 3547069 ']' 00:14:04.611 10:56:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:04.611 10:56:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:04.611 10:56:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:14:04.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:04.611 10:56:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:04.611 10:56:11 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:14:04.611 10:56:11 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:04.611 [2024-07-25 10:56:11.594559] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:04.611 [2024-07-25 10:56:11.594673] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.870 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:04.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.870 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:04.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.870 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:04.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.870 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:04.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.870 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:04.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.870 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:04.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.870 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:04.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:14:04.870 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:04.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.870 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:04.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.870 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:04.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.870 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:04.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.870 EAL: Requested device 0000:3d:02.3 cannot be used 00:14:04.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.870 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:04.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.870 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:04.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.870 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:04.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.870 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:04.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.870 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:04.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.870 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:04.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.870 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:04.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.870 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:04.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.870 EAL: Requested device 0000:3f:01.4 cannot be used 00:14:04.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.870 EAL: 
Requested device 0000:3f:01.5 cannot be used 00:14:04.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.870 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:04.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.870 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:04.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.870 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:04.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.870 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:04.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.870 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:04.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.870 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:04.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.870 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:04.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.870 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:04.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.870 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:04.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:04.870 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:04.870 [2024-07-25 10:56:11.821046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.129 [2024-07-25 10:56:12.102942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.387 [2024-07-25 10:56:12.449783] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:05.387 [2024-07-25 10:56:12.449822] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:05.646 10:56:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 
0 )) 00:14:05.646 10:56:12 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:14:05.646 10:56:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev concat 00:14:05.646 10:56:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_level=concat 00:14:05.646 10:56:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@67 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:14:05.646 10:56:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # cat 00:14:05.646 10:56:12 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:14:06.212 [2024-07-25 10:56:13.231216] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:06.212 [2024-07-25 10:56:13.233513] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:06.212 [2024-07-25 10:56:13.233590] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:06.212 [2024-07-25 10:56:13.233608] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:06.212 [2024-07-25 10:56:13.233935] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:14:06.212 [2024-07-25 10:56:13.234153] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:06.212 [2024-07-25 10:56:13.234168] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:14:06.212 [2024-07-25 10:56:13.234357] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.212 Base_1 00:14:06.212 Base_2 00:14:06.212 10:56:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@76 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:14:06.212 
10:56:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:14:06.212 10:56:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:14:06.469 10:56:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:14:06.469 10:56:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:14:06.469 10:56:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:14:06.469 10:56:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:06.469 10:56:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:14:06.469 10:56:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:06.469 10:56:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:06.469 10:56:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:06.469 10:56:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:14:06.469 10:56:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:06.470 10:56:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:06.470 10:56:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:14:06.727 [2024-07-25 10:56:13.692495] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:14:06.727 /dev/nbd0 00:14:06.727 10:56:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:06.727 10:56:13 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:06.727 10:56:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:06.727 10:56:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:14:06.727 10:56:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:06.727 10:56:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:06.727 10:56:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:06.727 10:56:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:14:06.727 10:56:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:06.727 10:56:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:06.727 10:56:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:06.727 1+0 records in 00:14:06.727 1+0 records out 00:14:06.727 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239818 s, 17.1 MB/s 00:14:06.727 10:56:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:14:06.727 10:56:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:14:06.727 10:56:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:14:06.727 10:56:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:06.727 10:56:13 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:14:06.727 
10:56:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:06.727 10:56:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:06.727 10:56:13 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:06.727 10:56:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:06.727 10:56:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:06.986 10:56:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:06.986 { 00:14:06.986 "nbd_device": "/dev/nbd0", 00:14:06.986 "bdev_name": "raid" 00:14:06.986 } 00:14:06.986 ]' 00:14:06.986 10:56:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:06.986 { 00:14:06.986 "nbd_device": "/dev/nbd0", 00:14:06.986 "bdev_name": "raid" 00:14:06.986 } 00:14:06.986 ]' 00:14:06.986 10:56:13 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:06.986 10:56:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:14:06.986 10:56:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:14:06.986 10:56:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:06.986 10:56:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:14:06.986 10:56:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:14:06.986 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # count=1 00:14:06.986 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:14:06.986 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@102 -- # 
raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:14:06.986 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:14:06.986 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:14:06.986 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:06.986 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local blksize 00:14:06.986 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:14:06.986 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:14:06.986 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:14:06.986 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # blksize=512 00:14:06.986 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:14:06.986 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:14:06.986 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:14:06.986 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:14:06.986 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:14:06.986 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:14:06.986 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:14:06.986 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:14:06.986 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:14:06.986 4096+0 records in 00:14:06.986 4096+0 
records out 00:14:06.986 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0287914 s, 72.8 MB/s 00:14:06.986 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:14:07.553 4096+0 records in 00:14:07.553 4096+0 records out 00:14:07.553 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.301661 s, 7.0 MB/s 00:14:07.553 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:14:07.553 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:07.553 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:14:07.553 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:07.553 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:14:07.553 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:14:07.553 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:14:07.553 128+0 records in 00:14:07.553 128+0 records out 00:14:07.553 65536 bytes (66 kB, 64 KiB) copied, 0.000829957 s, 79.0 MB/s 00:14:07.553 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:14:07.553 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:14:07.553 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:07.553 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:14:07.553 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:07.553 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 
-- # unmap_off=526336 00:14:07.553 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:14:07.553 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:14:07.553 2035+0 records in 00:14:07.553 2035+0 records out 00:14:07.553 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0112251 s, 92.8 MB/s 00:14:07.553 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:14:07.553 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:14:07.553 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:07.553 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:14:07.553 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:07.554 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:14:07.554 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:14:07.554 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:14:07.554 456+0 records in 00:14:07.554 456+0 records out 00:14:07.554 233472 bytes (233 kB, 228 KiB) copied, 0.00268078 s, 87.1 MB/s 00:14:07.554 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:14:07.554 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:14:07.554 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:14:07.554 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- 
# (( i++ )) 00:14:07.554 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:14:07.554 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@54 -- # return 0 00:14:07.554 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:14:07.554 10:56:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:07.554 10:56:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:07.554 10:56:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:07.554 10:56:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:14:07.554 10:56:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:07.554 10:56:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:14:07.813 10:56:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:07.813 10:56:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:07.813 10:56:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:07.813 [2024-07-25 10:56:14.720384] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:07.813 10:56:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:07.813 10:56:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:07.813 10:56:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:07.813 10:56:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:14:07.813 10:56:14 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:14:07.813 10:56:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:07.813 10:56:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:07.813 10:56:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:08.072 10:56:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:08.072 10:56:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:08.072 10:56:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:08.072 10:56:14 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:08.072 10:56:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:14:08.072 10:56:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:08.072 10:56:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:14:08.072 10:56:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:14:08.072 10:56:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:14:08.072 10:56:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # count=0 00:14:08.072 10:56:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:14:08.072 10:56:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@110 -- # killprocess 3547069 00:14:08.072 10:56:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 3547069 ']' 00:14:08.072 10:56:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 3547069 00:14:08.072 10:56:15 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 00:14:08.072 10:56:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:08.072 10:56:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3547069 00:14:08.072 10:56:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:08.072 10:56:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:08.072 10:56:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3547069' 00:14:08.072 killing process with pid 3547069 00:14:08.072 10:56:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 3547069 00:14:08.072 [2024-07-25 10:56:15.065598] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:08.072 10:56:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 3547069 00:14:08.072 [2024-07-25 10:56:15.065711] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:08.072 [2024-07-25 10:56:15.065772] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:08.072 [2024-07-25 10:56:15.065791] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:14:08.332 [2024-07-25 10:56:15.257246] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:10.300 10:56:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@112 -- # return 0 00:14:10.300 00:14:10.300 real 0m5.443s 00:14:10.300 user 0m6.506s 00:14:10.300 sys 0m1.393s 00:14:10.300 10:56:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:10.300 10:56:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 
00:14:10.300 ************************************ 00:14:10.300 END TEST raid_function_test_concat 00:14:10.300 ************************************ 00:14:10.300 10:56:16 bdev_raid -- bdev/bdev_raid.sh@942 -- # run_test raid0_resize_test raid_resize_test 0 00:14:10.300 10:56:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:10.300 10:56:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:10.300 10:56:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:10.300 ************************************ 00:14:10.300 START TEST raid0_resize_test 00:14:10.300 ************************************ 00:14:10.300 10:56:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 0 00:14:10.300 10:56:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # local raid_level=0 00:14:10.300 10:56:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local blksize=512 00:14:10.300 10:56:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local bdev_size_mb=32 00:14:10.300 10:56:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local new_bdev_size_mb=64 00:14:10.300 10:56:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local blkcnt 00:14:10.300 10:56:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local raid_size_mb 00:14:10.300 10:56:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@353 -- # local new_raid_size_mb 00:14:10.300 10:56:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@354 -- # local expected_size 00:14:10.300 10:56:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # raid_pid=3548094 00:14:10.300 10:56:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@358 -- # echo 'Process raid pid: 3548094' 00:14:10.300 Process raid pid: 3548094 00:14:10.300 10:56:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # waitforlisten 3548094 /var/tmp/spdk-raid.sock 00:14:10.300 10:56:17 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@831 -- # '[' -z 3548094 ']' 00:14:10.300 10:56:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:10.300 10:56:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:10.300 10:56:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:10.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:10.300 10:56:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:10.300 10:56:17 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.300 10:56:17 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:10.300 [2024-07-25 10:56:17.119233] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:14:10.300 [2024-07-25 10:56:17.119351] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:10.300 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:10.300 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:10.300 
00:14:10.301 [2024-07-25 10:56:17.347876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.559 [2024-07-25 10:56:17.613287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.127 [2024-07-25 10:56:17.958875] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:11.127 [2024-07-25 10:56:17.958910] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:11.127 10:56:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:11.127 10:56:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:14:11.127 10:56:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:14:11.385 Base_1 00:14:11.385 10:56:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:14:11.953 Base_2 00:14:11.953 10:56:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@364 -- # '[' 0
-eq 0 ']' 00:14:11.953 10:56:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@365 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:14:11.953 [2024-07-25 10:56:18.955303] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:11.953 [2024-07-25 10:56:18.957606] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:11.953 [2024-07-25 10:56:18.957674] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:11.953 [2024-07-25 10:56:18.957695] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:11.953 [2024-07-25 10:56:18.958040] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000103d0 00:14:11.953 [2024-07-25 10:56:18.958231] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:11.953 [2024-07-25 10:56:18.958245] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:14:11.953 [2024-07-25 10:56:18.958476] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.953 10:56:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@371 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:14:12.212 [2024-07-25 10:56:19.167817] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:12.212 [2024-07-25 10:56:19.167850] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:14:12.212 true 00:14:12.212 10:56:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@374 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:12.212 10:56:19 bdev_raid.raid0_resize_test -- 
bdev/bdev_raid.sh@374 -- # jq '.[].num_blocks' 00:14:12.780 [2024-07-25 10:56:19.669430] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:12.780 10:56:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@374 -- # blkcnt=131072 00:14:12.780 10:56:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # raid_size_mb=64 00:14:12.780 10:56:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # '[' 0 -eq 0 ']' 00:14:12.780 10:56:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # expected_size=64 00:14:12.780 10:56:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 64 '!=' 64 ']' 00:14:12.780 10:56:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:14:12.780 [2024-07-25 10:56:19.845673] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:12.780 [2024-07-25 10:56:19.845704] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:14:12.780 [2024-07-25 10:56:19.845744] bdev_raid.c:2331:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:14:12.780 true 00:14:12.780 10:56:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@390 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:12.780 10:56:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@390 -- # jq '.[].num_blocks' 00:14:13.349 [2024-07-25 10:56:20.343214] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:13.349 10:56:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@390 -- # blkcnt=262144 00:14:13.349 10:56:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@391 -- # raid_size_mb=128 00:14:13.349 10:56:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@392 -- # '[' 
0 -eq 0 ']' 00:14:13.349 10:56:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@393 -- # expected_size=128 00:14:13.349 10:56:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@397 -- # '[' 128 '!=' 128 ']' 00:14:13.349 10:56:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@402 -- # killprocess 3548094 00:14:13.349 10:56:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # '[' -z 3548094 ']' 00:14:13.349 10:56:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 3548094 00:14:13.349 10:56:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:14:13.349 10:56:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:13.349 10:56:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3548094 00:14:13.349 10:56:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:13.349 10:56:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:13.349 10:56:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3548094' 00:14:13.349 killing process with pid 3548094 00:14:13.349 10:56:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 3548094 00:14:13.349 10:56:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 3548094 00:14:13.349 [2024-07-25 10:56:20.428502] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:13.349 [2024-07-25 10:56:20.428598] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:13.349 [2024-07-25 10:56:20.428659] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:13.349 [2024-07-25 10:56:20.428674] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:14:13.349 [2024-07-25 
10:56:20.442903] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:15.251 10:56:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@404 -- # return 0 00:14:15.251 00:14:15.251 real 0m5.099s 00:14:15.251 user 0m6.934s 00:14:15.251 sys 0m0.859s 00:14:15.251 10:56:22 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:15.251 10:56:22 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.251 ************************************ 00:14:15.251 END TEST raid0_resize_test 00:14:15.251 ************************************ 00:14:15.251 10:56:22 bdev_raid -- bdev/bdev_raid.sh@943 -- # run_test raid1_resize_test raid_resize_test 1 00:14:15.251 10:56:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:15.251 10:56:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:15.251 10:56:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:15.251 ************************************ 00:14:15.251 START TEST raid1_resize_test 00:14:15.251 ************************************ 00:14:15.251 10:56:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1 00:14:15.251 10:56:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # local raid_level=1 00:14:15.251 10:56:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@348 -- # local blksize=512 00:14:15.251 10:56:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # local bdev_size_mb=32 00:14:15.251 10:56:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@350 -- # local new_bdev_size_mb=64 00:14:15.251 10:56:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@351 -- # local blkcnt 00:14:15.251 10:56:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # local raid_size_mb 00:14:15.251 10:56:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@353 -- # local new_raid_size_mb 00:14:15.251 10:56:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@354 -- # local 
expected_size 00:14:15.251 10:56:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@357 -- # raid_pid=3548925 00:14:15.251 10:56:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@358 -- # echo 'Process raid pid: 3548925' 00:14:15.251 Process raid pid: 3548925 00:14:15.251 10:56:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:15.251 10:56:22 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # waitforlisten 3548925 /var/tmp/spdk-raid.sock 00:14:15.251 10:56:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 3548925 ']' 00:14:15.251 10:56:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:15.252 10:56:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:15.252 10:56:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:15.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:15.252 10:56:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:15.252 10:56:22 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.252 [2024-07-25 10:56:22.298654] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:14:15.252 [2024-07-25 10:56:22.298774] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:15.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:15.511 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:15.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:15.511 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:15.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:15.511 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:15.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:15.511 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:15.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:15.511 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:15.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:15.511 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:15.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:15.511 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:15.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:15.511 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:15.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:15.511 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:15.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:15.511 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:15.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:15.511 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:15.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:15.511 EAL: Requested device 0000:3d:02.3 cannot be used 00:14:15.511 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:15.511 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:15.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:15.511 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:15.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:15.511 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:15.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:15.511 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:15.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:15.511 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:15.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:15.511 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:15.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:15.511 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:15.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:15.511 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:15.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:15.511 EAL: Requested device 0000:3f:01.4 cannot be used 00:14:15.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:15.511 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:15.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:15.511 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:15.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:15.511 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:15.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:15.511 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:15.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:15.511 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:15.511 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:15.511 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:15.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:15.511 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:15.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:15.511 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:15.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:15.511 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:15.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:15.511 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:15.511 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:15.511 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:15.511 [2024-07-25 10:56:22.528430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.771 [2024-07-25 10:56:22.819416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.338 [2024-07-25 10:56:23.174754] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:16.338 [2024-07-25 10:56:23.174805] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:16.338 10:56:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:16.338 10:56:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0 00:14:16.338 10:56:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:14:16.596 Base_1 00:14:16.596 10:56:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@362 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:14:16.854 Base_2 00:14:16.854 10:56:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # '[' 1 
-eq 0 ']' 00:14:16.854 10:56:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@367 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r 1 -b 'Base_1 Base_2' -n Raid 00:14:17.113 [2024-07-25 10:56:24.008231] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:17.113 [2024-07-25 10:56:24.010515] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:17.113 [2024-07-25 10:56:24.010585] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:17.113 [2024-07-25 10:56:24.010607] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:17.113 [2024-07-25 10:56:24.010956] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000103d0 00:14:17.113 [2024-07-25 10:56:24.011164] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:17.113 [2024-07-25 10:56:24.011184] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:14:17.113 [2024-07-25 10:56:24.011399] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:17.113 10:56:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@371 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:14:17.372 [2024-07-25 10:56:24.240784] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:17.372 [2024-07-25 10:56:24.240814] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:14:17.372 true 00:14:17.372 10:56:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@374 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:17.372 10:56:24 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@374 -- # jq '.[].num_blocks' 00:14:17.372 [2024-07-25 10:56:24.469625] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:17.630 10:56:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@374 -- # blkcnt=65536 00:14:17.630 10:56:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # raid_size_mb=32 00:14:17.630 10:56:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # '[' 1 -eq 0 ']' 00:14:17.630 10:56:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@379 -- # expected_size=32 00:14:17.630 10:56:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 32 '!=' 32 ']' 00:14:17.630 10:56:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:14:17.630 [2024-07-25 10:56:24.702046] bdev_raid.c:2304:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:17.630 [2024-07-25 10:56:24.702080] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:14:17.630 [2024-07-25 10:56:24.702114] bdev_raid.c:2331:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:14:17.630 true 00:14:17.630 10:56:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@390 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:17.630 10:56:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@390 -- # jq '.[].num_blocks' 00:14:17.889 [2024-07-25 10:56:24.870706] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:17.889 10:56:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@390 -- # blkcnt=131072 00:14:17.889 10:56:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@391 -- # raid_size_mb=64 00:14:17.889 10:56:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@392 -- # '[' 1 
-eq 0 ']' 00:14:17.889 10:56:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@395 -- # expected_size=64 00:14:17.889 10:56:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@397 -- # '[' 64 '!=' 64 ']' 00:14:17.889 10:56:24 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@402 -- # killprocess 3548925 00:14:17.889 10:56:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 3548925 ']' 00:14:17.889 10:56:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 3548925 00:14:17.889 10:56:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname 00:14:17.889 10:56:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:17.889 10:56:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3548925 00:14:17.889 10:56:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:17.889 10:56:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:17.889 10:56:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3548925' 00:14:17.889 killing process with pid 3548925 00:14:17.889 10:56:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 3548925 00:14:17.889 [2024-07-25 10:56:24.958405] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:17.889 [2024-07-25 10:56:24.958501] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:17.889 10:56:24 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 3548925 00:14:17.889 [2024-07-25 10:56:24.959021] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:17.889 [2024-07-25 10:56:24.959042] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:14:17.889 [2024-07-25 
10:56:24.972611] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:19.794 10:56:26 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@404 -- # return 0 00:14:19.794 00:14:19.794 real 0m4.452s 00:14:19.794 user 0m5.694s 00:14:19.794 sys 0m0.789s 00:14:19.794 10:56:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:19.794 10:56:26 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.794 ************************************ 00:14:19.794 END TEST raid1_resize_test 00:14:19.794 ************************************ 00:14:19.794 10:56:26 bdev_raid -- bdev/bdev_raid.sh@945 -- # for n in {2..4} 00:14:19.794 10:56:26 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:14:19.794 10:56:26 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:14:19.794 10:56:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:19.794 10:56:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:19.794 10:56:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:19.794 ************************************ 00:14:19.794 START TEST raid_state_function_test 00:14:19.794 ************************************ 00:14:19.794 10:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:14:19.794 10:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:14:19.794 10:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:14:19.794 10:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:14:19.794 10:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:19.794 10:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:19.794 10:56:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:19.794 10:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:19.794 10:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:19.794 10:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:19.794 10:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:19.794 10:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:19.794 10:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:19.794 10:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:19.794 10:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:19.794 10:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:19.794 10:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:19.794 10:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:19.794 10:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:19.794 10:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:14:19.794 10:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:19.794 10:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:19.794 10:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:14:19.794 10:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:14:19.794 10:56:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=3549751 00:14:19.794 10:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3549751' 00:14:19.794 Process raid pid: 3549751 00:14:19.794 10:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:19.794 10:56:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 3549751 /var/tmp/spdk-raid.sock 00:14:19.794 10:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 3549751 ']' 00:14:19.794 10:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:19.794 10:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:19.794 10:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:19.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:19.794 10:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:19.794 10:56:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.794 [2024-07-25 10:56:26.820132] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:14:19.794 [2024-07-25 10:56:26.820224] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:20.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.053 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:20.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.053 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:20.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.053 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:20.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.053 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:20.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.053 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:20.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.053 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:20.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.053 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:20.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.053 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:20.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.053 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:20.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.053 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:20.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.053 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:20.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.053 EAL: Requested device 0000:3d:02.3 cannot be used 00:14:20.053 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.053 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:20.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.053 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:20.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.053 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:20.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.053 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:20.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.053 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:20.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.053 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:20.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.053 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:20.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.053 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:20.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.053 EAL: Requested device 0000:3f:01.4 cannot be used 00:14:20.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.053 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:20.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.053 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:20.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.053 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:20.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.053 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:20.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.053 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:20.053 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.053 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:20.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.053 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:20.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.053 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:20.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.053 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:20.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.053 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:20.053 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:20.053 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:20.053 [2024-07-25 10:56:27.020986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.312 [2024-07-25 10:56:27.307560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.571 [2024-07-25 10:56:27.659993] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:20.571 [2024-07-25 10:56:27.660043] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:20.829 10:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:20.829 10:56:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:14:20.829 10:56:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:21.088 [2024-07-25 10:56:28.061518] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:21.088 [2024-07-25 10:56:28.061578] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 
00:14:21.088 [2024-07-25 10:56:28.061593] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:21.088 [2024-07-25 10:56:28.061610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:21.088 10:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:21.088 10:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:21.088 10:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:21.088 10:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:21.088 10:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:21.088 10:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:21.088 10:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:21.088 10:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:21.088 10:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:21.088 10:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:21.088 10:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:21.088 10:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.347 10:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:21.347 "name": "Existed_Raid", 00:14:21.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.347 "strip_size_kb": 64, 
00:14:21.347 "state": "configuring", 00:14:21.347 "raid_level": "raid0", 00:14:21.347 "superblock": false, 00:14:21.347 "num_base_bdevs": 2, 00:14:21.347 "num_base_bdevs_discovered": 0, 00:14:21.347 "num_base_bdevs_operational": 2, 00:14:21.347 "base_bdevs_list": [ 00:14:21.347 { 00:14:21.347 "name": "BaseBdev1", 00:14:21.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.347 "is_configured": false, 00:14:21.347 "data_offset": 0, 00:14:21.347 "data_size": 0 00:14:21.347 }, 00:14:21.347 { 00:14:21.347 "name": "BaseBdev2", 00:14:21.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.347 "is_configured": false, 00:14:21.347 "data_offset": 0, 00:14:21.347 "data_size": 0 00:14:21.347 } 00:14:21.347 ] 00:14:21.347 }' 00:14:21.347 10:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:21.347 10:56:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.912 10:56:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:22.171 [2024-07-25 10:56:29.084347] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:22.171 [2024-07-25 10:56:29.084393] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:22.171 10:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:22.428 [2024-07-25 10:56:29.312986] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:22.428 [2024-07-25 10:56:29.313033] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:22.428 [2024-07-25 10:56:29.313051] 
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:22.428 [2024-07-25 10:56:29.313069] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:22.428 10:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:22.687 [2024-07-25 10:56:29.593935] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:22.687 BaseBdev1 00:14:22.687 10:56:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:22.687 10:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:22.687 10:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:22.687 10:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:22.687 10:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:22.687 10:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:22.687 10:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:22.945 10:56:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:22.945 [ 00:14:22.945 { 00:14:22.945 "name": "BaseBdev1", 00:14:22.945 "aliases": [ 00:14:22.945 "235aed2f-aecc-412b-af5e-0bec84d0022c" 00:14:22.945 ], 00:14:22.945 "product_name": "Malloc disk", 00:14:22.945 "block_size": 512, 00:14:22.945 "num_blocks": 65536, 00:14:22.945 "uuid": 
"235aed2f-aecc-412b-af5e-0bec84d0022c", 00:14:22.945 "assigned_rate_limits": { 00:14:22.945 "rw_ios_per_sec": 0, 00:14:22.945 "rw_mbytes_per_sec": 0, 00:14:22.945 "r_mbytes_per_sec": 0, 00:14:22.945 "w_mbytes_per_sec": 0 00:14:22.945 }, 00:14:22.945 "claimed": true, 00:14:22.945 "claim_type": "exclusive_write", 00:14:22.945 "zoned": false, 00:14:22.945 "supported_io_types": { 00:14:22.945 "read": true, 00:14:22.945 "write": true, 00:14:22.945 "unmap": true, 00:14:22.945 "flush": true, 00:14:22.945 "reset": true, 00:14:22.945 "nvme_admin": false, 00:14:22.945 "nvme_io": false, 00:14:22.945 "nvme_io_md": false, 00:14:22.945 "write_zeroes": true, 00:14:22.945 "zcopy": true, 00:14:22.945 "get_zone_info": false, 00:14:22.945 "zone_management": false, 00:14:22.945 "zone_append": false, 00:14:22.945 "compare": false, 00:14:22.945 "compare_and_write": false, 00:14:22.945 "abort": true, 00:14:22.945 "seek_hole": false, 00:14:22.945 "seek_data": false, 00:14:22.945 "copy": true, 00:14:22.945 "nvme_iov_md": false 00:14:22.945 }, 00:14:22.945 "memory_domains": [ 00:14:22.945 { 00:14:22.945 "dma_device_id": "system", 00:14:22.945 "dma_device_type": 1 00:14:22.945 }, 00:14:22.945 { 00:14:22.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.945 "dma_device_type": 2 00:14:22.945 } 00:14:22.945 ], 00:14:22.945 "driver_specific": {} 00:14:22.945 } 00:14:22.945 ] 00:14:23.204 10:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:23.204 10:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:23.204 10:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:23.204 10:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:23.204 10:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:23.204 10:56:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:23.204 10:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:23.204 10:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:23.204 10:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:23.204 10:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:23.204 10:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:23.204 10:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:23.204 10:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.204 10:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:23.204 "name": "Existed_Raid", 00:14:23.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.204 "strip_size_kb": 64, 00:14:23.204 "state": "configuring", 00:14:23.204 "raid_level": "raid0", 00:14:23.204 "superblock": false, 00:14:23.204 "num_base_bdevs": 2, 00:14:23.204 "num_base_bdevs_discovered": 1, 00:14:23.204 "num_base_bdevs_operational": 2, 00:14:23.204 "base_bdevs_list": [ 00:14:23.204 { 00:14:23.204 "name": "BaseBdev1", 00:14:23.204 "uuid": "235aed2f-aecc-412b-af5e-0bec84d0022c", 00:14:23.204 "is_configured": true, 00:14:23.204 "data_offset": 0, 00:14:23.204 "data_size": 65536 00:14:23.204 }, 00:14:23.204 { 00:14:23.204 "name": "BaseBdev2", 00:14:23.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.204 "is_configured": false, 00:14:23.204 "data_offset": 0, 00:14:23.204 "data_size": 0 00:14:23.204 } 00:14:23.204 ] 00:14:23.204 }' 00:14:23.204 10:56:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:23.204 10:56:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.772 10:56:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:24.030 [2024-07-25 10:56:31.045953] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:24.030 [2024-07-25 10:56:31.046018] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:24.031 10:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:24.353 [2024-07-25 10:56:31.278652] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:24.353 [2024-07-25 10:56:31.280975] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:24.353 [2024-07-25 10:56:31.281031] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:24.353 10:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:24.353 10:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:24.353 10:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:24.354 10:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:24.354 10:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:24.354 10:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid0 00:14:24.354 10:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:24.354 10:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:24.354 10:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:24.354 10:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:24.354 10:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:24.354 10:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:24.354 10:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.354 10:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:24.612 10:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:24.612 "name": "Existed_Raid", 00:14:24.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.612 "strip_size_kb": 64, 00:14:24.612 "state": "configuring", 00:14:24.612 "raid_level": "raid0", 00:14:24.612 "superblock": false, 00:14:24.612 "num_base_bdevs": 2, 00:14:24.612 "num_base_bdevs_discovered": 1, 00:14:24.612 "num_base_bdevs_operational": 2, 00:14:24.612 "base_bdevs_list": [ 00:14:24.612 { 00:14:24.612 "name": "BaseBdev1", 00:14:24.612 "uuid": "235aed2f-aecc-412b-af5e-0bec84d0022c", 00:14:24.612 "is_configured": true, 00:14:24.612 "data_offset": 0, 00:14:24.612 "data_size": 65536 00:14:24.612 }, 00:14:24.612 { 00:14:24.612 "name": "BaseBdev2", 00:14:24.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.612 "is_configured": false, 00:14:24.612 "data_offset": 0, 00:14:24.612 "data_size": 0 00:14:24.612 } 00:14:24.612 ] 00:14:24.612 }' 
00:14:24.612 10:56:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:24.612 10:56:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.179 10:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:25.179 [2024-07-25 10:56:32.278605] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:25.179 [2024-07-25 10:56:32.278657] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:25.179 [2024-07-25 10:56:32.278671] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:25.179 [2024-07-25 10:56:32.279022] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:14:25.179 [2024-07-25 10:56:32.279247] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:25.179 [2024-07-25 10:56:32.279265] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:25.179 [2024-07-25 10:56:32.279593] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:25.179 BaseBdev2 00:14:25.438 10:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:25.438 10:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:25.438 10:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:25.438 10:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:14:25.438 10:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:25.438 10:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:14:25.438 10:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:25.438 10:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:25.761 [ 00:14:25.761 { 00:14:25.761 "name": "BaseBdev2", 00:14:25.761 "aliases": [ 00:14:25.761 "a70c18b8-536f-4bba-b327-68e730bf80a2" 00:14:25.761 ], 00:14:25.761 "product_name": "Malloc disk", 00:14:25.761 "block_size": 512, 00:14:25.761 "num_blocks": 65536, 00:14:25.761 "uuid": "a70c18b8-536f-4bba-b327-68e730bf80a2", 00:14:25.761 "assigned_rate_limits": { 00:14:25.761 "rw_ios_per_sec": 0, 00:14:25.761 "rw_mbytes_per_sec": 0, 00:14:25.761 "r_mbytes_per_sec": 0, 00:14:25.761 "w_mbytes_per_sec": 0 00:14:25.761 }, 00:14:25.761 "claimed": true, 00:14:25.761 "claim_type": "exclusive_write", 00:14:25.761 "zoned": false, 00:14:25.761 "supported_io_types": { 00:14:25.761 "read": true, 00:14:25.761 "write": true, 00:14:25.761 "unmap": true, 00:14:25.761 "flush": true, 00:14:25.761 "reset": true, 00:14:25.761 "nvme_admin": false, 00:14:25.761 "nvme_io": false, 00:14:25.761 "nvme_io_md": false, 00:14:25.761 "write_zeroes": true, 00:14:25.761 "zcopy": true, 00:14:25.761 "get_zone_info": false, 00:14:25.761 "zone_management": false, 00:14:25.761 "zone_append": false, 00:14:25.761 "compare": false, 00:14:25.761 "compare_and_write": false, 00:14:25.761 "abort": true, 00:14:25.761 "seek_hole": false, 00:14:25.761 "seek_data": false, 00:14:25.761 "copy": true, 00:14:25.761 "nvme_iov_md": false 00:14:25.761 }, 00:14:25.761 "memory_domains": [ 00:14:25.761 { 00:14:25.761 "dma_device_id": "system", 00:14:25.761 "dma_device_type": 1 00:14:25.761 }, 00:14:25.761 { 00:14:25.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.761 
"dma_device_type": 2 00:14:25.761 } 00:14:25.761 ], 00:14:25.761 "driver_specific": {} 00:14:25.761 } 00:14:25.761 ] 00:14:25.761 10:56:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:14:25.761 10:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:25.761 10:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:25.761 10:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:14:25.761 10:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:25.761 10:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:25.761 10:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:25.761 10:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:25.761 10:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:25.761 10:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:25.761 10:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:25.761 10:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:25.761 10:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:25.761 10:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:25.761 10:56:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.020 10:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:14:26.020 "name": "Existed_Raid", 00:14:26.020 "uuid": "929ee8be-9baf-4fa1-99e1-8905b657956e", 00:14:26.020 "strip_size_kb": 64, 00:14:26.020 "state": "online", 00:14:26.020 "raid_level": "raid0", 00:14:26.020 "superblock": false, 00:14:26.020 "num_base_bdevs": 2, 00:14:26.020 "num_base_bdevs_discovered": 2, 00:14:26.020 "num_base_bdevs_operational": 2, 00:14:26.020 "base_bdevs_list": [ 00:14:26.020 { 00:14:26.020 "name": "BaseBdev1", 00:14:26.020 "uuid": "235aed2f-aecc-412b-af5e-0bec84d0022c", 00:14:26.020 "is_configured": true, 00:14:26.020 "data_offset": 0, 00:14:26.020 "data_size": 65536 00:14:26.020 }, 00:14:26.020 { 00:14:26.020 "name": "BaseBdev2", 00:14:26.020 "uuid": "a70c18b8-536f-4bba-b327-68e730bf80a2", 00:14:26.020 "is_configured": true, 00:14:26.020 "data_offset": 0, 00:14:26.020 "data_size": 65536 00:14:26.020 } 00:14:26.020 ] 00:14:26.020 }' 00:14:26.020 10:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:26.020 10:56:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.586 10:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:26.586 10:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:26.586 10:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:26.586 10:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:26.586 10:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:26.587 10:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:26.587 10:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:26.587 10:56:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:26.845 [2024-07-25 10:56:33.742976] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:26.845 10:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:26.845 "name": "Existed_Raid", 00:14:26.845 "aliases": [ 00:14:26.845 "929ee8be-9baf-4fa1-99e1-8905b657956e" 00:14:26.845 ], 00:14:26.845 "product_name": "Raid Volume", 00:14:26.845 "block_size": 512, 00:14:26.845 "num_blocks": 131072, 00:14:26.845 "uuid": "929ee8be-9baf-4fa1-99e1-8905b657956e", 00:14:26.845 "assigned_rate_limits": { 00:14:26.845 "rw_ios_per_sec": 0, 00:14:26.845 "rw_mbytes_per_sec": 0, 00:14:26.845 "r_mbytes_per_sec": 0, 00:14:26.845 "w_mbytes_per_sec": 0 00:14:26.845 }, 00:14:26.845 "claimed": false, 00:14:26.845 "zoned": false, 00:14:26.845 "supported_io_types": { 00:14:26.845 "read": true, 00:14:26.845 "write": true, 00:14:26.845 "unmap": true, 00:14:26.845 "flush": true, 00:14:26.845 "reset": true, 00:14:26.845 "nvme_admin": false, 00:14:26.845 "nvme_io": false, 00:14:26.845 "nvme_io_md": false, 00:14:26.845 "write_zeroes": true, 00:14:26.845 "zcopy": false, 00:14:26.845 "get_zone_info": false, 00:14:26.845 "zone_management": false, 00:14:26.845 "zone_append": false, 00:14:26.845 "compare": false, 00:14:26.845 "compare_and_write": false, 00:14:26.845 "abort": false, 00:14:26.845 "seek_hole": false, 00:14:26.845 "seek_data": false, 00:14:26.845 "copy": false, 00:14:26.845 "nvme_iov_md": false 00:14:26.845 }, 00:14:26.845 "memory_domains": [ 00:14:26.845 { 00:14:26.845 "dma_device_id": "system", 00:14:26.845 "dma_device_type": 1 00:14:26.845 }, 00:14:26.845 { 00:14:26.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.845 "dma_device_type": 2 00:14:26.845 }, 00:14:26.845 { 00:14:26.845 "dma_device_id": "system", 00:14:26.845 "dma_device_type": 1 00:14:26.845 }, 00:14:26.845 { 00:14:26.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:14:26.845 "dma_device_type": 2 00:14:26.845 } 00:14:26.845 ], 00:14:26.845 "driver_specific": { 00:14:26.846 "raid": { 00:14:26.846 "uuid": "929ee8be-9baf-4fa1-99e1-8905b657956e", 00:14:26.846 "strip_size_kb": 64, 00:14:26.846 "state": "online", 00:14:26.846 "raid_level": "raid0", 00:14:26.846 "superblock": false, 00:14:26.846 "num_base_bdevs": 2, 00:14:26.846 "num_base_bdevs_discovered": 2, 00:14:26.846 "num_base_bdevs_operational": 2, 00:14:26.846 "base_bdevs_list": [ 00:14:26.846 { 00:14:26.846 "name": "BaseBdev1", 00:14:26.846 "uuid": "235aed2f-aecc-412b-af5e-0bec84d0022c", 00:14:26.846 "is_configured": true, 00:14:26.846 "data_offset": 0, 00:14:26.846 "data_size": 65536 00:14:26.846 }, 00:14:26.846 { 00:14:26.846 "name": "BaseBdev2", 00:14:26.846 "uuid": "a70c18b8-536f-4bba-b327-68e730bf80a2", 00:14:26.846 "is_configured": true, 00:14:26.846 "data_offset": 0, 00:14:26.846 "data_size": 65536 00:14:26.846 } 00:14:26.846 ] 00:14:26.846 } 00:14:26.846 } 00:14:26.846 }' 00:14:26.846 10:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:26.846 10:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:26.846 BaseBdev2' 00:14:26.846 10:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:26.846 10:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:26.846 10:56:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:27.104 10:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:27.104 "name": "BaseBdev1", 00:14:27.104 "aliases": [ 00:14:27.104 "235aed2f-aecc-412b-af5e-0bec84d0022c" 00:14:27.104 ], 00:14:27.104 "product_name": "Malloc disk", 
00:14:27.104 "block_size": 512, 00:14:27.104 "num_blocks": 65536, 00:14:27.104 "uuid": "235aed2f-aecc-412b-af5e-0bec84d0022c", 00:14:27.104 "assigned_rate_limits": { 00:14:27.104 "rw_ios_per_sec": 0, 00:14:27.104 "rw_mbytes_per_sec": 0, 00:14:27.104 "r_mbytes_per_sec": 0, 00:14:27.104 "w_mbytes_per_sec": 0 00:14:27.104 }, 00:14:27.104 "claimed": true, 00:14:27.104 "claim_type": "exclusive_write", 00:14:27.104 "zoned": false, 00:14:27.104 "supported_io_types": { 00:14:27.104 "read": true, 00:14:27.104 "write": true, 00:14:27.104 "unmap": true, 00:14:27.104 "flush": true, 00:14:27.104 "reset": true, 00:14:27.104 "nvme_admin": false, 00:14:27.104 "nvme_io": false, 00:14:27.104 "nvme_io_md": false, 00:14:27.104 "write_zeroes": true, 00:14:27.104 "zcopy": true, 00:14:27.104 "get_zone_info": false, 00:14:27.104 "zone_management": false, 00:14:27.104 "zone_append": false, 00:14:27.104 "compare": false, 00:14:27.104 "compare_and_write": false, 00:14:27.104 "abort": true, 00:14:27.104 "seek_hole": false, 00:14:27.104 "seek_data": false, 00:14:27.104 "copy": true, 00:14:27.104 "nvme_iov_md": false 00:14:27.104 }, 00:14:27.104 "memory_domains": [ 00:14:27.104 { 00:14:27.104 "dma_device_id": "system", 00:14:27.104 "dma_device_type": 1 00:14:27.104 }, 00:14:27.104 { 00:14:27.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.104 "dma_device_type": 2 00:14:27.104 } 00:14:27.104 ], 00:14:27.104 "driver_specific": {} 00:14:27.104 }' 00:14:27.104 10:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:27.104 10:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:27.104 10:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:27.104 10:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:27.104 10:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:27.104 10:56:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:27.104 10:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:27.104 10:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:27.363 10:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:27.363 10:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:27.363 10:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:27.363 10:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:27.363 10:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:27.363 10:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:27.363 10:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:27.363 10:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:27.363 "name": "BaseBdev2", 00:14:27.363 "aliases": [ 00:14:27.363 "a70c18b8-536f-4bba-b327-68e730bf80a2" 00:14:27.363 ], 00:14:27.363 "product_name": "Malloc disk", 00:14:27.363 "block_size": 512, 00:14:27.363 "num_blocks": 65536, 00:14:27.363 "uuid": "a70c18b8-536f-4bba-b327-68e730bf80a2", 00:14:27.363 "assigned_rate_limits": { 00:14:27.363 "rw_ios_per_sec": 0, 00:14:27.363 "rw_mbytes_per_sec": 0, 00:14:27.363 "r_mbytes_per_sec": 0, 00:14:27.363 "w_mbytes_per_sec": 0 00:14:27.363 }, 00:14:27.363 "claimed": true, 00:14:27.363 "claim_type": "exclusive_write", 00:14:27.363 "zoned": false, 00:14:27.363 "supported_io_types": { 00:14:27.363 "read": true, 00:14:27.363 "write": true, 00:14:27.363 "unmap": true, 00:14:27.363 "flush": true, 00:14:27.363 "reset": 
true, 00:14:27.363 "nvme_admin": false, 00:14:27.363 "nvme_io": false, 00:14:27.363 "nvme_io_md": false, 00:14:27.363 "write_zeroes": true, 00:14:27.363 "zcopy": true, 00:14:27.363 "get_zone_info": false, 00:14:27.363 "zone_management": false, 00:14:27.363 "zone_append": false, 00:14:27.363 "compare": false, 00:14:27.363 "compare_and_write": false, 00:14:27.363 "abort": true, 00:14:27.363 "seek_hole": false, 00:14:27.363 "seek_data": false, 00:14:27.363 "copy": true, 00:14:27.363 "nvme_iov_md": false 00:14:27.363 }, 00:14:27.363 "memory_domains": [ 00:14:27.363 { 00:14:27.363 "dma_device_id": "system", 00:14:27.363 "dma_device_type": 1 00:14:27.363 }, 00:14:27.363 { 00:14:27.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.363 "dma_device_type": 2 00:14:27.363 } 00:14:27.363 ], 00:14:27.363 "driver_specific": {} 00:14:27.363 }' 00:14:27.363 10:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:27.622 10:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:27.622 10:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:27.622 10:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:27.622 10:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:27.622 10:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:27.622 10:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:27.622 10:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:27.622 10:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:27.622 10:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:27.880 10:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:27.880 10:56:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:27.880 10:56:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:28.139 [2024-07-25 10:56:35.006134] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:28.139 [2024-07-25 10:56:35.006184] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:28.139 [2024-07-25 10:56:35.006249] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:28.139 10:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:28.139 10:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:14:28.139 10:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:28.139 10:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:28.139 10:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:28.139 10:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:14:28.139 10:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:28.139 10:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:28.139 10:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:28.139 10:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:28.139 10:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:14:28.139 10:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:14:28.139 10:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:28.139 10:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:28.139 10:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:28.139 10:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:28.139 10:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.398 10:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:28.398 "name": "Existed_Raid", 00:14:28.398 "uuid": "929ee8be-9baf-4fa1-99e1-8905b657956e", 00:14:28.398 "strip_size_kb": 64, 00:14:28.398 "state": "offline", 00:14:28.398 "raid_level": "raid0", 00:14:28.398 "superblock": false, 00:14:28.398 "num_base_bdevs": 2, 00:14:28.398 "num_base_bdevs_discovered": 1, 00:14:28.398 "num_base_bdevs_operational": 1, 00:14:28.398 "base_bdevs_list": [ 00:14:28.398 { 00:14:28.398 "name": null, 00:14:28.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.398 "is_configured": false, 00:14:28.398 "data_offset": 0, 00:14:28.398 "data_size": 65536 00:14:28.398 }, 00:14:28.398 { 00:14:28.398 "name": "BaseBdev2", 00:14:28.398 "uuid": "a70c18b8-536f-4bba-b327-68e730bf80a2", 00:14:28.398 "is_configured": true, 00:14:28.398 "data_offset": 0, 00:14:28.398 "data_size": 65536 00:14:28.398 } 00:14:28.398 ] 00:14:28.398 }' 00:14:28.398 10:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:28.398 10:56:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.965 10:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:28.965 10:56:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:28.965 10:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:28.965 10:56:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:29.224 10:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:29.224 10:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:29.224 10:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:29.224 [2024-07-25 10:56:36.297440] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:29.224 [2024-07-25 10:56:36.297506] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:29.483 10:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:29.483 10:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:29.483 10:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:29.483 10:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:29.742 10:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:29.742 10:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:29.742 10:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:14:29.742 10:56:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 
3549751 00:14:29.742 10:56:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 3549751 ']' 00:14:29.742 10:56:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 3549751 00:14:29.742 10:56:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:14:29.742 10:56:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:29.742 10:56:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3549751 00:14:29.742 10:56:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:29.742 10:56:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:29.742 10:56:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3549751' 00:14:29.742 killing process with pid 3549751 00:14:29.742 10:56:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 3549751 00:14:29.742 [2024-07-25 10:56:36.776480] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:29.742 10:56:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 3549751 00:14:29.742 [2024-07-25 10:56:36.800282] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:31.647 10:56:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:14:31.647 00:14:31.647 real 0m11.764s 00:14:31.647 user 0m19.182s 00:14:31.647 sys 0m2.003s 00:14:31.647 10:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:31.647 10:56:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.647 ************************************ 00:14:31.647 END TEST raid_state_function_test 00:14:31.647 ************************************ 00:14:31.647 10:56:38 
bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:14:31.647 10:56:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:31.647 10:56:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:31.647 10:56:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:31.647 ************************************ 00:14:31.647 START TEST raid_state_function_test_sb 00:14:31.647 ************************************ 00:14:31.647 10:56:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:14:31.647 10:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:14:31.647 10:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:14:31.648 10:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:14:31.648 10:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:31.648 10:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:31.648 10:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:31.648 10:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:31.648 10:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:31.648 10:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:31.648 10:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:31.648 10:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:31.648 10:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:31.648 10:56:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:31.648 10:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:31.648 10:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:31.648 10:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:31.648 10:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:31.648 10:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:31.648 10:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:14:31.648 10:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:31.648 10:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:31.648 10:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:14:31.648 10:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:14:31.648 10:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=3552083 00:14:31.648 10:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3552083' 00:14:31.648 Process raid pid: 3552083 00:14:31.648 10:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:31.648 10:56:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 3552083 /var/tmp/spdk-raid.sock 00:14:31.648 10:56:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 3552083 ']' 00:14:31.648 10:56:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:31.648 10:56:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:31.648 10:56:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:31.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:31.648 10:56:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:31.648 10:56:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.648 [2024-07-25 10:56:38.677867] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:31.648 [2024-07-25 10:56:38.677977] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.908 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:31.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.908 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:31.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.908 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:31.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.908 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:31.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.908 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:31.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.908 EAL: Requested device 0000:3d:01.5 cannot be used 
00:14:31.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.908 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:31.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.908 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:31.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.908 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:31.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.908 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:31.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.908 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:31.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.908 EAL: Requested device 0000:3d:02.3 cannot be used 00:14:31.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.908 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:31.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.908 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:31.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.908 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:31.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.908 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:31.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.908 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:31.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.908 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:31.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.908 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:31.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.908 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:31.908 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.908 EAL: Requested device 0000:3f:01.4 cannot be used 00:14:31.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.908 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:31.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.908 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:31.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.908 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:31.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.908 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:31.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.908 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:31.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.908 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:31.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.908 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:31.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.908 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:31.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.908 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:31.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.908 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:31.908 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:31.908 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:31.908 [2024-07-25 10:56:38.904665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.167 [2024-07-25 10:56:39.192909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.736 [2024-07-25 10:56:39.559639] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: 
raid_bdev_get_ctx_size 00:14:32.736 [2024-07-25 10:56:39.559675] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:32.736 10:56:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:32.736 10:56:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:32.736 10:56:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:32.995 [2024-07-25 10:56:40.015528] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:32.995 [2024-07-25 10:56:40.015583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:32.995 [2024-07-25 10:56:40.015598] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:32.995 [2024-07-25 10:56:40.015614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:32.995 10:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:32.995 10:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:32.995 10:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:32.995 10:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:32.995 10:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:32.995 10:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:32.995 10:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:14:32.995 10:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:32.995 10:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:32.995 10:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:32.995 10:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:32.995 10:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:33.562 10:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:33.562 "name": "Existed_Raid", 00:14:33.562 "uuid": "d6e97b6d-2359-4474-8ebf-0629f54fa959", 00:14:33.562 "strip_size_kb": 64, 00:14:33.562 "state": "configuring", 00:14:33.562 "raid_level": "raid0", 00:14:33.562 "superblock": true, 00:14:33.562 "num_base_bdevs": 2, 00:14:33.562 "num_base_bdevs_discovered": 0, 00:14:33.562 "num_base_bdevs_operational": 2, 00:14:33.562 "base_bdevs_list": [ 00:14:33.562 { 00:14:33.562 "name": "BaseBdev1", 00:14:33.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.562 "is_configured": false, 00:14:33.562 "data_offset": 0, 00:14:33.562 "data_size": 0 00:14:33.562 }, 00:14:33.562 { 00:14:33.562 "name": "BaseBdev2", 00:14:33.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.562 "is_configured": false, 00:14:33.562 "data_offset": 0, 00:14:33.562 "data_size": 0 00:14:33.562 } 00:14:33.562 ] 00:14:33.562 }' 00:14:33.562 10:56:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:33.562 10:56:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.129 10:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:34.387 [2024-07-25 10:56:41.290791] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:34.387 [2024-07-25 10:56:41.290832] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:34.387 10:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:34.646 [2024-07-25 10:56:41.515446] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:34.646 [2024-07-25 10:56:41.515492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:34.646 [2024-07-25 10:56:41.515505] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:34.646 [2024-07-25 10:56:41.515522] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:34.646 10:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:34.905 [2024-07-25 10:56:41.797337] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:34.905 BaseBdev1 00:14:34.905 10:56:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:34.905 10:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:34.905 10:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:34.905 10:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local 
i 00:14:34.905 10:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:34.905 10:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:34.905 10:56:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:35.163 10:56:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:35.163 [ 00:14:35.163 { 00:14:35.163 "name": "BaseBdev1", 00:14:35.163 "aliases": [ 00:14:35.163 "a9d5cc7e-9fea-4e13-afd9-bd6f86cb4ae3" 00:14:35.163 ], 00:14:35.163 "product_name": "Malloc disk", 00:14:35.163 "block_size": 512, 00:14:35.163 "num_blocks": 65536, 00:14:35.163 "uuid": "a9d5cc7e-9fea-4e13-afd9-bd6f86cb4ae3", 00:14:35.163 "assigned_rate_limits": { 00:14:35.163 "rw_ios_per_sec": 0, 00:14:35.163 "rw_mbytes_per_sec": 0, 00:14:35.163 "r_mbytes_per_sec": 0, 00:14:35.163 "w_mbytes_per_sec": 0 00:14:35.163 }, 00:14:35.163 "claimed": true, 00:14:35.164 "claim_type": "exclusive_write", 00:14:35.164 "zoned": false, 00:14:35.164 "supported_io_types": { 00:14:35.164 "read": true, 00:14:35.164 "write": true, 00:14:35.164 "unmap": true, 00:14:35.164 "flush": true, 00:14:35.164 "reset": true, 00:14:35.164 "nvme_admin": false, 00:14:35.164 "nvme_io": false, 00:14:35.164 "nvme_io_md": false, 00:14:35.164 "write_zeroes": true, 00:14:35.164 "zcopy": true, 00:14:35.164 "get_zone_info": false, 00:14:35.164 "zone_management": false, 00:14:35.164 "zone_append": false, 00:14:35.164 "compare": false, 00:14:35.164 "compare_and_write": false, 00:14:35.164 "abort": true, 00:14:35.164 "seek_hole": false, 00:14:35.164 "seek_data": false, 00:14:35.164 "copy": true, 00:14:35.164 "nvme_iov_md": false 00:14:35.164 }, 00:14:35.164 
"memory_domains": [ 00:14:35.164 { 00:14:35.164 "dma_device_id": "system", 00:14:35.164 "dma_device_type": 1 00:14:35.164 }, 00:14:35.164 { 00:14:35.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.164 "dma_device_type": 2 00:14:35.164 } 00:14:35.164 ], 00:14:35.164 "driver_specific": {} 00:14:35.164 } 00:14:35.164 ] 00:14:35.164 10:56:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:35.164 10:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:35.164 10:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:35.164 10:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:35.164 10:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:35.164 10:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:35.164 10:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:35.164 10:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:35.164 10:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:35.164 10:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:35.164 10:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:35.164 10:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:35.164 10:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.423 10:56:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:35.423 "name": "Existed_Raid", 00:14:35.423 "uuid": "079a8734-8378-4e4f-b932-76f579d3ed78", 00:14:35.423 "strip_size_kb": 64, 00:14:35.423 "state": "configuring", 00:14:35.423 "raid_level": "raid0", 00:14:35.423 "superblock": true, 00:14:35.423 "num_base_bdevs": 2, 00:14:35.423 "num_base_bdevs_discovered": 1, 00:14:35.423 "num_base_bdevs_operational": 2, 00:14:35.423 "base_bdevs_list": [ 00:14:35.423 { 00:14:35.423 "name": "BaseBdev1", 00:14:35.423 "uuid": "a9d5cc7e-9fea-4e13-afd9-bd6f86cb4ae3", 00:14:35.423 "is_configured": true, 00:14:35.423 "data_offset": 2048, 00:14:35.423 "data_size": 63488 00:14:35.423 }, 00:14:35.423 { 00:14:35.423 "name": "BaseBdev2", 00:14:35.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.423 "is_configured": false, 00:14:35.423 "data_offset": 0, 00:14:35.423 "data_size": 0 00:14:35.423 } 00:14:35.423 ] 00:14:35.423 }' 00:14:35.423 10:56:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:35.423 10:56:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.991 10:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:36.251 [2024-07-25 10:56:43.285401] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:36.251 [2024-07-25 10:56:43.285453] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:36.251 10:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:36.508 [2024-07-25 10:56:43.514097] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:36.508 [2024-07-25 10:56:43.516391] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:36.508 [2024-07-25 10:56:43.516434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:36.508 10:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:36.508 10:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:36.508 10:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:36.509 10:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:36.509 10:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:36.509 10:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:36.509 10:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:36.509 10:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:36.509 10:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:36.509 10:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:36.509 10:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:36.509 10:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:36.509 10:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:36.509 10:56:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.768 10:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:36.768 "name": "Existed_Raid", 00:14:36.768 "uuid": "fffbfdf3-eb1e-4c10-8a38-673324e11c3a", 00:14:36.768 "strip_size_kb": 64, 00:14:36.768 "state": "configuring", 00:14:36.768 "raid_level": "raid0", 00:14:36.768 "superblock": true, 00:14:36.768 "num_base_bdevs": 2, 00:14:36.768 "num_base_bdevs_discovered": 1, 00:14:36.768 "num_base_bdevs_operational": 2, 00:14:36.768 "base_bdevs_list": [ 00:14:36.768 { 00:14:36.768 "name": "BaseBdev1", 00:14:36.768 "uuid": "a9d5cc7e-9fea-4e13-afd9-bd6f86cb4ae3", 00:14:36.768 "is_configured": true, 00:14:36.768 "data_offset": 2048, 00:14:36.768 "data_size": 63488 00:14:36.768 }, 00:14:36.768 { 00:14:36.768 "name": "BaseBdev2", 00:14:36.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.768 "is_configured": false, 00:14:36.768 "data_offset": 0, 00:14:36.768 "data_size": 0 00:14:36.768 } 00:14:36.768 ] 00:14:36.768 }' 00:14:36.768 10:56:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:36.768 10:56:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.335 10:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:37.594 [2024-07-25 10:56:44.589901] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:37.594 [2024-07-25 10:56:44.590175] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:37.594 [2024-07-25 10:56:44.590197] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:37.594 [2024-07-25 10:56:44.590526] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000010570 00:14:37.594 [2024-07-25 10:56:44.590746] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:37.594 [2024-07-25 10:56:44.590764] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:37.594 [2024-07-25 10:56:44.590946] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.594 BaseBdev2 00:14:37.594 10:56:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:37.594 10:56:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:37.594 10:56:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:37.594 10:56:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:37.594 10:56:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:37.594 10:56:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:37.594 10:56:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:37.853 10:56:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:38.112 [ 00:14:38.112 { 00:14:38.112 "name": "BaseBdev2", 00:14:38.112 "aliases": [ 00:14:38.112 "122f6643-2a76-4eb8-adf7-848cff30ce26" 00:14:38.112 ], 00:14:38.112 "product_name": "Malloc disk", 00:14:38.112 "block_size": 512, 00:14:38.112 "num_blocks": 65536, 00:14:38.112 "uuid": "122f6643-2a76-4eb8-adf7-848cff30ce26", 00:14:38.112 "assigned_rate_limits": { 00:14:38.112 "rw_ios_per_sec": 0, 00:14:38.112 
"rw_mbytes_per_sec": 0, 00:14:38.112 "r_mbytes_per_sec": 0, 00:14:38.112 "w_mbytes_per_sec": 0 00:14:38.112 }, 00:14:38.112 "claimed": true, 00:14:38.112 "claim_type": "exclusive_write", 00:14:38.112 "zoned": false, 00:14:38.112 "supported_io_types": { 00:14:38.112 "read": true, 00:14:38.112 "write": true, 00:14:38.112 "unmap": true, 00:14:38.112 "flush": true, 00:14:38.112 "reset": true, 00:14:38.112 "nvme_admin": false, 00:14:38.112 "nvme_io": false, 00:14:38.112 "nvme_io_md": false, 00:14:38.112 "write_zeroes": true, 00:14:38.112 "zcopy": true, 00:14:38.112 "get_zone_info": false, 00:14:38.112 "zone_management": false, 00:14:38.112 "zone_append": false, 00:14:38.112 "compare": false, 00:14:38.112 "compare_and_write": false, 00:14:38.112 "abort": true, 00:14:38.112 "seek_hole": false, 00:14:38.112 "seek_data": false, 00:14:38.112 "copy": true, 00:14:38.112 "nvme_iov_md": false 00:14:38.112 }, 00:14:38.112 "memory_domains": [ 00:14:38.112 { 00:14:38.112 "dma_device_id": "system", 00:14:38.112 "dma_device_type": 1 00:14:38.112 }, 00:14:38.112 { 00:14:38.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.112 "dma_device_type": 2 00:14:38.112 } 00:14:38.112 ], 00:14:38.112 "driver_specific": {} 00:14:38.112 } 00:14:38.112 ] 00:14:38.112 10:56:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:38.112 10:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:38.112 10:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:38.112 10:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:14:38.112 10:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:38.112 10:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:38.112 10:56:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:38.112 10:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:38.112 10:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:38.112 10:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:38.112 10:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:38.112 10:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:38.112 10:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:38.112 10:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.112 10:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:38.371 10:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:38.371 "name": "Existed_Raid", 00:14:38.371 "uuid": "fffbfdf3-eb1e-4c10-8a38-673324e11c3a", 00:14:38.371 "strip_size_kb": 64, 00:14:38.371 "state": "online", 00:14:38.371 "raid_level": "raid0", 00:14:38.371 "superblock": true, 00:14:38.371 "num_base_bdevs": 2, 00:14:38.371 "num_base_bdevs_discovered": 2, 00:14:38.371 "num_base_bdevs_operational": 2, 00:14:38.371 "base_bdevs_list": [ 00:14:38.371 { 00:14:38.371 "name": "BaseBdev1", 00:14:38.371 "uuid": "a9d5cc7e-9fea-4e13-afd9-bd6f86cb4ae3", 00:14:38.371 "is_configured": true, 00:14:38.371 "data_offset": 2048, 00:14:38.371 "data_size": 63488 00:14:38.371 }, 00:14:38.371 { 00:14:38.371 "name": "BaseBdev2", 00:14:38.371 "uuid": "122f6643-2a76-4eb8-adf7-848cff30ce26", 00:14:38.371 "is_configured": true, 00:14:38.371 
"data_offset": 2048, 00:14:38.371 "data_size": 63488 00:14:38.371 } 00:14:38.371 ] 00:14:38.371 }' 00:14:38.371 10:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:38.371 10:56:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.938 10:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:38.938 10:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:38.938 10:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:38.938 10:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:38.938 10:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:38.938 10:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:38.938 10:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:38.938 10:56:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:39.198 [2024-07-25 10:56:46.086334] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:39.198 10:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:39.198 "name": "Existed_Raid", 00:14:39.198 "aliases": [ 00:14:39.198 "fffbfdf3-eb1e-4c10-8a38-673324e11c3a" 00:14:39.198 ], 00:14:39.198 "product_name": "Raid Volume", 00:14:39.198 "block_size": 512, 00:14:39.198 "num_blocks": 126976, 00:14:39.198 "uuid": "fffbfdf3-eb1e-4c10-8a38-673324e11c3a", 00:14:39.198 "assigned_rate_limits": { 00:14:39.198 "rw_ios_per_sec": 0, 00:14:39.198 "rw_mbytes_per_sec": 0, 00:14:39.198 "r_mbytes_per_sec": 0, 00:14:39.198 
"w_mbytes_per_sec": 0 00:14:39.198 }, 00:14:39.198 "claimed": false, 00:14:39.198 "zoned": false, 00:14:39.198 "supported_io_types": { 00:14:39.198 "read": true, 00:14:39.198 "write": true, 00:14:39.198 "unmap": true, 00:14:39.198 "flush": true, 00:14:39.198 "reset": true, 00:14:39.198 "nvme_admin": false, 00:14:39.198 "nvme_io": false, 00:14:39.198 "nvme_io_md": false, 00:14:39.198 "write_zeroes": true, 00:14:39.198 "zcopy": false, 00:14:39.198 "get_zone_info": false, 00:14:39.198 "zone_management": false, 00:14:39.198 "zone_append": false, 00:14:39.198 "compare": false, 00:14:39.198 "compare_and_write": false, 00:14:39.198 "abort": false, 00:14:39.198 "seek_hole": false, 00:14:39.198 "seek_data": false, 00:14:39.198 "copy": false, 00:14:39.198 "nvme_iov_md": false 00:14:39.198 }, 00:14:39.198 "memory_domains": [ 00:14:39.198 { 00:14:39.198 "dma_device_id": "system", 00:14:39.198 "dma_device_type": 1 00:14:39.198 }, 00:14:39.198 { 00:14:39.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.198 "dma_device_type": 2 00:14:39.198 }, 00:14:39.198 { 00:14:39.198 "dma_device_id": "system", 00:14:39.198 "dma_device_type": 1 00:14:39.198 }, 00:14:39.198 { 00:14:39.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.198 "dma_device_type": 2 00:14:39.198 } 00:14:39.198 ], 00:14:39.198 "driver_specific": { 00:14:39.198 "raid": { 00:14:39.198 "uuid": "fffbfdf3-eb1e-4c10-8a38-673324e11c3a", 00:14:39.198 "strip_size_kb": 64, 00:14:39.198 "state": "online", 00:14:39.198 "raid_level": "raid0", 00:14:39.198 "superblock": true, 00:14:39.198 "num_base_bdevs": 2, 00:14:39.198 "num_base_bdevs_discovered": 2, 00:14:39.198 "num_base_bdevs_operational": 2, 00:14:39.198 "base_bdevs_list": [ 00:14:39.198 { 00:14:39.198 "name": "BaseBdev1", 00:14:39.198 "uuid": "a9d5cc7e-9fea-4e13-afd9-bd6f86cb4ae3", 00:14:39.198 "is_configured": true, 00:14:39.198 "data_offset": 2048, 00:14:39.198 "data_size": 63488 00:14:39.198 }, 00:14:39.198 { 00:14:39.198 "name": "BaseBdev2", 00:14:39.198 
"uuid": "122f6643-2a76-4eb8-adf7-848cff30ce26", 00:14:39.198 "is_configured": true, 00:14:39.198 "data_offset": 2048, 00:14:39.198 "data_size": 63488 00:14:39.198 } 00:14:39.198 ] 00:14:39.198 } 00:14:39.198 } 00:14:39.198 }' 00:14:39.198 10:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:39.198 10:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:39.198 BaseBdev2' 00:14:39.198 10:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:39.198 10:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:39.198 10:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:39.198 10:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:39.198 "name": "BaseBdev1", 00:14:39.198 "aliases": [ 00:14:39.198 "a9d5cc7e-9fea-4e13-afd9-bd6f86cb4ae3" 00:14:39.198 ], 00:14:39.198 "product_name": "Malloc disk", 00:14:39.198 "block_size": 512, 00:14:39.198 "num_blocks": 65536, 00:14:39.198 "uuid": "a9d5cc7e-9fea-4e13-afd9-bd6f86cb4ae3", 00:14:39.198 "assigned_rate_limits": { 00:14:39.198 "rw_ios_per_sec": 0, 00:14:39.198 "rw_mbytes_per_sec": 0, 00:14:39.198 "r_mbytes_per_sec": 0, 00:14:39.198 "w_mbytes_per_sec": 0 00:14:39.198 }, 00:14:39.198 "claimed": true, 00:14:39.198 "claim_type": "exclusive_write", 00:14:39.198 "zoned": false, 00:14:39.198 "supported_io_types": { 00:14:39.198 "read": true, 00:14:39.198 "write": true, 00:14:39.198 "unmap": true, 00:14:39.198 "flush": true, 00:14:39.198 "reset": true, 00:14:39.198 "nvme_admin": false, 00:14:39.198 "nvme_io": false, 00:14:39.198 "nvme_io_md": false, 00:14:39.198 "write_zeroes": true, 
00:14:39.198 "zcopy": true, 00:14:39.198 "get_zone_info": false, 00:14:39.198 "zone_management": false, 00:14:39.198 "zone_append": false, 00:14:39.198 "compare": false, 00:14:39.198 "compare_and_write": false, 00:14:39.198 "abort": true, 00:14:39.198 "seek_hole": false, 00:14:39.198 "seek_data": false, 00:14:39.198 "copy": true, 00:14:39.198 "nvme_iov_md": false 00:14:39.198 }, 00:14:39.198 "memory_domains": [ 00:14:39.198 { 00:14:39.198 "dma_device_id": "system", 00:14:39.198 "dma_device_type": 1 00:14:39.198 }, 00:14:39.198 { 00:14:39.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.198 "dma_device_type": 2 00:14:39.198 } 00:14:39.198 ], 00:14:39.198 "driver_specific": {} 00:14:39.198 }' 00:14:39.198 10:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:39.457 10:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:39.457 10:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:39.457 10:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:39.457 10:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:39.457 10:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:39.457 10:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:39.457 10:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:39.457 10:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:39.457 10:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:39.716 10:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:39.716 10:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:39.716 10:56:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:39.716 10:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:39.716 10:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:39.975 10:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:39.975 "name": "BaseBdev2", 00:14:39.975 "aliases": [ 00:14:39.975 "122f6643-2a76-4eb8-adf7-848cff30ce26" 00:14:39.975 ], 00:14:39.975 "product_name": "Malloc disk", 00:14:39.975 "block_size": 512, 00:14:39.975 "num_blocks": 65536, 00:14:39.975 "uuid": "122f6643-2a76-4eb8-adf7-848cff30ce26", 00:14:39.975 "assigned_rate_limits": { 00:14:39.975 "rw_ios_per_sec": 0, 00:14:39.975 "rw_mbytes_per_sec": 0, 00:14:39.975 "r_mbytes_per_sec": 0, 00:14:39.975 "w_mbytes_per_sec": 0 00:14:39.975 }, 00:14:39.975 "claimed": true, 00:14:39.975 "claim_type": "exclusive_write", 00:14:39.975 "zoned": false, 00:14:39.975 "supported_io_types": { 00:14:39.975 "read": true, 00:14:39.975 "write": true, 00:14:39.975 "unmap": true, 00:14:39.975 "flush": true, 00:14:39.975 "reset": true, 00:14:39.975 "nvme_admin": false, 00:14:39.975 "nvme_io": false, 00:14:39.975 "nvme_io_md": false, 00:14:39.975 "write_zeroes": true, 00:14:39.975 "zcopy": true, 00:14:39.975 "get_zone_info": false, 00:14:39.975 "zone_management": false, 00:14:39.975 "zone_append": false, 00:14:39.975 "compare": false, 00:14:39.975 "compare_and_write": false, 00:14:39.976 "abort": true, 00:14:39.976 "seek_hole": false, 00:14:39.976 "seek_data": false, 00:14:39.976 "copy": true, 00:14:39.976 "nvme_iov_md": false 00:14:39.976 }, 00:14:39.976 "memory_domains": [ 00:14:39.976 { 00:14:39.976 "dma_device_id": "system", 00:14:39.976 "dma_device_type": 1 00:14:39.976 }, 00:14:39.976 { 00:14:39.976 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:39.976 "dma_device_type": 2 00:14:39.976 } 00:14:39.976 ], 00:14:39.976 "driver_specific": {} 00:14:39.976 }' 00:14:39.976 10:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:39.976 10:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:39.976 10:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:39.976 10:56:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:39.976 10:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:39.976 10:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:39.976 10:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:40.235 10:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:40.235 10:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:40.235 10:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:40.235 10:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:40.235 10:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:40.235 10:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:40.494 [2024-07-25 10:56:47.449763] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:40.494 [2024-07-25 10:56:47.449800] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:40.494 [2024-07-25 10:56:47.449857] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:40.494 10:56:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:40.494 10:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:14:40.494 10:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:40.494 10:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:14:40.494 10:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:40.494 10:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:14:40.494 10:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:40.494 10:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:40.494 10:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:40.494 10:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:40.494 10:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:14:40.494 10:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:40.494 10:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:40.494 10:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:40.494 10:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:40.494 10:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:40.494 10:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:14:40.753 10:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:40.753 "name": "Existed_Raid", 00:14:40.753 "uuid": "fffbfdf3-eb1e-4c10-8a38-673324e11c3a", 00:14:40.753 "strip_size_kb": 64, 00:14:40.753 "state": "offline", 00:14:40.753 "raid_level": "raid0", 00:14:40.753 "superblock": true, 00:14:40.753 "num_base_bdevs": 2, 00:14:40.753 "num_base_bdevs_discovered": 1, 00:14:40.753 "num_base_bdevs_operational": 1, 00:14:40.753 "base_bdevs_list": [ 00:14:40.753 { 00:14:40.753 "name": null, 00:14:40.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.753 "is_configured": false, 00:14:40.753 "data_offset": 2048, 00:14:40.753 "data_size": 63488 00:14:40.753 }, 00:14:40.753 { 00:14:40.753 "name": "BaseBdev2", 00:14:40.753 "uuid": "122f6643-2a76-4eb8-adf7-848cff30ce26", 00:14:40.753 "is_configured": true, 00:14:40.753 "data_offset": 2048, 00:14:40.753 "data_size": 63488 00:14:40.753 } 00:14:40.753 ] 00:14:40.753 }' 00:14:40.753 10:56:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:40.753 10:56:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.321 10:56:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:41.321 10:56:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:41.321 10:56:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:41.321 10:56:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:41.580 10:56:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:41.580 10:56:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:14:41.580 10:56:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:41.580 [2024-07-25 10:56:48.686155] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:41.580 [2024-07-25 10:56:48.686211] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:41.839 10:56:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:41.839 10:56:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:41.839 10:56:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:41.839 10:56:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:42.098 10:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:42.098 10:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:42.098 10:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:14:42.098 10:56:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 3552083 00:14:42.098 10:56:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 3552083 ']' 00:14:42.098 10:56:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 3552083 00:14:42.098 10:56:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:42.098 10:56:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:42.098 10:56:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 3552083 00:14:42.098 10:56:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:42.098 10:56:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:42.098 10:56:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3552083' 00:14:42.098 killing process with pid 3552083 00:14:42.098 10:56:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 3552083 00:14:42.098 [2024-07-25 10:56:49.135700] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:42.098 10:56:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 3552083 00:14:42.098 [2024-07-25 10:56:49.158319] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:43.999 10:56:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:14:43.999 00:14:43.999 real 0m12.252s 00:14:43.999 user 0m20.282s 00:14:43.999 sys 0m2.113s 00:14:44.000 10:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:44.000 10:56:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.000 ************************************ 00:14:44.000 END TEST raid_state_function_test_sb 00:14:44.000 ************************************ 00:14:44.000 10:56:50 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:14:44.000 10:56:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:44.000 10:56:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:44.000 10:56:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:44.000 ************************************ 00:14:44.000 START TEST raid_superblock_test 00:14:44.000 ************************************ 00:14:44.000 10:56:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:14:44.000 10:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid0 00:14:44.000 10:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:14:44.000 10:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:14:44.000 10:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:14:44.000 10:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:14:44.000 10:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:14:44.000 10:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:14:44.000 10:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:14:44.000 10:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:14:44.000 10:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:14:44.000 10:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:14:44.000 10:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:14:44.000 10:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:14:44.000 10:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid0 '!=' raid1 ']' 00:14:44.000 10:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:14:44.000 10:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:14:44.000 10:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=3554421 00:14:44.000 10:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 3554421 
/var/tmp/spdk-raid.sock 00:14:44.000 10:56:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:44.000 10:56:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 3554421 ']' 00:14:44.000 10:56:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:44.000 10:56:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:44.000 10:56:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:44.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:44.000 10:56:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:44.000 10:56:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.000 [2024-07-25 10:56:51.010358] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:14:44.000 [2024-07-25 10:56:51.010478] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3554421 ] 00:14:44.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.259 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:44.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.259 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:44.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.259 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:44.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.259 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:44.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.259 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:44.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.259 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:44.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.259 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:44.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.259 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:44.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.259 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:44.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.259 EAL: Requested device 0000:3d:02.1 cannot be used 00:14:44.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.259 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:44.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.259 EAL: Requested device 0000:3d:02.3 cannot be used 
00:14:44.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.259 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:44.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.259 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:44.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.259 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:44.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.259 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:44.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.259 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:44.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.259 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:44.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.259 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:44.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.259 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:44.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.259 EAL: Requested device 0000:3f:01.4 cannot be used 00:14:44.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.259 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:44.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.259 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:44.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.259 EAL: Requested device 0000:3f:01.7 cannot be used 00:14:44.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.259 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:44.259 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.259 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:44.259 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.260 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:44.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.260 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:44.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.260 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:44.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.260 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:44.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.260 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:44.260 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:44.260 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:44.260 [2024-07-25 10:56:51.237996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.518 [2024-07-25 10:56:51.533397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.777 [2024-07-25 10:56:51.895147] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:44.777 [2024-07-25 10:56:51.895183] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:45.036 10:56:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:45.036 10:56:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:14:45.036 10:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:14:45.036 10:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:14:45.036 10:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:14:45.036 10:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:14:45.036 10:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:45.036 10:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:45.036 10:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:14:45.036 10:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:45.036 10:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:45.295 malloc1 00:14:45.295 10:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:45.554 [2024-07-25 10:56:52.583445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:45.554 [2024-07-25 10:56:52.583507] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.554 [2024-07-25 10:56:52.583539] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:14:45.554 [2024-07-25 10:56:52.583555] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.554 [2024-07-25 10:56:52.586305] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.554 [2024-07-25 10:56:52.586340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:45.554 pt1 00:14:45.554 10:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:14:45.554 10:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:14:45.554 10:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:14:45.554 10:56:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:14:45.554 10:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:45.554 10:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:45.554 10:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:14:45.554 10:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:45.554 10:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:45.813 malloc2 00:14:45.813 10:56:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:46.072 [2024-07-25 10:56:53.089075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:46.072 [2024-07-25 10:56:53.089132] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:46.072 [2024-07-25 10:56:53.089167] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:14:46.072 [2024-07-25 10:56:53.089184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:46.072 [2024-07-25 10:56:53.091942] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:46.072 [2024-07-25 10:56:53.091983] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:46.072 pt2 00:14:46.072 10:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:14:46.072 10:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:14:46.072 10:56:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:14:46.331 [2024-07-25 10:56:53.317718] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:46.331 [2024-07-25 10:56:53.320052] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:46.331 [2024-07-25 10:56:53.320255] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:46.331 [2024-07-25 10:56:53.320273] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:46.331 [2024-07-25 10:56:53.320619] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:14:46.331 [2024-07-25 10:56:53.320851] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:46.331 [2024-07-25 10:56:53.320870] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:46.331 [2024-07-25 10:56:53.321077] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.331 10:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:46.331 10:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:46.331 10:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:46.331 10:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:46.331 10:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:46.331 10:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:46.331 10:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:14:46.331 10:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:46.331 10:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:46.331 10:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:46.331 10:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.331 10:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:46.590 10:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:46.590 "name": "raid_bdev1", 00:14:46.590 "uuid": "e6ebf65f-0b22-4b68-9609-ce6e11ef1d74", 00:14:46.590 "strip_size_kb": 64, 00:14:46.590 "state": "online", 00:14:46.590 "raid_level": "raid0", 00:14:46.590 "superblock": true, 00:14:46.590 "num_base_bdevs": 2, 00:14:46.590 "num_base_bdevs_discovered": 2, 00:14:46.590 "num_base_bdevs_operational": 2, 00:14:46.590 "base_bdevs_list": [ 00:14:46.590 { 00:14:46.590 "name": "pt1", 00:14:46.590 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:46.590 "is_configured": true, 00:14:46.590 "data_offset": 2048, 00:14:46.590 "data_size": 63488 00:14:46.590 }, 00:14:46.590 { 00:14:46.590 "name": "pt2", 00:14:46.590 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:46.590 "is_configured": true, 00:14:46.590 "data_offset": 2048, 00:14:46.590 "data_size": 63488 00:14:46.590 } 00:14:46.590 ] 00:14:46.590 }' 00:14:46.590 10:56:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:46.590 10:56:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.539 10:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:14:47.539 10:56:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:47.539 10:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:47.539 10:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:47.539 10:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:47.539 10:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:47.539 10:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:47.539 10:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:47.539 [2024-07-25 10:56:54.617609] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:47.539 10:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:47.539 "name": "raid_bdev1", 00:14:47.539 "aliases": [ 00:14:47.539 "e6ebf65f-0b22-4b68-9609-ce6e11ef1d74" 00:14:47.539 ], 00:14:47.539 "product_name": "Raid Volume", 00:14:47.539 "block_size": 512, 00:14:47.539 "num_blocks": 126976, 00:14:47.539 "uuid": "e6ebf65f-0b22-4b68-9609-ce6e11ef1d74", 00:14:47.539 "assigned_rate_limits": { 00:14:47.539 "rw_ios_per_sec": 0, 00:14:47.539 "rw_mbytes_per_sec": 0, 00:14:47.539 "r_mbytes_per_sec": 0, 00:14:47.539 "w_mbytes_per_sec": 0 00:14:47.539 }, 00:14:47.539 "claimed": false, 00:14:47.539 "zoned": false, 00:14:47.539 "supported_io_types": { 00:14:47.539 "read": true, 00:14:47.539 "write": true, 00:14:47.539 "unmap": true, 00:14:47.539 "flush": true, 00:14:47.539 "reset": true, 00:14:47.539 "nvme_admin": false, 00:14:47.539 "nvme_io": false, 00:14:47.539 "nvme_io_md": false, 00:14:47.539 "write_zeroes": true, 00:14:47.539 "zcopy": false, 00:14:47.539 "get_zone_info": false, 00:14:47.539 "zone_management": false, 00:14:47.539 "zone_append": false, 
00:14:47.539 "compare": false, 00:14:47.539 "compare_and_write": false, 00:14:47.539 "abort": false, 00:14:47.539 "seek_hole": false, 00:14:47.539 "seek_data": false, 00:14:47.539 "copy": false, 00:14:47.539 "nvme_iov_md": false 00:14:47.539 }, 00:14:47.539 "memory_domains": [ 00:14:47.539 { 00:14:47.539 "dma_device_id": "system", 00:14:47.539 "dma_device_type": 1 00:14:47.539 }, 00:14:47.539 { 00:14:47.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.539 "dma_device_type": 2 00:14:47.539 }, 00:14:47.539 { 00:14:47.539 "dma_device_id": "system", 00:14:47.539 "dma_device_type": 1 00:14:47.539 }, 00:14:47.539 { 00:14:47.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.539 "dma_device_type": 2 00:14:47.539 } 00:14:47.539 ], 00:14:47.539 "driver_specific": { 00:14:47.539 "raid": { 00:14:47.539 "uuid": "e6ebf65f-0b22-4b68-9609-ce6e11ef1d74", 00:14:47.539 "strip_size_kb": 64, 00:14:47.539 "state": "online", 00:14:47.539 "raid_level": "raid0", 00:14:47.539 "superblock": true, 00:14:47.539 "num_base_bdevs": 2, 00:14:47.539 "num_base_bdevs_discovered": 2, 00:14:47.539 "num_base_bdevs_operational": 2, 00:14:47.539 "base_bdevs_list": [ 00:14:47.539 { 00:14:47.539 "name": "pt1", 00:14:47.539 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:47.539 "is_configured": true, 00:14:47.539 "data_offset": 2048, 00:14:47.539 "data_size": 63488 00:14:47.539 }, 00:14:47.539 { 00:14:47.539 "name": "pt2", 00:14:47.539 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:47.539 "is_configured": true, 00:14:47.539 "data_offset": 2048, 00:14:47.539 "data_size": 63488 00:14:47.539 } 00:14:47.539 ] 00:14:47.539 } 00:14:47.539 } 00:14:47.539 }' 00:14:47.539 10:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:47.798 10:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:47.798 pt2' 00:14:47.798 10:56:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:47.798 10:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:47.798 10:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:47.798 10:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:47.798 "name": "pt1", 00:14:47.798 "aliases": [ 00:14:47.798 "00000000-0000-0000-0000-000000000001" 00:14:47.798 ], 00:14:47.798 "product_name": "passthru", 00:14:47.798 "block_size": 512, 00:14:47.798 "num_blocks": 65536, 00:14:47.798 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:47.798 "assigned_rate_limits": { 00:14:47.798 "rw_ios_per_sec": 0, 00:14:47.798 "rw_mbytes_per_sec": 0, 00:14:47.798 "r_mbytes_per_sec": 0, 00:14:47.798 "w_mbytes_per_sec": 0 00:14:47.798 }, 00:14:47.798 "claimed": true, 00:14:47.798 "claim_type": "exclusive_write", 00:14:47.798 "zoned": false, 00:14:47.798 "supported_io_types": { 00:14:47.798 "read": true, 00:14:47.798 "write": true, 00:14:47.798 "unmap": true, 00:14:47.798 "flush": true, 00:14:47.798 "reset": true, 00:14:47.798 "nvme_admin": false, 00:14:47.798 "nvme_io": false, 00:14:47.798 "nvme_io_md": false, 00:14:47.798 "write_zeroes": true, 00:14:47.798 "zcopy": true, 00:14:47.798 "get_zone_info": false, 00:14:47.798 "zone_management": false, 00:14:47.798 "zone_append": false, 00:14:47.798 "compare": false, 00:14:47.798 "compare_and_write": false, 00:14:47.798 "abort": true, 00:14:47.798 "seek_hole": false, 00:14:47.798 "seek_data": false, 00:14:47.798 "copy": true, 00:14:47.798 "nvme_iov_md": false 00:14:47.798 }, 00:14:47.798 "memory_domains": [ 00:14:47.798 { 00:14:47.798 "dma_device_id": "system", 00:14:47.798 "dma_device_type": 1 00:14:47.798 }, 00:14:47.798 { 00:14:47.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.798 
"dma_device_type": 2 00:14:47.798 } 00:14:47.798 ], 00:14:47.798 "driver_specific": { 00:14:47.798 "passthru": { 00:14:47.798 "name": "pt1", 00:14:47.798 "base_bdev_name": "malloc1" 00:14:47.798 } 00:14:47.798 } 00:14:47.798 }' 00:14:47.798 10:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:47.798 10:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:47.798 10:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:47.798 10:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:48.117 10:56:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:48.117 10:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:48.117 10:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:48.117 10:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:48.117 10:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:48.117 10:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:48.117 10:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:48.117 10:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:48.117 10:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:48.117 10:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:48.117 10:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:48.375 10:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:48.375 "name": "pt2", 00:14:48.375 "aliases": [ 00:14:48.375 
"00000000-0000-0000-0000-000000000002" 00:14:48.375 ], 00:14:48.375 "product_name": "passthru", 00:14:48.375 "block_size": 512, 00:14:48.375 "num_blocks": 65536, 00:14:48.375 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:48.375 "assigned_rate_limits": { 00:14:48.375 "rw_ios_per_sec": 0, 00:14:48.375 "rw_mbytes_per_sec": 0, 00:14:48.375 "r_mbytes_per_sec": 0, 00:14:48.375 "w_mbytes_per_sec": 0 00:14:48.375 }, 00:14:48.375 "claimed": true, 00:14:48.375 "claim_type": "exclusive_write", 00:14:48.375 "zoned": false, 00:14:48.375 "supported_io_types": { 00:14:48.375 "read": true, 00:14:48.375 "write": true, 00:14:48.375 "unmap": true, 00:14:48.375 "flush": true, 00:14:48.375 "reset": true, 00:14:48.375 "nvme_admin": false, 00:14:48.375 "nvme_io": false, 00:14:48.375 "nvme_io_md": false, 00:14:48.375 "write_zeroes": true, 00:14:48.375 "zcopy": true, 00:14:48.375 "get_zone_info": false, 00:14:48.375 "zone_management": false, 00:14:48.375 "zone_append": false, 00:14:48.375 "compare": false, 00:14:48.375 "compare_and_write": false, 00:14:48.375 "abort": true, 00:14:48.375 "seek_hole": false, 00:14:48.376 "seek_data": false, 00:14:48.376 "copy": true, 00:14:48.376 "nvme_iov_md": false 00:14:48.376 }, 00:14:48.376 "memory_domains": [ 00:14:48.376 { 00:14:48.376 "dma_device_id": "system", 00:14:48.376 "dma_device_type": 1 00:14:48.376 }, 00:14:48.376 { 00:14:48.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.376 "dma_device_type": 2 00:14:48.376 } 00:14:48.376 ], 00:14:48.376 "driver_specific": { 00:14:48.376 "passthru": { 00:14:48.376 "name": "pt2", 00:14:48.376 "base_bdev_name": "malloc2" 00:14:48.376 } 00:14:48.376 } 00:14:48.376 }' 00:14:48.376 10:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:48.376 10:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:48.376 10:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:48.376 10:56:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:48.376 10:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:48.635 10:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:48.635 10:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:48.635 10:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:48.635 10:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:48.635 10:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:48.635 10:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:48.635 10:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:48.635 10:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:48.635 10:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:14:48.894 [2024-07-25 10:56:55.844982] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:48.894 10:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=e6ebf65f-0b22-4b68-9609-ce6e11ef1d74 00:14:48.894 10:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z e6ebf65f-0b22-4b68-9609-ce6e11ef1d74 ']' 00:14:48.894 10:56:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:49.153 [2024-07-25 10:56:56.069231] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:49.153 [2024-07-25 10:56:56.069261] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from 
online to offline 00:14:49.153 [2024-07-25 10:56:56.069349] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:49.153 [2024-07-25 10:56:56.069407] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:49.153 [2024-07-25 10:56:56.069430] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:49.153 10:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.153 10:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:14:49.412 10:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:14:49.412 10:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:14:49.412 10:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:14:49.412 10:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:49.412 10:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:14:49.412 10:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:49.671 10:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:49.671 10:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:49.930 10:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:14:49.930 
10:56:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:49.930 10:56:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:14:49.930 10:56:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:49.931 10:56:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:49.931 10:56:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:49.931 10:56:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:49.931 10:56:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:49.931 10:56:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:49.931 10:56:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:49.931 10:56:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:14:49.931 10:56:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:14:49.931 10:56:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n 
raid_bdev1 00:14:50.190 [2024-07-25 10:56:57.172182] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:50.190 [2024-07-25 10:56:57.174485] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:50.190 [2024-07-25 10:56:57.174555] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:50.190 [2024-07-25 10:56:57.174610] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:50.190 [2024-07-25 10:56:57.174637] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:50.190 [2024-07-25 10:56:57.174654] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:50.190 request: 00:14:50.190 { 00:14:50.190 "name": "raid_bdev1", 00:14:50.190 "raid_level": "raid0", 00:14:50.190 "base_bdevs": [ 00:14:50.190 "malloc1", 00:14:50.190 "malloc2" 00:14:50.190 ], 00:14:50.190 "strip_size_kb": 64, 00:14:50.190 "superblock": false, 00:14:50.190 "method": "bdev_raid_create", 00:14:50.190 "req_id": 1 00:14:50.190 } 00:14:50.190 Got JSON-RPC error response 00:14:50.190 response: 00:14:50.190 { 00:14:50.190 "code": -17, 00:14:50.190 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:50.190 } 00:14:50.190 10:56:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:14:50.190 10:56:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:50.190 10:56:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:50.190 10:56:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:50.190 10:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:14:50.190 10:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:14:50.449 10:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:14:50.449 10:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:14:50.449 10:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:50.708 [2024-07-25 10:56:57.617348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:50.708 [2024-07-25 10:56:57.617413] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:50.708 [2024-07-25 10:56:57.617442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:14:50.708 [2024-07-25 10:56:57.617460] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:50.708 [2024-07-25 10:56:57.620207] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:50.708 [2024-07-25 10:56:57.620246] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:50.708 [2024-07-25 10:56:57.620339] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:50.708 [2024-07-25 10:56:57.620422] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:50.708 pt1 00:14:50.708 10:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:14:50.708 10:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:50.708 10:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:50.708 10:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- 
# local raid_level=raid0 00:14:50.708 10:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:50.708 10:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:50.708 10:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:50.708 10:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:50.708 10:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:50.708 10:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:50.708 10:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:50.708 10:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.967 10:56:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:50.967 "name": "raid_bdev1", 00:14:50.967 "uuid": "e6ebf65f-0b22-4b68-9609-ce6e11ef1d74", 00:14:50.967 "strip_size_kb": 64, 00:14:50.967 "state": "configuring", 00:14:50.967 "raid_level": "raid0", 00:14:50.967 "superblock": true, 00:14:50.967 "num_base_bdevs": 2, 00:14:50.967 "num_base_bdevs_discovered": 1, 00:14:50.967 "num_base_bdevs_operational": 2, 00:14:50.967 "base_bdevs_list": [ 00:14:50.967 { 00:14:50.967 "name": "pt1", 00:14:50.967 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:50.967 "is_configured": true, 00:14:50.967 "data_offset": 2048, 00:14:50.967 "data_size": 63488 00:14:50.967 }, 00:14:50.967 { 00:14:50.967 "name": null, 00:14:50.967 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:50.967 "is_configured": false, 00:14:50.967 "data_offset": 2048, 00:14:50.967 "data_size": 63488 00:14:50.967 } 00:14:50.967 ] 00:14:50.967 }' 00:14:50.967 10:56:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:50.967 10:56:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.534 10:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:14:51.534 10:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:14:51.534 10:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:14:51.534 10:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:51.534 [2024-07-25 10:56:58.628095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:51.534 [2024-07-25 10:56:58.628171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.534 [2024-07-25 10:56:58.628198] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:14:51.534 [2024-07-25 10:56:58.628217] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.534 [2024-07-25 10:56:58.628784] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.534 [2024-07-25 10:56:58.628814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:51.534 [2024-07-25 10:56:58.628905] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:51.534 [2024-07-25 10:56:58.628942] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:51.534 [2024-07-25 10:56:58.629106] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:51.534 [2024-07-25 10:56:58.629124] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:51.534 [2024-07-25 10:56:58.629425] 
bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:14:51.534 [2024-07-25 10:56:58.629640] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:51.534 [2024-07-25 10:56:58.629655] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:51.534 [2024-07-25 10:56:58.629856] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:51.534 pt2 00:14:51.534 10:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:14:51.534 10:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:14:51.534 10:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:51.534 10:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:51.534 10:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:51.534 10:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:51.534 10:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:51.534 10:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:51.534 10:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:51.534 10:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:51.534 10:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:51.534 10:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:51.793 10:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:14:51.793 10:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.793 10:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:51.793 "name": "raid_bdev1", 00:14:51.793 "uuid": "e6ebf65f-0b22-4b68-9609-ce6e11ef1d74", 00:14:51.793 "strip_size_kb": 64, 00:14:51.793 "state": "online", 00:14:51.793 "raid_level": "raid0", 00:14:51.793 "superblock": true, 00:14:51.793 "num_base_bdevs": 2, 00:14:51.793 "num_base_bdevs_discovered": 2, 00:14:51.793 "num_base_bdevs_operational": 2, 00:14:51.793 "base_bdevs_list": [ 00:14:51.793 { 00:14:51.793 "name": "pt1", 00:14:51.793 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:51.793 "is_configured": true, 00:14:51.793 "data_offset": 2048, 00:14:51.793 "data_size": 63488 00:14:51.793 }, 00:14:51.793 { 00:14:51.793 "name": "pt2", 00:14:51.793 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:51.793 "is_configured": true, 00:14:51.793 "data_offset": 2048, 00:14:51.793 "data_size": 63488 00:14:51.793 } 00:14:51.793 ] 00:14:51.793 }' 00:14:51.793 10:56:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:51.793 10:56:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.360 10:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:14:52.360 10:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:52.360 10:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:52.360 10:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:52.360 10:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:52.360 10:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:52.360 10:56:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:52.360 10:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:52.619 [2024-07-25 10:56:59.655326] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:52.619 10:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:52.619 "name": "raid_bdev1", 00:14:52.619 "aliases": [ 00:14:52.619 "e6ebf65f-0b22-4b68-9609-ce6e11ef1d74" 00:14:52.619 ], 00:14:52.619 "product_name": "Raid Volume", 00:14:52.619 "block_size": 512, 00:14:52.619 "num_blocks": 126976, 00:14:52.619 "uuid": "e6ebf65f-0b22-4b68-9609-ce6e11ef1d74", 00:14:52.619 "assigned_rate_limits": { 00:14:52.619 "rw_ios_per_sec": 0, 00:14:52.619 "rw_mbytes_per_sec": 0, 00:14:52.619 "r_mbytes_per_sec": 0, 00:14:52.619 "w_mbytes_per_sec": 0 00:14:52.619 }, 00:14:52.619 "claimed": false, 00:14:52.619 "zoned": false, 00:14:52.619 "supported_io_types": { 00:14:52.619 "read": true, 00:14:52.619 "write": true, 00:14:52.619 "unmap": true, 00:14:52.619 "flush": true, 00:14:52.619 "reset": true, 00:14:52.619 "nvme_admin": false, 00:14:52.619 "nvme_io": false, 00:14:52.619 "nvme_io_md": false, 00:14:52.619 "write_zeroes": true, 00:14:52.619 "zcopy": false, 00:14:52.619 "get_zone_info": false, 00:14:52.619 "zone_management": false, 00:14:52.619 "zone_append": false, 00:14:52.619 "compare": false, 00:14:52.619 "compare_and_write": false, 00:14:52.619 "abort": false, 00:14:52.619 "seek_hole": false, 00:14:52.619 "seek_data": false, 00:14:52.619 "copy": false, 00:14:52.619 "nvme_iov_md": false 00:14:52.619 }, 00:14:52.619 "memory_domains": [ 00:14:52.619 { 00:14:52.619 "dma_device_id": "system", 00:14:52.619 "dma_device_type": 1 00:14:52.619 }, 00:14:52.619 { 00:14:52.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.619 "dma_device_type": 2 00:14:52.619 }, 00:14:52.619 { 00:14:52.619 
"dma_device_id": "system", 00:14:52.619 "dma_device_type": 1 00:14:52.619 }, 00:14:52.619 { 00:14:52.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.619 "dma_device_type": 2 00:14:52.619 } 00:14:52.619 ], 00:14:52.619 "driver_specific": { 00:14:52.619 "raid": { 00:14:52.619 "uuid": "e6ebf65f-0b22-4b68-9609-ce6e11ef1d74", 00:14:52.619 "strip_size_kb": 64, 00:14:52.619 "state": "online", 00:14:52.619 "raid_level": "raid0", 00:14:52.619 "superblock": true, 00:14:52.619 "num_base_bdevs": 2, 00:14:52.619 "num_base_bdevs_discovered": 2, 00:14:52.619 "num_base_bdevs_operational": 2, 00:14:52.619 "base_bdevs_list": [ 00:14:52.619 { 00:14:52.619 "name": "pt1", 00:14:52.619 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:52.619 "is_configured": true, 00:14:52.619 "data_offset": 2048, 00:14:52.619 "data_size": 63488 00:14:52.619 }, 00:14:52.619 { 00:14:52.619 "name": "pt2", 00:14:52.619 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:52.619 "is_configured": true, 00:14:52.619 "data_offset": 2048, 00:14:52.619 "data_size": 63488 00:14:52.619 } 00:14:52.619 ] 00:14:52.619 } 00:14:52.619 } 00:14:52.619 }' 00:14:52.619 10:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:52.619 10:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:52.619 pt2' 00:14:52.619 10:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:52.619 10:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:52.619 10:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:52.878 10:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:52.878 "name": "pt1", 00:14:52.878 "aliases": [ 00:14:52.878 
"00000000-0000-0000-0000-000000000001" 00:14:52.878 ], 00:14:52.878 "product_name": "passthru", 00:14:52.878 "block_size": 512, 00:14:52.878 "num_blocks": 65536, 00:14:52.878 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:52.878 "assigned_rate_limits": { 00:14:52.878 "rw_ios_per_sec": 0, 00:14:52.878 "rw_mbytes_per_sec": 0, 00:14:52.878 "r_mbytes_per_sec": 0, 00:14:52.878 "w_mbytes_per_sec": 0 00:14:52.878 }, 00:14:52.878 "claimed": true, 00:14:52.878 "claim_type": "exclusive_write", 00:14:52.878 "zoned": false, 00:14:52.878 "supported_io_types": { 00:14:52.878 "read": true, 00:14:52.878 "write": true, 00:14:52.878 "unmap": true, 00:14:52.878 "flush": true, 00:14:52.878 "reset": true, 00:14:52.878 "nvme_admin": false, 00:14:52.878 "nvme_io": false, 00:14:52.878 "nvme_io_md": false, 00:14:52.878 "write_zeroes": true, 00:14:52.878 "zcopy": true, 00:14:52.878 "get_zone_info": false, 00:14:52.878 "zone_management": false, 00:14:52.878 "zone_append": false, 00:14:52.878 "compare": false, 00:14:52.878 "compare_and_write": false, 00:14:52.878 "abort": true, 00:14:52.878 "seek_hole": false, 00:14:52.878 "seek_data": false, 00:14:52.878 "copy": true, 00:14:52.879 "nvme_iov_md": false 00:14:52.879 }, 00:14:52.879 "memory_domains": [ 00:14:52.879 { 00:14:52.879 "dma_device_id": "system", 00:14:52.879 "dma_device_type": 1 00:14:52.879 }, 00:14:52.879 { 00:14:52.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.879 "dma_device_type": 2 00:14:52.879 } 00:14:52.879 ], 00:14:52.879 "driver_specific": { 00:14:52.879 "passthru": { 00:14:52.879 "name": "pt1", 00:14:52.879 "base_bdev_name": "malloc1" 00:14:52.879 } 00:14:52.879 } 00:14:52.879 }' 00:14:52.879 10:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:52.879 10:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:52.879 10:56:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:52.879 10:56:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:53.137 10:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:53.137 10:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:53.137 10:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:53.137 10:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:53.137 10:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:53.137 10:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:53.137 10:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:53.395 10:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:53.395 10:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:53.395 10:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:53.395 10:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:53.395 10:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:53.395 "name": "pt2", 00:14:53.395 "aliases": [ 00:14:53.395 "00000000-0000-0000-0000-000000000002" 00:14:53.395 ], 00:14:53.395 "product_name": "passthru", 00:14:53.395 "block_size": 512, 00:14:53.395 "num_blocks": 65536, 00:14:53.395 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:53.395 "assigned_rate_limits": { 00:14:53.395 "rw_ios_per_sec": 0, 00:14:53.395 "rw_mbytes_per_sec": 0, 00:14:53.395 "r_mbytes_per_sec": 0, 00:14:53.395 "w_mbytes_per_sec": 0 00:14:53.395 }, 00:14:53.395 "claimed": true, 00:14:53.395 "claim_type": "exclusive_write", 00:14:53.395 "zoned": false, 00:14:53.395 "supported_io_types": { 
00:14:53.395 "read": true, 00:14:53.395 "write": true, 00:14:53.395 "unmap": true, 00:14:53.395 "flush": true, 00:14:53.395 "reset": true, 00:14:53.395 "nvme_admin": false, 00:14:53.395 "nvme_io": false, 00:14:53.395 "nvme_io_md": false, 00:14:53.395 "write_zeroes": true, 00:14:53.395 "zcopy": true, 00:14:53.395 "get_zone_info": false, 00:14:53.395 "zone_management": false, 00:14:53.395 "zone_append": false, 00:14:53.395 "compare": false, 00:14:53.395 "compare_and_write": false, 00:14:53.395 "abort": true, 00:14:53.395 "seek_hole": false, 00:14:53.395 "seek_data": false, 00:14:53.395 "copy": true, 00:14:53.395 "nvme_iov_md": false 00:14:53.395 }, 00:14:53.395 "memory_domains": [ 00:14:53.395 { 00:14:53.395 "dma_device_id": "system", 00:14:53.396 "dma_device_type": 1 00:14:53.396 }, 00:14:53.396 { 00:14:53.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.396 "dma_device_type": 2 00:14:53.396 } 00:14:53.396 ], 00:14:53.396 "driver_specific": { 00:14:53.396 "passthru": { 00:14:53.396 "name": "pt2", 00:14:53.396 "base_bdev_name": "malloc2" 00:14:53.396 } 00:14:53.396 } 00:14:53.396 }' 00:14:53.396 10:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:53.654 10:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:53.654 10:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:53.654 10:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:53.654 10:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:53.654 10:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:53.654 10:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:53.654 10:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:53.654 10:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:14:53.654 10:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:53.913 10:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:53.913 10:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:53.913 10:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:53.913 10:57:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:14:54.172 [2024-07-25 10:57:01.035016] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:54.172 10:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' e6ebf65f-0b22-4b68-9609-ce6e11ef1d74 '!=' e6ebf65f-0b22-4b68-9609-ce6e11ef1d74 ']' 00:14:54.172 10:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid0 00:14:54.172 10:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:54.172 10:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:54.172 10:57:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 3554421 00:14:54.172 10:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 3554421 ']' 00:14:54.172 10:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 3554421 00:14:54.172 10:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:14:54.172 10:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:54.172 10:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3554421 00:14:54.172 10:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:54.172 10:57:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:54.172 10:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3554421' 00:14:54.172 killing process with pid 3554421 00:14:54.172 10:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 3554421 00:14:54.172 [2024-07-25 10:57:01.115849] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:54.172 [2024-07-25 10:57:01.115950] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:54.172 10:57:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 3554421 00:14:54.172 [2024-07-25 10:57:01.116005] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:54.172 [2024-07-25 10:57:01.116025] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:54.431 [2024-07-25 10:57:01.310154] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:56.337 10:57:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:14:56.337 00:14:56.337 real 0m12.107s 00:14:56.337 user 0m19.876s 00:14:56.337 sys 0m1.963s 00:14:56.337 10:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:56.337 10:57:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.337 ************************************ 00:14:56.337 END TEST raid_superblock_test 00:14:56.337 ************************************ 00:14:56.337 10:57:03 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:14:56.337 10:57:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:56.337 10:57:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:56.337 10:57:03 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:14:56.337 ************************************ 00:14:56.337 START TEST raid_read_error_test 00:14:56.337 ************************************ 00:14:56.337 10:57:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:14:56.337 10:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:14:56.337 10:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:14:56.337 10:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:14:56.337 10:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:14:56.337 10:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:56.337 10:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:14:56.337 10:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:14:56.337 10:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:56.337 10:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:14:56.337 10:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:14:56.337 10:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:14:56.337 10:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:56.337 10:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:14:56.337 10:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:14:56.337 10:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:14:56.337 10:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:14:56.337 10:57:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:14:56.337 10:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:14:56.337 10:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:14:56.337 10:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:14:56.337 10:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:14:56.337 10:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:14:56.337 10:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.U1g1XUcqEL 00:14:56.337 10:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=3556746 00:14:56.337 10:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 3556746 /var/tmp/spdk-raid.sock 00:14:56.337 10:57:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:56.337 10:57:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 3556746 ']' 00:14:56.337 10:57:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:56.337 10:57:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:56.337 10:57:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:56.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:14:56.337 10:57:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:56.337 10:57:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.337 [2024-07-25 10:57:03.199002] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:56.337 [2024-07-25 10:57:03.199099] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3556746 ] 00:14:56.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:56.337 EAL: Requested device 0000:3d:01.0 cannot be used 00:14:56.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:56.337 EAL: Requested device 0000:3d:01.1 cannot be used 00:14:56.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:56.337 EAL: Requested device 0000:3d:01.2 cannot be used 00:14:56.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:56.337 EAL: Requested device 0000:3d:01.3 cannot be used 00:14:56.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:56.337 EAL: Requested device 0000:3d:01.4 cannot be used 00:14:56.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:56.337 EAL: Requested device 0000:3d:01.5 cannot be used 00:14:56.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:56.337 EAL: Requested device 0000:3d:01.6 cannot be used 00:14:56.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:56.337 EAL: Requested device 0000:3d:01.7 cannot be used 00:14:56.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:56.337 EAL: Requested device 0000:3d:02.0 cannot be used 00:14:56.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:56.337 EAL: Requested 
device 0000:3d:02.1 cannot be used 00:14:56.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:56.337 EAL: Requested device 0000:3d:02.2 cannot be used 00:14:56.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:56.337 EAL: Requested device 0000:3d:02.3 cannot be used 00:14:56.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:56.337 EAL: Requested device 0000:3d:02.4 cannot be used 00:14:56.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:56.337 EAL: Requested device 0000:3d:02.5 cannot be used 00:14:56.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:56.337 EAL: Requested device 0000:3d:02.6 cannot be used 00:14:56.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:56.337 EAL: Requested device 0000:3d:02.7 cannot be used 00:14:56.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:56.337 EAL: Requested device 0000:3f:01.0 cannot be used 00:14:56.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:56.337 EAL: Requested device 0000:3f:01.1 cannot be used 00:14:56.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:56.337 EAL: Requested device 0000:3f:01.2 cannot be used 00:14:56.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:56.337 EAL: Requested device 0000:3f:01.3 cannot be used 00:14:56.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:56.337 EAL: Requested device 0000:3f:01.4 cannot be used 00:14:56.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:56.337 EAL: Requested device 0000:3f:01.5 cannot be used 00:14:56.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:56.337 EAL: Requested device 0000:3f:01.6 cannot be used 00:14:56.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:56.337 EAL: Requested device 0000:3f:01.7 
cannot be used 00:14:56.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:56.337 EAL: Requested device 0000:3f:02.0 cannot be used 00:14:56.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:56.337 EAL: Requested device 0000:3f:02.1 cannot be used 00:14:56.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:56.337 EAL: Requested device 0000:3f:02.2 cannot be used 00:14:56.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:56.337 EAL: Requested device 0000:3f:02.3 cannot be used 00:14:56.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:56.337 EAL: Requested device 0000:3f:02.4 cannot be used 00:14:56.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:56.337 EAL: Requested device 0000:3f:02.5 cannot be used 00:14:56.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:56.337 EAL: Requested device 0000:3f:02.6 cannot be used 00:14:56.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:14:56.337 EAL: Requested device 0000:3f:02.7 cannot be used 00:14:56.337 [2024-07-25 10:57:03.401548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.595 [2024-07-25 10:57:03.682047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.163 [2024-07-25 10:57:04.038083] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:57.163 [2024-07-25 10:57:04.038119] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:57.163 10:57:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:57.163 10:57:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:14:57.163 10:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:14:57.163 10:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:57.422 BaseBdev1_malloc 00:14:57.422 10:57:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:57.991 true 00:14:57.991 10:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:58.250 [2024-07-25 10:57:05.221297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:58.250 [2024-07-25 10:57:05.221367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.250 [2024-07-25 10:57:05.221397] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:14:58.250 [2024-07-25 10:57:05.221420] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.250 [2024-07-25 10:57:05.224275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.250 [2024-07-25 10:57:05.224315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:58.250 BaseBdev1 00:14:58.250 10:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:14:58.250 10:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:58.509 BaseBdev2_malloc 00:14:58.509 10:57:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:59.077 true 00:14:59.077 10:57:06 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:59.336 [2024-07-25 10:57:06.236428] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:59.336 [2024-07-25 10:57:06.236501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.336 [2024-07-25 10:57:06.236530] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:14:59.336 [2024-07-25 10:57:06.236552] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.336 [2024-07-25 10:57:06.239430] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.336 [2024-07-25 10:57:06.239468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:59.336 BaseBdev2 00:14:59.336 10:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:14:59.908 [2024-07-25 10:57:06.733839] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:59.908 [2024-07-25 10:57:06.736285] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:59.908 [2024-07-25 10:57:06.736512] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:59.908 [2024-07-25 10:57:06.736535] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:59.908 [2024-07-25 10:57:06.736911] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:14:59.908 [2024-07-25 10:57:06.737184] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:59.908 [2024-07-25 
10:57:06.737201] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:59.908 [2024-07-25 10:57:06.737442] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:59.908 10:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:59.908 10:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:59.908 10:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:59.908 10:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:59.908 10:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:59.908 10:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:59.908 10:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:59.908 10:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:59.908 10:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:59.908 10:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:59.908 10:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.908 10:57:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.908 10:57:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:59.908 "name": "raid_bdev1", 00:14:59.908 "uuid": "84986fe0-d752-47c9-819d-af0a85f321d5", 00:14:59.908 "strip_size_kb": 64, 00:14:59.908 "state": "online", 00:14:59.908 "raid_level": "raid0", 00:14:59.908 
"superblock": true, 00:14:59.908 "num_base_bdevs": 2, 00:14:59.908 "num_base_bdevs_discovered": 2, 00:14:59.908 "num_base_bdevs_operational": 2, 00:14:59.908 "base_bdevs_list": [ 00:14:59.908 { 00:14:59.908 "name": "BaseBdev1", 00:14:59.908 "uuid": "f8901a61-b539-55f5-93fe-387c2032b1e9", 00:14:59.908 "is_configured": true, 00:14:59.908 "data_offset": 2048, 00:14:59.908 "data_size": 63488 00:14:59.908 }, 00:14:59.908 { 00:14:59.908 "name": "BaseBdev2", 00:14:59.908 "uuid": "7de66fdc-1bbb-5710-b086-4be201e86217", 00:14:59.908 "is_configured": true, 00:14:59.908 "data_offset": 2048, 00:14:59.908 "data_size": 63488 00:14:59.908 } 00:14:59.908 ] 00:14:59.908 }' 00:14:59.908 10:57:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:59.908 10:57:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.490 10:57:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:15:00.490 10:57:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:00.750 [2024-07-25 10:57:07.658397] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:15:01.686 10:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:01.686 10:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:15:01.686 10:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:15:01.686 10:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:15:01.686 10:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:01.686 10:57:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:01.686 10:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:01.686 10:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:01.686 10:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:01.686 10:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:01.686 10:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:01.686 10:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:01.686 10:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:01.686 10:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:01.686 10:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:01.686 10:57:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.947 10:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:01.947 "name": "raid_bdev1", 00:15:01.947 "uuid": "84986fe0-d752-47c9-819d-af0a85f321d5", 00:15:01.947 "strip_size_kb": 64, 00:15:01.947 "state": "online", 00:15:01.947 "raid_level": "raid0", 00:15:01.947 "superblock": true, 00:15:01.947 "num_base_bdevs": 2, 00:15:01.947 "num_base_bdevs_discovered": 2, 00:15:01.947 "num_base_bdevs_operational": 2, 00:15:01.947 "base_bdevs_list": [ 00:15:01.947 { 00:15:01.947 "name": "BaseBdev1", 00:15:01.947 "uuid": "f8901a61-b539-55f5-93fe-387c2032b1e9", 00:15:01.947 "is_configured": true, 00:15:01.947 "data_offset": 2048, 00:15:01.947 "data_size": 63488 00:15:01.947 }, 
00:15:01.947 { 00:15:01.947 "name": "BaseBdev2", 00:15:01.947 "uuid": "7de66fdc-1bbb-5710-b086-4be201e86217", 00:15:01.947 "is_configured": true, 00:15:01.947 "data_offset": 2048, 00:15:01.947 "data_size": 63488 00:15:01.947 } 00:15:01.947 ] 00:15:01.947 }' 00:15:01.947 10:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:01.947 10:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.516 10:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:02.775 [2024-07-25 10:57:09.813812] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:02.775 [2024-07-25 10:57:09.813855] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:02.775 [2024-07-25 10:57:09.817150] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:02.775 [2024-07-25 10:57:09.817206] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.775 [2024-07-25 10:57:09.817247] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:02.775 [2024-07-25 10:57:09.817266] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:02.775 0 00:15:02.775 10:57:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 3556746 00:15:02.775 10:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 3556746 ']' 00:15:02.775 10:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 3556746 00:15:02.775 10:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:15:02.775 10:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:02.775 
10:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3556746 00:15:02.775 10:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:02.775 10:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:02.775 10:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3556746' 00:15:02.775 killing process with pid 3556746 00:15:02.775 10:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 3556746 00:15:02.775 [2024-07-25 10:57:09.889316] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:02.775 10:57:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 3556746 00:15:03.034 [2024-07-25 10:57:09.995798] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:04.937 10:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.U1g1XUcqEL 00:15:04.937 10:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:15:04.937 10:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:15:04.937 10:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.47 00:15:04.937 10:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:15:04.937 10:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:04.937 10:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:04.937 10:57:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.47 != \0\.\0\0 ]] 00:15:04.937 00:15:04.937 real 0m8.677s 00:15:04.937 user 0m12.423s 00:15:04.937 sys 0m1.261s 00:15:04.937 10:57:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:04.937 10:57:11 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.937 ************************************ 00:15:04.937 END TEST raid_read_error_test 00:15:04.937 ************************************ 00:15:04.937 10:57:11 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:15:04.937 10:57:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:04.937 10:57:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:04.937 10:57:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:04.937 ************************************ 00:15:04.937 START TEST raid_write_error_test 00:15:04.937 ************************************ 00:15:04.937 10:57:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:15:04.937 10:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:15:04.938 10:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:15:04.938 10:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:15:04.938 10:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:15:04.938 10:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:04.938 10:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:15:04.938 10:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:15:04.938 10:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:04.938 10:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:15:04.938 10:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:15:04.938 10:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs 
)) 00:15:04.938 10:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:04.938 10:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:15:04.938 10:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:15:04.938 10:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:15:04.938 10:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:15:04.938 10:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:15:04.938 10:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:15:04.938 10:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:15:04.938 10:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:15:04.938 10:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:15:04.938 10:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:15:04.938 10:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.f1JtxHoh7G 00:15:04.938 10:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=3558721 00:15:04.938 10:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 3558721 /var/tmp/spdk-raid.sock 00:15:04.938 10:57:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:04.938 10:57:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 3558721 ']' 00:15:04.938 10:57:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:15:04.938 10:57:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:04.938 10:57:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:04.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:04.938 10:57:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:04.938 10:57:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.938 [2024-07-25 10:57:11.970912] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:04.938 [2024-07-25 10:57:11.971031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3558721 ] 00:15:05.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:05.197 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:05.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:05.197 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:05.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:05.197 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:05.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:05.197 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:05.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:05.197 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:05.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:05.197 EAL: Requested device 0000:3d:01.5 cannot be used 00:15:05.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:15:05.197 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:05.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:05.197 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:05.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:05.197 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:05.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:05.197 EAL: Requested device 0000:3d:02.1 cannot be used 00:15:05.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:05.197 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:05.198 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:05.198 EAL: Requested device 0000:3d:02.3 cannot be used 00:15:05.198 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:05.198 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:05.198 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:05.198 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:05.198 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:05.198 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:05.198 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:05.198 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:05.198 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:05.198 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:05.198 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:05.198 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:05.198 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:05.198 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:05.198 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:05.198 EAL: Requested device 0000:3f:01.3 cannot be used 00:15:05.198 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:05.198 EAL: 
Requested device 0000:3f:01.4 cannot be used 00:15:05.198 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:05.198 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:05.198 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:05.198 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:05.198 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:05.198 EAL: Requested device 0000:3f:01.7 cannot be used 00:15:05.198 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:05.198 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:05.198 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:05.198 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:05.198 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:05.198 EAL: Requested device 0000:3f:02.2 cannot be used 00:15:05.198 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:05.198 EAL: Requested device 0000:3f:02.3 cannot be used 00:15:05.198 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:05.198 EAL: Requested device 0000:3f:02.4 cannot be used 00:15:05.198 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:05.198 EAL: Requested device 0000:3f:02.5 cannot be used 00:15:05.198 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:05.198 EAL: Requested device 0000:3f:02.6 cannot be used 00:15:05.198 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:05.198 EAL: Requested device 0000:3f:02.7 cannot be used 00:15:05.198 [2024-07-25 10:57:12.195715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.457 [2024-07-25 10:57:12.479219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.025 [2024-07-25 10:57:12.836073] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:06.025 [2024-07-25 10:57:12.836109] 
bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:06.025 10:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:06.025 10:57:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:15:06.025 10:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:15:06.025 10:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:06.284 BaseBdev1_malloc 00:15:06.284 10:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:06.543 true 00:15:06.543 10:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:06.802 [2024-07-25 10:57:13.738871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:06.802 [2024-07-25 10:57:13.738932] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.802 [2024-07-25 10:57:13.738959] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:15:06.802 [2024-07-25 10:57:13.738981] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.802 [2024-07-25 10:57:13.741777] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.802 [2024-07-25 10:57:13.741816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:06.802 BaseBdev1 00:15:06.802 10:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:15:06.802 
10:57:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:07.061 BaseBdev2_malloc 00:15:07.061 10:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:07.320 true 00:15:07.320 10:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:07.580 [2024-07-25 10:57:14.479637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:07.580 [2024-07-25 10:57:14.479699] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.580 [2024-07-25 10:57:14.479726] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:15:07.580 [2024-07-25 10:57:14.479748] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.580 [2024-07-25 10:57:14.482568] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.580 [2024-07-25 10:57:14.482606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:07.580 BaseBdev2 00:15:07.580 10:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:15:07.839 [2024-07-25 10:57:14.704325] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:07.839 [2024-07-25 10:57:14.706714] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:07.839 [2024-07-25 10:57:14.706940] 
bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:07.839 [2024-07-25 10:57:14.706961] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:07.839 [2024-07-25 10:57:14.707315] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:15:07.839 [2024-07-25 10:57:14.707565] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:07.839 [2024-07-25 10:57:14.707581] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:07.839 [2024-07-25 10:57:14.707809] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:07.839 10:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:07.839 10:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:07.839 10:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:07.839 10:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:07.839 10:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:07.839 10:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:07.839 10:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:07.839 10:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:07.839 10:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:07.839 10:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:07.839 10:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:07.839 10:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.839 10:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:07.839 "name": "raid_bdev1", 00:15:07.839 "uuid": "9ae58cdf-f096-49a8-baea-898a4227b432", 00:15:07.839 "strip_size_kb": 64, 00:15:07.839 "state": "online", 00:15:07.839 "raid_level": "raid0", 00:15:07.839 "superblock": true, 00:15:07.839 "num_base_bdevs": 2, 00:15:07.839 "num_base_bdevs_discovered": 2, 00:15:07.839 "num_base_bdevs_operational": 2, 00:15:07.839 "base_bdevs_list": [ 00:15:07.839 { 00:15:07.839 "name": "BaseBdev1", 00:15:07.839 "uuid": "127cb8f5-7a32-5cff-b231-f7d59e857030", 00:15:07.839 "is_configured": true, 00:15:07.839 "data_offset": 2048, 00:15:07.839 "data_size": 63488 00:15:07.839 }, 00:15:07.839 { 00:15:07.839 "name": "BaseBdev2", 00:15:07.839 "uuid": "63fa0a4f-5c39-50c7-93c8-8aad40ec226d", 00:15:07.839 "is_configured": true, 00:15:07.839 "data_offset": 2048, 00:15:07.839 "data_size": 63488 00:15:07.839 } 00:15:07.839 ] 00:15:07.839 }' 00:15:07.839 10:57:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:07.839 10:57:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.407 10:57:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:15:08.407 10:57:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:08.666 [2024-07-25 10:57:15.624702] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:15:09.601 10:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write 
failure 00:15:09.860 10:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:15:09.860 10:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:15:09.860 10:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:15:09.860 10:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:15:09.860 10:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:09.860 10:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:09.860 10:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:15:09.860 10:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:09.860 10:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:09.860 10:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:09.860 10:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:09.860 10:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:09.860 10:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:09.860 10:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:09.860 10:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.120 10:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:10.120 "name": "raid_bdev1", 00:15:10.120 "uuid": "9ae58cdf-f096-49a8-baea-898a4227b432", 00:15:10.120 
"strip_size_kb": 64, 00:15:10.120 "state": "online", 00:15:10.120 "raid_level": "raid0", 00:15:10.120 "superblock": true, 00:15:10.120 "num_base_bdevs": 2, 00:15:10.120 "num_base_bdevs_discovered": 2, 00:15:10.120 "num_base_bdevs_operational": 2, 00:15:10.120 "base_bdevs_list": [ 00:15:10.120 { 00:15:10.120 "name": "BaseBdev1", 00:15:10.120 "uuid": "127cb8f5-7a32-5cff-b231-f7d59e857030", 00:15:10.120 "is_configured": true, 00:15:10.120 "data_offset": 2048, 00:15:10.120 "data_size": 63488 00:15:10.120 }, 00:15:10.120 { 00:15:10.120 "name": "BaseBdev2", 00:15:10.120 "uuid": "63fa0a4f-5c39-50c7-93c8-8aad40ec226d", 00:15:10.120 "is_configured": true, 00:15:10.120 "data_offset": 2048, 00:15:10.120 "data_size": 63488 00:15:10.120 } 00:15:10.120 ] 00:15:10.120 }' 00:15:10.120 10:57:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:10.120 10:57:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.687 10:57:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:10.687 [2024-07-25 10:57:17.707433] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:10.687 [2024-07-25 10:57:17.707475] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:10.687 [2024-07-25 10:57:17.710744] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:10.687 [2024-07-25 10:57:17.710795] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:10.687 [2024-07-25 10:57:17.710834] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:10.687 [2024-07-25 10:57:17.710860] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:10.687 0 00:15:10.687 10:57:17 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 3558721 00:15:10.687 10:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 3558721 ']' 00:15:10.687 10:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 3558721 00:15:10.687 10:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:15:10.687 10:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:10.687 10:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3558721 00:15:10.687 10:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:10.687 10:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:10.687 10:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3558721' 00:15:10.687 killing process with pid 3558721 00:15:10.687 10:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 3558721 00:15:10.687 [2024-07-25 10:57:17.779936] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:10.687 10:57:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 3558721 00:15:10.946 [2024-07-25 10:57:17.879206] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:12.851 10:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.f1JtxHoh7G 00:15:12.851 10:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:15:12.851 10:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:15:12.851 10:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.48 00:15:12.851 10:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 
00:15:12.851 10:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:12.851 10:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:12.851 10:57:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.48 != \0\.\0\0 ]] 00:15:12.851 00:15:12.851 real 0m7.781s 00:15:12.851 user 0m10.838s 00:15:12.851 sys 0m1.181s 00:15:12.851 10:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:12.851 10:57:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.851 ************************************ 00:15:12.851 END TEST raid_write_error_test 00:15:12.851 ************************************ 00:15:12.851 10:57:19 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:15:12.851 10:57:19 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:15:12.851 10:57:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:12.851 10:57:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:12.851 10:57:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:12.851 ************************************ 00:15:12.851 START TEST raid_state_function_test 00:15:12.851 ************************************ 00:15:12.851 10:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:15:12.851 10:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:15:12.851 10:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:15:12.851 10:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:15:12.851 10:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:12.851 10:57:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:12.851 10:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:12.851 10:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:12.851 10:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:12.851 10:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:12.851 10:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:12.851 10:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:12.851 10:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:12.851 10:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:12.851 10:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:12.851 10:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:12.851 10:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:12.851 10:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:12.851 10:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:12.851 10:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:15:12.851 10:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:15:12.851 10:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:15:12.852 10:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:15:12.852 10:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # 
superblock_create_arg= 00:15:12.852 10:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=3560139 00:15:12.852 10:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3560139' 00:15:12.852 Process raid pid: 3560139 00:15:12.852 10:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:12.852 10:57:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 3560139 /var/tmp/spdk-raid.sock 00:15:12.852 10:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 3560139 ']' 00:15:12.852 10:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:12.852 10:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:12.852 10:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:12.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:12.852 10:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:12.852 10:57:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.852 [2024-07-25 10:57:19.827843] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:15:12.852 [2024-07-25 10:57:19.827958] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:12.852 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:12.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:12.852 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:12.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:12.852 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:12.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:12.852 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:12.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:12.852 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:12.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:12.852 EAL: Requested device 0000:3d:01.5 cannot be used 00:15:12.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:12.852 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:12.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:12.852 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:12.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:12.852 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:12.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:12.852 EAL: Requested device 0000:3d:02.1 cannot be used 00:15:12.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:12.852 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:12.852 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:12.852 EAL: Requested device 0000:3d:02.3 cannot be used 00:15:13.112 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.112 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:13.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.112 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:13.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.112 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:13.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.112 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:13.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.112 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:13.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.112 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:13.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.112 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:13.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.112 EAL: Requested device 0000:3f:01.3 cannot be used 00:15:13.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.112 EAL: Requested device 0000:3f:01.4 cannot be used 00:15:13.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.112 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:13.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.112 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:13.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.112 EAL: Requested device 0000:3f:01.7 cannot be used 00:15:13.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.112 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:13.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.112 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:13.112 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.112 EAL: Requested device 0000:3f:02.2 cannot be used 00:15:13.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.112 EAL: Requested device 0000:3f:02.3 cannot be used 00:15:13.112 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.112 EAL: Requested device 0000:3f:02.4 cannot be used 00:15:13.113 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.113 EAL: Requested device 0000:3f:02.5 cannot be used 00:15:13.113 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.113 EAL: Requested device 0000:3f:02.6 cannot be used 00:15:13.113 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:13.113 EAL: Requested device 0000:3f:02.7 cannot be used 00:15:13.113 [2024-07-25 10:57:20.059128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.373 [2024-07-25 10:57:20.345112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.632 [2024-07-25 10:57:20.675230] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:13.632 [2024-07-25 10:57:20.675268] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:13.891 10:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:13.891 10:57:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:15:13.891 10:57:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:14.150 [2024-07-25 10:57:21.046630] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:14.150 [2024-07-25 10:57:21.046687] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 
00:15:14.150 [2024-07-25 10:57:21.046702] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:14.150 [2024-07-25 10:57:21.046719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:14.150 10:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:14.150 10:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:14.150 10:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:14.150 10:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:14.150 10:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:14.150 10:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:14.150 10:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:14.150 10:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:14.150 10:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:14.150 10:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:14.150 10:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.150 10:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.409 10:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:14.409 "name": "Existed_Raid", 00:15:14.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.409 "strip_size_kb": 64, 
00:15:14.409 "state": "configuring", 00:15:14.409 "raid_level": "concat", 00:15:14.409 "superblock": false, 00:15:14.409 "num_base_bdevs": 2, 00:15:14.409 "num_base_bdevs_discovered": 0, 00:15:14.409 "num_base_bdevs_operational": 2, 00:15:14.409 "base_bdevs_list": [ 00:15:14.409 { 00:15:14.409 "name": "BaseBdev1", 00:15:14.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.409 "is_configured": false, 00:15:14.409 "data_offset": 0, 00:15:14.409 "data_size": 0 00:15:14.409 }, 00:15:14.409 { 00:15:14.409 "name": "BaseBdev2", 00:15:14.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.409 "is_configured": false, 00:15:14.409 "data_offset": 0, 00:15:14.409 "data_size": 0 00:15:14.409 } 00:15:14.409 ] 00:15:14.409 }' 00:15:14.409 10:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:14.409 10:57:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.976 10:57:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:14.976 [2024-07-25 10:57:22.073255] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:14.977 [2024-07-25 10:57:22.073301] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:14.977 10:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:15.235 [2024-07-25 10:57:22.301900] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:15.235 [2024-07-25 10:57:22.301947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:15.235 [2024-07-25 10:57:22.301961] 
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:15.235 [2024-07-25 10:57:22.301978] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:15.235 10:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:15.494 [2024-07-25 10:57:22.590833] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:15.494 BaseBdev1 00:15:15.494 10:57:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:15.494 10:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:15.494 10:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:15.494 10:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:15.494 10:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:15.494 10:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:15.494 10:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:15.753 10:57:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:16.012 [ 00:15:16.012 { 00:15:16.012 "name": "BaseBdev1", 00:15:16.012 "aliases": [ 00:15:16.012 "d20590fc-7739-4666-bc94-ee390d054eef" 00:15:16.012 ], 00:15:16.012 "product_name": "Malloc disk", 00:15:16.012 "block_size": 512, 00:15:16.012 "num_blocks": 65536, 00:15:16.012 "uuid": 
"d20590fc-7739-4666-bc94-ee390d054eef", 00:15:16.012 "assigned_rate_limits": { 00:15:16.012 "rw_ios_per_sec": 0, 00:15:16.012 "rw_mbytes_per_sec": 0, 00:15:16.012 "r_mbytes_per_sec": 0, 00:15:16.012 "w_mbytes_per_sec": 0 00:15:16.012 }, 00:15:16.012 "claimed": true, 00:15:16.012 "claim_type": "exclusive_write", 00:15:16.012 "zoned": false, 00:15:16.012 "supported_io_types": { 00:15:16.012 "read": true, 00:15:16.012 "write": true, 00:15:16.012 "unmap": true, 00:15:16.012 "flush": true, 00:15:16.012 "reset": true, 00:15:16.012 "nvme_admin": false, 00:15:16.012 "nvme_io": false, 00:15:16.012 "nvme_io_md": false, 00:15:16.012 "write_zeroes": true, 00:15:16.012 "zcopy": true, 00:15:16.012 "get_zone_info": false, 00:15:16.012 "zone_management": false, 00:15:16.012 "zone_append": false, 00:15:16.012 "compare": false, 00:15:16.012 "compare_and_write": false, 00:15:16.012 "abort": true, 00:15:16.012 "seek_hole": false, 00:15:16.012 "seek_data": false, 00:15:16.012 "copy": true, 00:15:16.012 "nvme_iov_md": false 00:15:16.012 }, 00:15:16.012 "memory_domains": [ 00:15:16.012 { 00:15:16.012 "dma_device_id": "system", 00:15:16.012 "dma_device_type": 1 00:15:16.012 }, 00:15:16.012 { 00:15:16.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.012 "dma_device_type": 2 00:15:16.012 } 00:15:16.012 ], 00:15:16.012 "driver_specific": {} 00:15:16.012 } 00:15:16.012 ] 00:15:16.012 10:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:16.012 10:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:16.012 10:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:16.013 10:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:16.013 10:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:16.013 10:57:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:16.013 10:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:16.013 10:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:16.013 10:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:16.013 10:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:16.013 10:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:16.013 10:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:16.013 10:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.330 10:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:16.330 "name": "Existed_Raid", 00:15:16.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.330 "strip_size_kb": 64, 00:15:16.330 "state": "configuring", 00:15:16.330 "raid_level": "concat", 00:15:16.330 "superblock": false, 00:15:16.330 "num_base_bdevs": 2, 00:15:16.330 "num_base_bdevs_discovered": 1, 00:15:16.330 "num_base_bdevs_operational": 2, 00:15:16.330 "base_bdevs_list": [ 00:15:16.330 { 00:15:16.330 "name": "BaseBdev1", 00:15:16.330 "uuid": "d20590fc-7739-4666-bc94-ee390d054eef", 00:15:16.330 "is_configured": true, 00:15:16.330 "data_offset": 0, 00:15:16.330 "data_size": 65536 00:15:16.330 }, 00:15:16.330 { 00:15:16.330 "name": "BaseBdev2", 00:15:16.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.330 "is_configured": false, 00:15:16.331 "data_offset": 0, 00:15:16.331 "data_size": 0 00:15:16.331 } 00:15:16.331 ] 00:15:16.331 }' 00:15:16.331 10:57:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:16.331 10:57:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.948 10:57:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:16.948 [2024-07-25 10:57:24.046829] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:16.948 [2024-07-25 10:57:24.046892] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:16.948 10:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:17.207 [2024-07-25 10:57:24.275515] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:17.207 [2024-07-25 10:57:24.277833] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:17.207 [2024-07-25 10:57:24.277877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:17.207 10:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:17.207 10:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:17.207 10:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:17.207 10:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:17.207 10:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:17.207 10:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local 
raid_level=concat 00:15:17.207 10:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:17.207 10:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:17.207 10:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:17.207 10:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:17.207 10:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:17.207 10:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:17.207 10:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:17.207 10:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:17.466 10:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:17.466 "name": "Existed_Raid", 00:15:17.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.466 "strip_size_kb": 64, 00:15:17.466 "state": "configuring", 00:15:17.466 "raid_level": "concat", 00:15:17.466 "superblock": false, 00:15:17.466 "num_base_bdevs": 2, 00:15:17.466 "num_base_bdevs_discovered": 1, 00:15:17.466 "num_base_bdevs_operational": 2, 00:15:17.466 "base_bdevs_list": [ 00:15:17.466 { 00:15:17.466 "name": "BaseBdev1", 00:15:17.466 "uuid": "d20590fc-7739-4666-bc94-ee390d054eef", 00:15:17.466 "is_configured": true, 00:15:17.466 "data_offset": 0, 00:15:17.466 "data_size": 65536 00:15:17.466 }, 00:15:17.466 { 00:15:17.466 "name": "BaseBdev2", 00:15:17.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.466 "is_configured": false, 00:15:17.466 "data_offset": 0, 00:15:17.466 "data_size": 0 00:15:17.466 } 00:15:17.466 ] 00:15:17.466 }' 
00:15:17.466 10:57:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:17.466 10:57:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.034 10:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:18.293 [2024-07-25 10:57:25.361901] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:18.293 [2024-07-25 10:57:25.361951] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:18.293 [2024-07-25 10:57:25.361965] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:18.293 [2024-07-25 10:57:25.362321] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:15:18.293 [2024-07-25 10:57:25.362565] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:18.293 [2024-07-25 10:57:25.362584] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:18.293 [2024-07-25 10:57:25.362947] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.293 BaseBdev2 00:15:18.293 10:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:18.293 10:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:18.293 10:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:18.293 10:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:18.293 10:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:18.293 10:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:15:18.293 10:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:18.552 10:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:18.811 [ 00:15:18.811 { 00:15:18.811 "name": "BaseBdev2", 00:15:18.811 "aliases": [ 00:15:18.811 "9a86abf1-7fd8-4089-80e6-acb521c77268" 00:15:18.811 ], 00:15:18.811 "product_name": "Malloc disk", 00:15:18.811 "block_size": 512, 00:15:18.811 "num_blocks": 65536, 00:15:18.811 "uuid": "9a86abf1-7fd8-4089-80e6-acb521c77268", 00:15:18.811 "assigned_rate_limits": { 00:15:18.811 "rw_ios_per_sec": 0, 00:15:18.811 "rw_mbytes_per_sec": 0, 00:15:18.811 "r_mbytes_per_sec": 0, 00:15:18.811 "w_mbytes_per_sec": 0 00:15:18.811 }, 00:15:18.811 "claimed": true, 00:15:18.811 "claim_type": "exclusive_write", 00:15:18.811 "zoned": false, 00:15:18.811 "supported_io_types": { 00:15:18.811 "read": true, 00:15:18.811 "write": true, 00:15:18.811 "unmap": true, 00:15:18.811 "flush": true, 00:15:18.811 "reset": true, 00:15:18.811 "nvme_admin": false, 00:15:18.811 "nvme_io": false, 00:15:18.811 "nvme_io_md": false, 00:15:18.811 "write_zeroes": true, 00:15:18.811 "zcopy": true, 00:15:18.811 "get_zone_info": false, 00:15:18.811 "zone_management": false, 00:15:18.811 "zone_append": false, 00:15:18.811 "compare": false, 00:15:18.811 "compare_and_write": false, 00:15:18.811 "abort": true, 00:15:18.811 "seek_hole": false, 00:15:18.811 "seek_data": false, 00:15:18.811 "copy": true, 00:15:18.811 "nvme_iov_md": false 00:15:18.811 }, 00:15:18.811 "memory_domains": [ 00:15:18.811 { 00:15:18.811 "dma_device_id": "system", 00:15:18.811 "dma_device_type": 1 00:15:18.811 }, 00:15:18.811 { 00:15:18.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.811 
"dma_device_type": 2 00:15:18.811 } 00:15:18.811 ], 00:15:18.811 "driver_specific": {} 00:15:18.811 } 00:15:18.811 ] 00:15:18.811 10:57:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:18.811 10:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:18.811 10:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:18.811 10:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:15:18.811 10:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:18.811 10:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:18.811 10:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:18.811 10:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:18.811 10:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:18.811 10:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:18.811 10:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:18.811 10:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:18.811 10:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:18.811 10:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:18.811 10:57:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.071 10:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- 
# raid_bdev_info='{ 00:15:19.071 "name": "Existed_Raid", 00:15:19.071 "uuid": "e322dc9e-f950-4e6f-b076-e39d8afb7ce6", 00:15:19.071 "strip_size_kb": 64, 00:15:19.071 "state": "online", 00:15:19.071 "raid_level": "concat", 00:15:19.071 "superblock": false, 00:15:19.071 "num_base_bdevs": 2, 00:15:19.071 "num_base_bdevs_discovered": 2, 00:15:19.071 "num_base_bdevs_operational": 2, 00:15:19.071 "base_bdevs_list": [ 00:15:19.071 { 00:15:19.071 "name": "BaseBdev1", 00:15:19.071 "uuid": "d20590fc-7739-4666-bc94-ee390d054eef", 00:15:19.071 "is_configured": true, 00:15:19.071 "data_offset": 0, 00:15:19.071 "data_size": 65536 00:15:19.071 }, 00:15:19.071 { 00:15:19.071 "name": "BaseBdev2", 00:15:19.071 "uuid": "9a86abf1-7fd8-4089-80e6-acb521c77268", 00:15:19.071 "is_configured": true, 00:15:19.071 "data_offset": 0, 00:15:19.071 "data_size": 65536 00:15:19.071 } 00:15:19.071 ] 00:15:19.071 }' 00:15:19.071 10:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:19.071 10:57:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.639 10:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:19.639 10:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:19.639 10:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:19.639 10:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:19.639 10:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:19.639 10:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:19.639 10:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:19.639 10:57:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:19.898 [2024-07-25 10:57:26.830419] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:19.898 10:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:19.898 "name": "Existed_Raid", 00:15:19.898 "aliases": [ 00:15:19.898 "e322dc9e-f950-4e6f-b076-e39d8afb7ce6" 00:15:19.898 ], 00:15:19.898 "product_name": "Raid Volume", 00:15:19.898 "block_size": 512, 00:15:19.898 "num_blocks": 131072, 00:15:19.898 "uuid": "e322dc9e-f950-4e6f-b076-e39d8afb7ce6", 00:15:19.898 "assigned_rate_limits": { 00:15:19.898 "rw_ios_per_sec": 0, 00:15:19.898 "rw_mbytes_per_sec": 0, 00:15:19.898 "r_mbytes_per_sec": 0, 00:15:19.898 "w_mbytes_per_sec": 0 00:15:19.898 }, 00:15:19.898 "claimed": false, 00:15:19.898 "zoned": false, 00:15:19.898 "supported_io_types": { 00:15:19.898 "read": true, 00:15:19.898 "write": true, 00:15:19.898 "unmap": true, 00:15:19.898 "flush": true, 00:15:19.898 "reset": true, 00:15:19.898 "nvme_admin": false, 00:15:19.898 "nvme_io": false, 00:15:19.898 "nvme_io_md": false, 00:15:19.898 "write_zeroes": true, 00:15:19.898 "zcopy": false, 00:15:19.898 "get_zone_info": false, 00:15:19.898 "zone_management": false, 00:15:19.898 "zone_append": false, 00:15:19.898 "compare": false, 00:15:19.898 "compare_and_write": false, 00:15:19.898 "abort": false, 00:15:19.898 "seek_hole": false, 00:15:19.898 "seek_data": false, 00:15:19.898 "copy": false, 00:15:19.898 "nvme_iov_md": false 00:15:19.898 }, 00:15:19.898 "memory_domains": [ 00:15:19.898 { 00:15:19.898 "dma_device_id": "system", 00:15:19.898 "dma_device_type": 1 00:15:19.898 }, 00:15:19.898 { 00:15:19.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.898 "dma_device_type": 2 00:15:19.898 }, 00:15:19.898 { 00:15:19.898 "dma_device_id": "system", 00:15:19.898 "dma_device_type": 1 00:15:19.898 }, 00:15:19.898 { 00:15:19.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:15:19.898 "dma_device_type": 2 00:15:19.898 } 00:15:19.898 ], 00:15:19.898 "driver_specific": { 00:15:19.898 "raid": { 00:15:19.898 "uuid": "e322dc9e-f950-4e6f-b076-e39d8afb7ce6", 00:15:19.898 "strip_size_kb": 64, 00:15:19.898 "state": "online", 00:15:19.898 "raid_level": "concat", 00:15:19.898 "superblock": false, 00:15:19.898 "num_base_bdevs": 2, 00:15:19.898 "num_base_bdevs_discovered": 2, 00:15:19.898 "num_base_bdevs_operational": 2, 00:15:19.898 "base_bdevs_list": [ 00:15:19.898 { 00:15:19.898 "name": "BaseBdev1", 00:15:19.898 "uuid": "d20590fc-7739-4666-bc94-ee390d054eef", 00:15:19.898 "is_configured": true, 00:15:19.898 "data_offset": 0, 00:15:19.898 "data_size": 65536 00:15:19.898 }, 00:15:19.898 { 00:15:19.898 "name": "BaseBdev2", 00:15:19.898 "uuid": "9a86abf1-7fd8-4089-80e6-acb521c77268", 00:15:19.898 "is_configured": true, 00:15:19.898 "data_offset": 0, 00:15:19.898 "data_size": 65536 00:15:19.898 } 00:15:19.898 ] 00:15:19.898 } 00:15:19.898 } 00:15:19.898 }' 00:15:19.899 10:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:19.899 10:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:19.899 BaseBdev2' 00:15:19.899 10:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:19.899 10:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:19.899 10:57:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:20.157 10:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:20.157 "name": "BaseBdev1", 00:15:20.157 "aliases": [ 00:15:20.157 "d20590fc-7739-4666-bc94-ee390d054eef" 00:15:20.157 ], 00:15:20.157 "product_name": "Malloc disk", 
00:15:20.157 "block_size": 512, 00:15:20.157 "num_blocks": 65536, 00:15:20.157 "uuid": "d20590fc-7739-4666-bc94-ee390d054eef", 00:15:20.157 "assigned_rate_limits": { 00:15:20.157 "rw_ios_per_sec": 0, 00:15:20.157 "rw_mbytes_per_sec": 0, 00:15:20.157 "r_mbytes_per_sec": 0, 00:15:20.157 "w_mbytes_per_sec": 0 00:15:20.157 }, 00:15:20.157 "claimed": true, 00:15:20.157 "claim_type": "exclusive_write", 00:15:20.157 "zoned": false, 00:15:20.157 "supported_io_types": { 00:15:20.157 "read": true, 00:15:20.157 "write": true, 00:15:20.157 "unmap": true, 00:15:20.157 "flush": true, 00:15:20.157 "reset": true, 00:15:20.157 "nvme_admin": false, 00:15:20.157 "nvme_io": false, 00:15:20.157 "nvme_io_md": false, 00:15:20.157 "write_zeroes": true, 00:15:20.157 "zcopy": true, 00:15:20.157 "get_zone_info": false, 00:15:20.157 "zone_management": false, 00:15:20.157 "zone_append": false, 00:15:20.157 "compare": false, 00:15:20.157 "compare_and_write": false, 00:15:20.157 "abort": true, 00:15:20.157 "seek_hole": false, 00:15:20.157 "seek_data": false, 00:15:20.157 "copy": true, 00:15:20.157 "nvme_iov_md": false 00:15:20.157 }, 00:15:20.157 "memory_domains": [ 00:15:20.157 { 00:15:20.157 "dma_device_id": "system", 00:15:20.157 "dma_device_type": 1 00:15:20.157 }, 00:15:20.157 { 00:15:20.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.157 "dma_device_type": 2 00:15:20.157 } 00:15:20.157 ], 00:15:20.157 "driver_specific": {} 00:15:20.157 }' 00:15:20.157 10:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:20.157 10:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:20.157 10:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:20.157 10:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:20.157 10:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:20.415 10:57:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:20.415 10:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:20.415 10:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:20.415 10:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:20.415 10:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:20.416 10:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:20.416 10:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:20.416 10:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:20.416 10:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:20.416 10:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:20.674 10:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:20.674 "name": "BaseBdev2", 00:15:20.674 "aliases": [ 00:15:20.674 "9a86abf1-7fd8-4089-80e6-acb521c77268" 00:15:20.674 ], 00:15:20.674 "product_name": "Malloc disk", 00:15:20.674 "block_size": 512, 00:15:20.674 "num_blocks": 65536, 00:15:20.674 "uuid": "9a86abf1-7fd8-4089-80e6-acb521c77268", 00:15:20.674 "assigned_rate_limits": { 00:15:20.674 "rw_ios_per_sec": 0, 00:15:20.674 "rw_mbytes_per_sec": 0, 00:15:20.674 "r_mbytes_per_sec": 0, 00:15:20.674 "w_mbytes_per_sec": 0 00:15:20.674 }, 00:15:20.674 "claimed": true, 00:15:20.674 "claim_type": "exclusive_write", 00:15:20.674 "zoned": false, 00:15:20.674 "supported_io_types": { 00:15:20.674 "read": true, 00:15:20.674 "write": true, 00:15:20.674 "unmap": true, 00:15:20.674 "flush": true, 00:15:20.674 "reset": 
true, 00:15:20.674 "nvme_admin": false, 00:15:20.674 "nvme_io": false, 00:15:20.674 "nvme_io_md": false, 00:15:20.674 "write_zeroes": true, 00:15:20.674 "zcopy": true, 00:15:20.674 "get_zone_info": false, 00:15:20.674 "zone_management": false, 00:15:20.674 "zone_append": false, 00:15:20.674 "compare": false, 00:15:20.674 "compare_and_write": false, 00:15:20.674 "abort": true, 00:15:20.674 "seek_hole": false, 00:15:20.674 "seek_data": false, 00:15:20.674 "copy": true, 00:15:20.674 "nvme_iov_md": false 00:15:20.674 }, 00:15:20.674 "memory_domains": [ 00:15:20.674 { 00:15:20.674 "dma_device_id": "system", 00:15:20.674 "dma_device_type": 1 00:15:20.674 }, 00:15:20.674 { 00:15:20.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.674 "dma_device_type": 2 00:15:20.674 } 00:15:20.674 ], 00:15:20.674 "driver_specific": {} 00:15:20.674 }' 00:15:20.674 10:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:20.674 10:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:20.674 10:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:20.674 10:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:20.932 10:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:20.932 10:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:20.932 10:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:20.932 10:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:20.932 10:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:20.932 10:57:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:20.932 10:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:20.932 10:57:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:20.932 10:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:21.190 [2024-07-25 10:57:28.262011] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:21.190 [2024-07-25 10:57:28.262052] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:21.190 [2024-07-25 10:57:28.262114] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:21.449 10:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:21.449 10:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:15:21.449 10:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:21.449 10:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:21.449 10:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:21.449 10:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:15:21.449 10:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:21.449 10:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:21.449 10:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:21.449 10:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:21.449 10:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:21.449 10:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:15:21.449 10:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:21.449 10:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:21.449 10:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:21.449 10:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.449 10:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.449 10:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:21.449 "name": "Existed_Raid", 00:15:21.449 "uuid": "e322dc9e-f950-4e6f-b076-e39d8afb7ce6", 00:15:21.449 "strip_size_kb": 64, 00:15:21.449 "state": "offline", 00:15:21.450 "raid_level": "concat", 00:15:21.450 "superblock": false, 00:15:21.450 "num_base_bdevs": 2, 00:15:21.450 "num_base_bdevs_discovered": 1, 00:15:21.450 "num_base_bdevs_operational": 1, 00:15:21.450 "base_bdevs_list": [ 00:15:21.450 { 00:15:21.450 "name": null, 00:15:21.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.450 "is_configured": false, 00:15:21.450 "data_offset": 0, 00:15:21.450 "data_size": 65536 00:15:21.450 }, 00:15:21.450 { 00:15:21.450 "name": "BaseBdev2", 00:15:21.450 "uuid": "9a86abf1-7fd8-4089-80e6-acb521c77268", 00:15:21.450 "is_configured": true, 00:15:21.450 "data_offset": 0, 00:15:21.450 "data_size": 65536 00:15:21.450 } 00:15:21.450 ] 00:15:21.450 }' 00:15:21.450 10:57:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:21.450 10:57:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.017 10:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:22.017 10:57:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:22.017 10:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.017 10:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:22.276 10:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:22.276 10:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:22.276 10:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:22.536 [2024-07-25 10:57:29.564365] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:22.536 [2024-07-25 10:57:29.564431] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:22.795 10:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:22.795 10:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:22.795 10:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.795 10:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:23.054 10:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:23.054 10:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:23.054 10:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:15:23.054 10:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 
3560139 00:15:23.054 10:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 3560139 ']' 00:15:23.054 10:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 3560139 00:15:23.054 10:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:15:23.054 10:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:23.054 10:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3560139 00:15:23.054 10:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:23.054 10:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:23.054 10:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3560139' 00:15:23.054 killing process with pid 3560139 00:15:23.054 10:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 3560139 00:15:23.054 [2024-07-25 10:57:30.001955] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:23.054 10:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 3560139 00:15:23.054 [2024-07-25 10:57:30.025431] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:24.959 10:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:15:24.959 00:15:24.959 real 0m12.007s 00:15:24.959 user 0m19.520s 00:15:24.959 sys 0m2.093s 00:15:24.959 10:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:24.959 10:57:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.959 ************************************ 00:15:24.959 END TEST raid_state_function_test 00:15:24.959 ************************************ 00:15:24.959 10:57:31 
bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:15:24.959 10:57:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:24.959 10:57:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:24.959 10:57:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:24.959 ************************************ 00:15:24.959 START TEST raid_state_function_test_sb 00:15:24.959 ************************************ 00:15:24.959 10:57:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:15:24.959 10:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:15:24.959 10:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:15:24.959 10:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:15:24.959 10:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:24.959 10:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:24.959 10:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:24.959 10:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:24.959 10:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:24.959 10:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:24.959 10:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:24.959 10:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:24.959 10:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:24.959 10:57:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:24.959 10:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:24.959 10:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:24.959 10:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:24.959 10:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:24.959 10:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:24.959 10:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:15:24.959 10:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:15:24.959 10:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:15:24.959 10:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:15:24.959 10:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:15:24.959 10:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=3562472 00:15:24.959 10:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3562472' 00:15:24.959 Process raid pid: 3562472 00:15:24.959 10:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:24.959 10:57:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 3562472 /var/tmp/spdk-raid.sock 00:15:24.959 10:57:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 3562472 ']' 00:15:24.959 10:57:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:24.959 10:57:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:24.959 10:57:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:24.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:24.959 10:57:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:24.959 10:57:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.959 [2024-07-25 10:57:31.928156] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:24.959 [2024-07-25 10:57:31.928266] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:24.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:24.959 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:24.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:24.959 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:24.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:24.959 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:24.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:24.959 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:24.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:24.959 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:24.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:24.959 EAL: Requested device 0000:3d:01.5 cannot be used 
00:15:24.959 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:24.960 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:24.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:24.960 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:24.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:24.960 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:24.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:24.960 EAL: Requested device 0000:3d:02.1 cannot be used 00:15:24.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:24.960 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:24.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:24.960 EAL: Requested device 0000:3d:02.3 cannot be used 00:15:24.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:24.960 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:24.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:24.960 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:24.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:24.960 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:24.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:24.960 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:24.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:24.960 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:24.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:24.960 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:24.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:24.960 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:24.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:24.960 EAL: Requested device 0000:3f:01.3 cannot be used 00:15:24.960 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:24.960 EAL: Requested device 0000:3f:01.4 cannot be used 00:15:24.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:24.960 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:24.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:24.960 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:24.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:24.960 EAL: Requested device 0000:3f:01.7 cannot be used 00:15:24.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:24.960 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:24.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:24.960 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:24.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:24.960 EAL: Requested device 0000:3f:02.2 cannot be used 00:15:24.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:24.960 EAL: Requested device 0000:3f:02.3 cannot be used 00:15:24.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:24.960 EAL: Requested device 0000:3f:02.4 cannot be used 00:15:24.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:24.960 EAL: Requested device 0000:3f:02.5 cannot be used 00:15:24.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:24.960 EAL: Requested device 0000:3f:02.6 cannot be used 00:15:24.960 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:24.960 EAL: Requested device 0000:3f:02.7 cannot be used 00:15:25.219 [2024-07-25 10:57:32.156292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.478 [2024-07-25 10:57:32.452287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.737 [2024-07-25 10:57:32.807027] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: 
raid_bdev_get_ctx_size 00:15:25.737 [2024-07-25 10:57:32.807061] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:25.996 10:57:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:25.996 10:57:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:15:25.996 10:57:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:26.255 [2024-07-25 10:57:33.190585] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:26.255 [2024-07-25 10:57:33.190645] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:26.255 [2024-07-25 10:57:33.190659] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:26.255 [2024-07-25 10:57:33.190675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:26.255 10:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:26.255 10:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:26.255 10:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:26.255 10:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:26.255 10:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:26.255 10:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:26.255 10:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:15:26.255 10:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:26.255 10:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:26.255 10:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:26.255 10:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.255 10:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.514 10:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:26.514 "name": "Existed_Raid", 00:15:26.514 "uuid": "b6463f0e-2303-4c70-9853-76633b4056d7", 00:15:26.514 "strip_size_kb": 64, 00:15:26.514 "state": "configuring", 00:15:26.514 "raid_level": "concat", 00:15:26.514 "superblock": true, 00:15:26.514 "num_base_bdevs": 2, 00:15:26.514 "num_base_bdevs_discovered": 0, 00:15:26.514 "num_base_bdevs_operational": 2, 00:15:26.514 "base_bdevs_list": [ 00:15:26.514 { 00:15:26.514 "name": "BaseBdev1", 00:15:26.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.514 "is_configured": false, 00:15:26.514 "data_offset": 0, 00:15:26.514 "data_size": 0 00:15:26.514 }, 00:15:26.514 { 00:15:26.514 "name": "BaseBdev2", 00:15:26.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.514 "is_configured": false, 00:15:26.514 "data_offset": 0, 00:15:26.514 "data_size": 0 00:15:26.514 } 00:15:26.514 ] 00:15:26.514 }' 00:15:26.514 10:57:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:26.514 10:57:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.083 10:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:27.341 [2024-07-25 10:57:34.213182] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:27.341 [2024-07-25 10:57:34.213221] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:27.341 10:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:27.341 [2024-07-25 10:57:34.441849] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:27.341 [2024-07-25 10:57:34.441891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:27.341 [2024-07-25 10:57:34.441905] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:27.341 [2024-07-25 10:57:34.441922] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:27.599 10:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:27.858 [2024-07-25 10:57:34.724715] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:27.858 BaseBdev1 00:15:27.858 10:57:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:27.858 10:57:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:27.858 10:57:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:27.858 10:57:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # 
local i 00:15:27.858 10:57:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:27.858 10:57:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:27.858 10:57:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:27.858 10:57:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:28.116 [ 00:15:28.116 { 00:15:28.116 "name": "BaseBdev1", 00:15:28.116 "aliases": [ 00:15:28.116 "33523a76-346f-42ce-bc28-ed867a58b7d2" 00:15:28.116 ], 00:15:28.116 "product_name": "Malloc disk", 00:15:28.116 "block_size": 512, 00:15:28.116 "num_blocks": 65536, 00:15:28.116 "uuid": "33523a76-346f-42ce-bc28-ed867a58b7d2", 00:15:28.116 "assigned_rate_limits": { 00:15:28.116 "rw_ios_per_sec": 0, 00:15:28.116 "rw_mbytes_per_sec": 0, 00:15:28.116 "r_mbytes_per_sec": 0, 00:15:28.116 "w_mbytes_per_sec": 0 00:15:28.116 }, 00:15:28.116 "claimed": true, 00:15:28.116 "claim_type": "exclusive_write", 00:15:28.116 "zoned": false, 00:15:28.116 "supported_io_types": { 00:15:28.116 "read": true, 00:15:28.116 "write": true, 00:15:28.116 "unmap": true, 00:15:28.116 "flush": true, 00:15:28.116 "reset": true, 00:15:28.116 "nvme_admin": false, 00:15:28.116 "nvme_io": false, 00:15:28.116 "nvme_io_md": false, 00:15:28.116 "write_zeroes": true, 00:15:28.116 "zcopy": true, 00:15:28.116 "get_zone_info": false, 00:15:28.116 "zone_management": false, 00:15:28.116 "zone_append": false, 00:15:28.116 "compare": false, 00:15:28.116 "compare_and_write": false, 00:15:28.116 "abort": true, 00:15:28.116 "seek_hole": false, 00:15:28.116 "seek_data": false, 00:15:28.116 "copy": true, 00:15:28.116 "nvme_iov_md": false 00:15:28.116 }, 
00:15:28.116 "memory_domains": [ 00:15:28.116 { 00:15:28.116 "dma_device_id": "system", 00:15:28.116 "dma_device_type": 1 00:15:28.116 }, 00:15:28.116 { 00:15:28.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.116 "dma_device_type": 2 00:15:28.116 } 00:15:28.116 ], 00:15:28.116 "driver_specific": {} 00:15:28.116 } 00:15:28.116 ] 00:15:28.116 10:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:28.116 10:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:28.116 10:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:28.116 10:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:28.116 10:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:28.116 10:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:28.116 10:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:28.116 10:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:28.116 10:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:28.116 10:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:28.116 10:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:28.116 10:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:28.116 10:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.375 10:57:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:28.375 "name": "Existed_Raid", 00:15:28.375 "uuid": "a00e8eee-2d6a-41f9-83b6-3702d6492629", 00:15:28.375 "strip_size_kb": 64, 00:15:28.375 "state": "configuring", 00:15:28.375 "raid_level": "concat", 00:15:28.375 "superblock": true, 00:15:28.375 "num_base_bdevs": 2, 00:15:28.375 "num_base_bdevs_discovered": 1, 00:15:28.375 "num_base_bdevs_operational": 2, 00:15:28.375 "base_bdevs_list": [ 00:15:28.375 { 00:15:28.375 "name": "BaseBdev1", 00:15:28.375 "uuid": "33523a76-346f-42ce-bc28-ed867a58b7d2", 00:15:28.375 "is_configured": true, 00:15:28.375 "data_offset": 2048, 00:15:28.375 "data_size": 63488 00:15:28.375 }, 00:15:28.375 { 00:15:28.375 "name": "BaseBdev2", 00:15:28.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.375 "is_configured": false, 00:15:28.375 "data_offset": 0, 00:15:28.375 "data_size": 0 00:15:28.375 } 00:15:28.375 ] 00:15:28.375 }' 00:15:28.375 10:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:28.375 10:57:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.944 10:57:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:29.202 [2024-07-25 10:57:36.208769] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:29.202 [2024-07-25 10:57:36.208824] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:29.202 10:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:29.461 [2024-07-25 10:57:36.437470] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:29.461 [2024-07-25 10:57:36.439774] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:29.461 [2024-07-25 10:57:36.439819] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:29.461 10:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:29.461 10:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:29.461 10:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:15:29.461 10:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:29.461 10:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:29.461 10:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:29.461 10:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:29.461 10:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:29.461 10:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:29.461 10:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:29.461 10:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:29.461 10:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:29.461 10:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.461 10:57:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.720 10:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:29.720 "name": "Existed_Raid", 00:15:29.720 "uuid": "d82d4c49-a520-4e48-8c99-1a12724c407f", 00:15:29.720 "strip_size_kb": 64, 00:15:29.720 "state": "configuring", 00:15:29.720 "raid_level": "concat", 00:15:29.720 "superblock": true, 00:15:29.720 "num_base_bdevs": 2, 00:15:29.720 "num_base_bdevs_discovered": 1, 00:15:29.720 "num_base_bdevs_operational": 2, 00:15:29.720 "base_bdevs_list": [ 00:15:29.720 { 00:15:29.720 "name": "BaseBdev1", 00:15:29.720 "uuid": "33523a76-346f-42ce-bc28-ed867a58b7d2", 00:15:29.720 "is_configured": true, 00:15:29.720 "data_offset": 2048, 00:15:29.720 "data_size": 63488 00:15:29.720 }, 00:15:29.720 { 00:15:29.720 "name": "BaseBdev2", 00:15:29.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.720 "is_configured": false, 00:15:29.720 "data_offset": 0, 00:15:29.720 "data_size": 0 00:15:29.720 } 00:15:29.720 ] 00:15:29.720 }' 00:15:29.720 10:57:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:29.720 10:57:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.287 10:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:30.575 [2024-07-25 10:57:37.511299] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:30.575 [2024-07-25 10:57:37.511571] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:30.575 [2024-07-25 10:57:37.511595] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:30.575 [2024-07-25 10:57:37.511916] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000010570 00:15:30.575 [2024-07-25 10:57:37.512127] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:30.575 [2024-07-25 10:57:37.512153] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:30.575 [2024-07-25 10:57:37.512343] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:30.575 BaseBdev2 00:15:30.575 10:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:30.576 10:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:30.576 10:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:30.576 10:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:30.576 10:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:30.576 10:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:30.576 10:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:30.834 10:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:30.834 [ 00:15:30.834 { 00:15:30.834 "name": "BaseBdev2", 00:15:30.834 "aliases": [ 00:15:30.834 "3d8eec63-d12a-4700-ab67-5a10ca9da0de" 00:15:30.834 ], 00:15:30.834 "product_name": "Malloc disk", 00:15:30.834 "block_size": 512, 00:15:30.834 "num_blocks": 65536, 00:15:30.834 "uuid": "3d8eec63-d12a-4700-ab67-5a10ca9da0de", 00:15:30.834 "assigned_rate_limits": { 00:15:30.834 "rw_ios_per_sec": 0, 00:15:30.834 
"rw_mbytes_per_sec": 0, 00:15:30.834 "r_mbytes_per_sec": 0, 00:15:30.834 "w_mbytes_per_sec": 0 00:15:30.834 }, 00:15:30.834 "claimed": true, 00:15:30.834 "claim_type": "exclusive_write", 00:15:30.834 "zoned": false, 00:15:30.834 "supported_io_types": { 00:15:30.834 "read": true, 00:15:30.834 "write": true, 00:15:30.834 "unmap": true, 00:15:30.834 "flush": true, 00:15:30.834 "reset": true, 00:15:30.834 "nvme_admin": false, 00:15:30.834 "nvme_io": false, 00:15:30.834 "nvme_io_md": false, 00:15:30.834 "write_zeroes": true, 00:15:30.834 "zcopy": true, 00:15:30.834 "get_zone_info": false, 00:15:30.834 "zone_management": false, 00:15:30.834 "zone_append": false, 00:15:30.834 "compare": false, 00:15:30.834 "compare_and_write": false, 00:15:30.834 "abort": true, 00:15:30.834 "seek_hole": false, 00:15:30.834 "seek_data": false, 00:15:30.834 "copy": true, 00:15:30.834 "nvme_iov_md": false 00:15:30.834 }, 00:15:30.834 "memory_domains": [ 00:15:30.834 { 00:15:30.834 "dma_device_id": "system", 00:15:30.834 "dma_device_type": 1 00:15:30.834 }, 00:15:30.834 { 00:15:30.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.834 "dma_device_type": 2 00:15:30.834 } 00:15:30.834 ], 00:15:30.834 "driver_specific": {} 00:15:30.834 } 00:15:30.834 ] 00:15:30.834 10:57:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:30.834 10:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:30.834 10:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:30.834 10:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:15:30.834 10:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:30.834 10:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:30.834 10:57:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:30.834 10:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:30.834 10:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:30.834 10:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:30.834 10:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:30.834 10:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:30.834 10:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:30.835 10:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.835 10:57:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.092 10:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:31.092 "name": "Existed_Raid", 00:15:31.092 "uuid": "d82d4c49-a520-4e48-8c99-1a12724c407f", 00:15:31.092 "strip_size_kb": 64, 00:15:31.092 "state": "online", 00:15:31.092 "raid_level": "concat", 00:15:31.092 "superblock": true, 00:15:31.092 "num_base_bdevs": 2, 00:15:31.092 "num_base_bdevs_discovered": 2, 00:15:31.092 "num_base_bdevs_operational": 2, 00:15:31.092 "base_bdevs_list": [ 00:15:31.092 { 00:15:31.092 "name": "BaseBdev1", 00:15:31.092 "uuid": "33523a76-346f-42ce-bc28-ed867a58b7d2", 00:15:31.092 "is_configured": true, 00:15:31.092 "data_offset": 2048, 00:15:31.092 "data_size": 63488 00:15:31.092 }, 00:15:31.092 { 00:15:31.092 "name": "BaseBdev2", 00:15:31.092 "uuid": "3d8eec63-d12a-4700-ab67-5a10ca9da0de", 00:15:31.092 "is_configured": true, 00:15:31.092 
"data_offset": 2048, 00:15:31.092 "data_size": 63488 00:15:31.092 } 00:15:31.092 ] 00:15:31.092 }' 00:15:31.092 10:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:31.092 10:57:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.659 10:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:31.659 10:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:31.659 10:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:31.659 10:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:31.659 10:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:31.659 10:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:31.659 10:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:31.659 10:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:31.918 [2024-07-25 10:57:38.871460] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:31.918 10:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:31.918 "name": "Existed_Raid", 00:15:31.918 "aliases": [ 00:15:31.918 "d82d4c49-a520-4e48-8c99-1a12724c407f" 00:15:31.918 ], 00:15:31.918 "product_name": "Raid Volume", 00:15:31.918 "block_size": 512, 00:15:31.918 "num_blocks": 126976, 00:15:31.918 "uuid": "d82d4c49-a520-4e48-8c99-1a12724c407f", 00:15:31.918 "assigned_rate_limits": { 00:15:31.918 "rw_ios_per_sec": 0, 00:15:31.918 "rw_mbytes_per_sec": 0, 00:15:31.918 "r_mbytes_per_sec": 0, 00:15:31.918 
"w_mbytes_per_sec": 0 00:15:31.918 }, 00:15:31.918 "claimed": false, 00:15:31.918 "zoned": false, 00:15:31.918 "supported_io_types": { 00:15:31.918 "read": true, 00:15:31.918 "write": true, 00:15:31.918 "unmap": true, 00:15:31.918 "flush": true, 00:15:31.918 "reset": true, 00:15:31.918 "nvme_admin": false, 00:15:31.918 "nvme_io": false, 00:15:31.918 "nvme_io_md": false, 00:15:31.918 "write_zeroes": true, 00:15:31.918 "zcopy": false, 00:15:31.918 "get_zone_info": false, 00:15:31.918 "zone_management": false, 00:15:31.918 "zone_append": false, 00:15:31.918 "compare": false, 00:15:31.918 "compare_and_write": false, 00:15:31.918 "abort": false, 00:15:31.918 "seek_hole": false, 00:15:31.918 "seek_data": false, 00:15:31.918 "copy": false, 00:15:31.918 "nvme_iov_md": false 00:15:31.918 }, 00:15:31.918 "memory_domains": [ 00:15:31.918 { 00:15:31.918 "dma_device_id": "system", 00:15:31.918 "dma_device_type": 1 00:15:31.918 }, 00:15:31.918 { 00:15:31.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.918 "dma_device_type": 2 00:15:31.918 }, 00:15:31.918 { 00:15:31.918 "dma_device_id": "system", 00:15:31.918 "dma_device_type": 1 00:15:31.918 }, 00:15:31.918 { 00:15:31.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.918 "dma_device_type": 2 00:15:31.918 } 00:15:31.918 ], 00:15:31.918 "driver_specific": { 00:15:31.918 "raid": { 00:15:31.918 "uuid": "d82d4c49-a520-4e48-8c99-1a12724c407f", 00:15:31.918 "strip_size_kb": 64, 00:15:31.918 "state": "online", 00:15:31.918 "raid_level": "concat", 00:15:31.918 "superblock": true, 00:15:31.918 "num_base_bdevs": 2, 00:15:31.918 "num_base_bdevs_discovered": 2, 00:15:31.918 "num_base_bdevs_operational": 2, 00:15:31.918 "base_bdevs_list": [ 00:15:31.918 { 00:15:31.918 "name": "BaseBdev1", 00:15:31.918 "uuid": "33523a76-346f-42ce-bc28-ed867a58b7d2", 00:15:31.918 "is_configured": true, 00:15:31.918 "data_offset": 2048, 00:15:31.918 "data_size": 63488 00:15:31.918 }, 00:15:31.918 { 00:15:31.918 "name": "BaseBdev2", 00:15:31.918 
"uuid": "3d8eec63-d12a-4700-ab67-5a10ca9da0de", 00:15:31.918 "is_configured": true, 00:15:31.918 "data_offset": 2048, 00:15:31.918 "data_size": 63488 00:15:31.918 } 00:15:31.918 ] 00:15:31.918 } 00:15:31.918 } 00:15:31.918 }' 00:15:31.918 10:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:31.918 10:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:31.918 BaseBdev2' 00:15:31.918 10:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:31.918 10:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:31.918 10:57:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:32.178 10:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:32.178 "name": "BaseBdev1", 00:15:32.178 "aliases": [ 00:15:32.178 "33523a76-346f-42ce-bc28-ed867a58b7d2" 00:15:32.178 ], 00:15:32.178 "product_name": "Malloc disk", 00:15:32.178 "block_size": 512, 00:15:32.178 "num_blocks": 65536, 00:15:32.178 "uuid": "33523a76-346f-42ce-bc28-ed867a58b7d2", 00:15:32.178 "assigned_rate_limits": { 00:15:32.178 "rw_ios_per_sec": 0, 00:15:32.178 "rw_mbytes_per_sec": 0, 00:15:32.178 "r_mbytes_per_sec": 0, 00:15:32.178 "w_mbytes_per_sec": 0 00:15:32.178 }, 00:15:32.178 "claimed": true, 00:15:32.178 "claim_type": "exclusive_write", 00:15:32.178 "zoned": false, 00:15:32.178 "supported_io_types": { 00:15:32.178 "read": true, 00:15:32.178 "write": true, 00:15:32.178 "unmap": true, 00:15:32.178 "flush": true, 00:15:32.178 "reset": true, 00:15:32.178 "nvme_admin": false, 00:15:32.178 "nvme_io": false, 00:15:32.178 "nvme_io_md": false, 00:15:32.178 "write_zeroes": true, 
00:15:32.178 "zcopy": true, 00:15:32.178 "get_zone_info": false, 00:15:32.178 "zone_management": false, 00:15:32.178 "zone_append": false, 00:15:32.178 "compare": false, 00:15:32.178 "compare_and_write": false, 00:15:32.178 "abort": true, 00:15:32.178 "seek_hole": false, 00:15:32.178 "seek_data": false, 00:15:32.178 "copy": true, 00:15:32.178 "nvme_iov_md": false 00:15:32.178 }, 00:15:32.178 "memory_domains": [ 00:15:32.178 { 00:15:32.178 "dma_device_id": "system", 00:15:32.178 "dma_device_type": 1 00:15:32.178 }, 00:15:32.178 { 00:15:32.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.178 "dma_device_type": 2 00:15:32.178 } 00:15:32.178 ], 00:15:32.178 "driver_specific": {} 00:15:32.178 }' 00:15:32.178 10:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:32.178 10:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:32.178 10:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:32.178 10:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:32.437 10:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:32.437 10:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:32.437 10:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:32.437 10:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:32.437 10:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:32.437 10:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:32.437 10:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:32.437 10:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:32.437 10:57:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:32.437 10:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:32.437 10:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:32.695 10:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:32.695 "name": "BaseBdev2", 00:15:32.695 "aliases": [ 00:15:32.695 "3d8eec63-d12a-4700-ab67-5a10ca9da0de" 00:15:32.695 ], 00:15:32.695 "product_name": "Malloc disk", 00:15:32.695 "block_size": 512, 00:15:32.695 "num_blocks": 65536, 00:15:32.695 "uuid": "3d8eec63-d12a-4700-ab67-5a10ca9da0de", 00:15:32.695 "assigned_rate_limits": { 00:15:32.695 "rw_ios_per_sec": 0, 00:15:32.695 "rw_mbytes_per_sec": 0, 00:15:32.695 "r_mbytes_per_sec": 0, 00:15:32.695 "w_mbytes_per_sec": 0 00:15:32.695 }, 00:15:32.695 "claimed": true, 00:15:32.695 "claim_type": "exclusive_write", 00:15:32.695 "zoned": false, 00:15:32.695 "supported_io_types": { 00:15:32.695 "read": true, 00:15:32.695 "write": true, 00:15:32.695 "unmap": true, 00:15:32.695 "flush": true, 00:15:32.695 "reset": true, 00:15:32.695 "nvme_admin": false, 00:15:32.695 "nvme_io": false, 00:15:32.695 "nvme_io_md": false, 00:15:32.695 "write_zeroes": true, 00:15:32.695 "zcopy": true, 00:15:32.695 "get_zone_info": false, 00:15:32.695 "zone_management": false, 00:15:32.695 "zone_append": false, 00:15:32.695 "compare": false, 00:15:32.695 "compare_and_write": false, 00:15:32.695 "abort": true, 00:15:32.695 "seek_hole": false, 00:15:32.695 "seek_data": false, 00:15:32.695 "copy": true, 00:15:32.695 "nvme_iov_md": false 00:15:32.695 }, 00:15:32.695 "memory_domains": [ 00:15:32.695 { 00:15:32.695 "dma_device_id": "system", 00:15:32.695 "dma_device_type": 1 00:15:32.695 }, 00:15:32.695 { 00:15:32.695 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:32.695 "dma_device_type": 2 00:15:32.695 } 00:15:32.695 ], 00:15:32.695 "driver_specific": {} 00:15:32.695 }' 00:15:32.695 10:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:32.695 10:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:32.695 10:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:32.695 10:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:32.955 10:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:32.955 10:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:32.955 10:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:32.955 10:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:32.955 10:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:32.955 10:57:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:32.955 10:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:32.955 10:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:32.955 10:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:33.214 [2024-07-25 10:57:40.258955] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:33.214 [2024-07-25 10:57:40.258990] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:33.214 [2024-07-25 10:57:40.259050] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:33.214 10:57:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:33.214 10:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:15:33.214 10:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:33.214 10:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:15:33.214 10:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:33.214 10:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:15:33.214 10:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:33.214 10:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:33.214 10:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:33.214 10:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:33.214 10:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:33.214 10:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:33.214 10:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:33.214 10:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:33.214 10:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:33.214 10:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.214 10:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:15:33.473 10:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:33.473 "name": "Existed_Raid", 00:15:33.473 "uuid": "d82d4c49-a520-4e48-8c99-1a12724c407f", 00:15:33.473 "strip_size_kb": 64, 00:15:33.473 "state": "offline", 00:15:33.473 "raid_level": "concat", 00:15:33.473 "superblock": true, 00:15:33.473 "num_base_bdevs": 2, 00:15:33.473 "num_base_bdevs_discovered": 1, 00:15:33.473 "num_base_bdevs_operational": 1, 00:15:33.473 "base_bdevs_list": [ 00:15:33.473 { 00:15:33.473 "name": null, 00:15:33.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.473 "is_configured": false, 00:15:33.473 "data_offset": 2048, 00:15:33.473 "data_size": 63488 00:15:33.473 }, 00:15:33.473 { 00:15:33.473 "name": "BaseBdev2", 00:15:33.473 "uuid": "3d8eec63-d12a-4700-ab67-5a10ca9da0de", 00:15:33.473 "is_configured": true, 00:15:33.473 "data_offset": 2048, 00:15:33.473 "data_size": 63488 00:15:33.473 } 00:15:33.473 ] 00:15:33.473 }' 00:15:33.473 10:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:33.473 10:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.038 10:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:34.038 10:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:34.038 10:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.038 10:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:34.295 10:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:34.295 10:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:15:34.295 10:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:34.553 [2024-07-25 10:57:41.525537] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:34.553 [2024-07-25 10:57:41.525596] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:34.810 10:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:34.810 10:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:34.810 10:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.810 10:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:34.810 10:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:34.810 10:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:34.810 10:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:15:34.810 10:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 3562472 00:15:34.810 10:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 3562472 ']' 00:15:34.810 10:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 3562472 00:15:34.810 10:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:34.810 10:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:34.810 10:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 
-- # ps --no-headers -o comm= 3562472 00:15:35.068 10:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:35.068 10:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:35.068 10:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3562472' 00:15:35.068 killing process with pid 3562472 00:15:35.068 10:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 3562472 00:15:35.068 [2024-07-25 10:57:41.958395] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:35.068 10:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 3562472 00:15:35.068 [2024-07-25 10:57:41.980055] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:36.971 10:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:15:36.972 00:15:36.972 real 0m11.789s 00:15:36.972 user 0m19.186s 00:15:36.972 sys 0m2.092s 00:15:36.972 10:57:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:36.972 10:57:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.972 ************************************ 00:15:36.972 END TEST raid_state_function_test_sb 00:15:36.972 ************************************ 00:15:36.972 10:57:43 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:15:36.972 10:57:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:36.972 10:57:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:36.972 10:57:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:36.972 ************************************ 00:15:36.972 START TEST raid_superblock_test 00:15:36.972 ************************************ 00:15:36.972 10:57:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:15:36.972 10:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=concat 00:15:36.972 10:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:15:36.972 10:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:15:36.972 10:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:15:36.972 10:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:15:36.972 10:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:15:36.972 10:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:15:36.972 10:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:15:36.972 10:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:15:36.972 10:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:15:36.972 10:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:15:36.972 10:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:15:36.972 10:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:15:36.972 10:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' concat '!=' raid1 ']' 00:15:36.972 10:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:15:36.972 10:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:15:36.972 10:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=3564630 00:15:36.972 10:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:36.972 10:57:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 3564630 /var/tmp/spdk-raid.sock 00:15:36.972 10:57:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 3564630 ']' 00:15:36.972 10:57:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:36.972 10:57:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:36.972 10:57:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:36.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:36.972 10:57:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:36.972 10:57:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.972 [2024-07-25 10:57:43.794647] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:15:36.972 [2024-07-25 10:57:43.794765] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3564630 ] 00:15:36.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:36.972 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:36.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:36.972 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:36.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:36.972 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:36.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:36.972 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:36.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:36.972 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:36.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:36.972 EAL: Requested device 0000:3d:01.5 cannot be used 00:15:36.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:36.972 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:36.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:36.972 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:36.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:36.972 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:36.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:36.972 EAL: Requested device 0000:3d:02.1 cannot be used 00:15:36.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:36.972 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:36.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:36.972 EAL: Requested device 0000:3d:02.3 cannot be used 
00:15:36.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:36.972 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:36.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:36.972 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:36.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:36.972 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:36.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:36.972 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:36.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:36.972 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:36.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:36.972 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:36.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:36.972 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:36.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:36.972 EAL: Requested device 0000:3f:01.3 cannot be used 00:15:36.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:36.972 EAL: Requested device 0000:3f:01.4 cannot be used 00:15:36.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:36.972 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:36.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:36.972 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:36.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:36.972 EAL: Requested device 0000:3f:01.7 cannot be used 00:15:36.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:36.972 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:36.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:36.972 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:36.972 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:36.972 EAL: Requested device 0000:3f:02.2 cannot be used 00:15:36.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:36.972 EAL: Requested device 0000:3f:02.3 cannot be used 00:15:36.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:36.972 EAL: Requested device 0000:3f:02.4 cannot be used 00:15:36.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:36.972 EAL: Requested device 0000:3f:02.5 cannot be used 00:15:36.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:36.972 EAL: Requested device 0000:3f:02.6 cannot be used 00:15:36.972 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:36.972 EAL: Requested device 0000:3f:02.7 cannot be used 00:15:36.972 [2024-07-25 10:57:44.021535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.230 [2024-07-25 10:57:44.309814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.795 [2024-07-25 10:57:44.657037] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:37.795 [2024-07-25 10:57:44.657068] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:37.795 10:57:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:37.795 10:57:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:15:37.795 10:57:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:15:37.795 10:57:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:15:37.795 10:57:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:15:37.795 10:57:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:15:37.795 10:57:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:37.795 10:57:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:37.795 10:57:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:15:37.795 10:57:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:37.795 10:57:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:38.053 malloc1 00:15:38.053 10:57:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:38.309 [2024-07-25 10:57:45.336205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:38.309 [2024-07-25 10:57:45.336267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.309 [2024-07-25 10:57:45.336298] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:15:38.309 [2024-07-25 10:57:45.336315] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.309 [2024-07-25 10:57:45.339066] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.309 [2024-07-25 10:57:45.339099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:38.309 pt1 00:15:38.309 10:57:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:15:38.309 10:57:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:15:38.309 10:57:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:15:38.309 10:57:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:15:38.309 10:57:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:38.309 10:57:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:38.309 10:57:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:15:38.309 10:57:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:38.309 10:57:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:38.566 malloc2 00:15:38.566 10:57:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:38.824 [2024-07-25 10:57:45.841977] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:38.824 [2024-07-25 10:57:45.842034] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.824 [2024-07-25 10:57:45.842062] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:15:38.824 [2024-07-25 10:57:45.842078] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.824 [2024-07-25 10:57:45.844845] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.824 [2024-07-25 10:57:45.844884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:38.824 pt2 00:15:38.824 10:57:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:15:38.824 10:57:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:15:38.824 10:57:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:15:39.081 [2024-07-25 10:57:46.070619] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:39.082 [2024-07-25 10:57:46.072927] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:39.082 [2024-07-25 10:57:46.073110] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:39.082 [2024-07-25 10:57:46.073127] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:39.082 [2024-07-25 10:57:46.073467] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:15:39.082 [2024-07-25 10:57:46.073689] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:39.082 [2024-07-25 10:57:46.073711] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:39.082 [2024-07-25 10:57:46.073910] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.082 10:57:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:39.082 10:57:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:39.082 10:57:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:39.082 10:57:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:39.082 10:57:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:39.082 10:57:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:39.082 10:57:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- 
# local raid_bdev_info 00:15:39.082 10:57:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:39.082 10:57:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:39.082 10:57:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:39.082 10:57:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.082 10:57:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.339 10:57:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:39.339 "name": "raid_bdev1", 00:15:39.339 "uuid": "9f1d3dfe-10d5-47ea-9e2b-fee1e56f5c31", 00:15:39.339 "strip_size_kb": 64, 00:15:39.339 "state": "online", 00:15:39.339 "raid_level": "concat", 00:15:39.339 "superblock": true, 00:15:39.339 "num_base_bdevs": 2, 00:15:39.339 "num_base_bdevs_discovered": 2, 00:15:39.339 "num_base_bdevs_operational": 2, 00:15:39.339 "base_bdevs_list": [ 00:15:39.339 { 00:15:39.339 "name": "pt1", 00:15:39.339 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:39.339 "is_configured": true, 00:15:39.339 "data_offset": 2048, 00:15:39.339 "data_size": 63488 00:15:39.339 }, 00:15:39.339 { 00:15:39.339 "name": "pt2", 00:15:39.339 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:39.339 "is_configured": true, 00:15:39.339 "data_offset": 2048, 00:15:39.339 "data_size": 63488 00:15:39.339 } 00:15:39.339 ] 00:15:39.339 }' 00:15:39.339 10:57:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:39.339 10:57:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.904 10:57:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:15:39.904 10:57:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:39.904 10:57:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:39.904 10:57:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:39.904 10:57:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:39.904 10:57:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:39.904 10:57:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:39.904 10:57:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:40.162 [2024-07-25 10:57:47.097756] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:40.162 10:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:40.162 "name": "raid_bdev1", 00:15:40.162 "aliases": [ 00:15:40.162 "9f1d3dfe-10d5-47ea-9e2b-fee1e56f5c31" 00:15:40.162 ], 00:15:40.162 "product_name": "Raid Volume", 00:15:40.162 "block_size": 512, 00:15:40.162 "num_blocks": 126976, 00:15:40.162 "uuid": "9f1d3dfe-10d5-47ea-9e2b-fee1e56f5c31", 00:15:40.162 "assigned_rate_limits": { 00:15:40.162 "rw_ios_per_sec": 0, 00:15:40.162 "rw_mbytes_per_sec": 0, 00:15:40.162 "r_mbytes_per_sec": 0, 00:15:40.162 "w_mbytes_per_sec": 0 00:15:40.162 }, 00:15:40.162 "claimed": false, 00:15:40.162 "zoned": false, 00:15:40.162 "supported_io_types": { 00:15:40.162 "read": true, 00:15:40.162 "write": true, 00:15:40.162 "unmap": true, 00:15:40.162 "flush": true, 00:15:40.162 "reset": true, 00:15:40.162 "nvme_admin": false, 00:15:40.162 "nvme_io": false, 00:15:40.162 "nvme_io_md": false, 00:15:40.162 "write_zeroes": true, 00:15:40.162 "zcopy": false, 00:15:40.162 "get_zone_info": false, 00:15:40.162 "zone_management": false, 00:15:40.162 "zone_append": false, 
00:15:40.162 "compare": false, 00:15:40.162 "compare_and_write": false, 00:15:40.162 "abort": false, 00:15:40.162 "seek_hole": false, 00:15:40.162 "seek_data": false, 00:15:40.162 "copy": false, 00:15:40.162 "nvme_iov_md": false 00:15:40.162 }, 00:15:40.162 "memory_domains": [ 00:15:40.162 { 00:15:40.162 "dma_device_id": "system", 00:15:40.162 "dma_device_type": 1 00:15:40.162 }, 00:15:40.162 { 00:15:40.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.162 "dma_device_type": 2 00:15:40.162 }, 00:15:40.162 { 00:15:40.162 "dma_device_id": "system", 00:15:40.162 "dma_device_type": 1 00:15:40.162 }, 00:15:40.162 { 00:15:40.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.162 "dma_device_type": 2 00:15:40.162 } 00:15:40.162 ], 00:15:40.162 "driver_specific": { 00:15:40.162 "raid": { 00:15:40.162 "uuid": "9f1d3dfe-10d5-47ea-9e2b-fee1e56f5c31", 00:15:40.162 "strip_size_kb": 64, 00:15:40.162 "state": "online", 00:15:40.162 "raid_level": "concat", 00:15:40.162 "superblock": true, 00:15:40.162 "num_base_bdevs": 2, 00:15:40.162 "num_base_bdevs_discovered": 2, 00:15:40.162 "num_base_bdevs_operational": 2, 00:15:40.162 "base_bdevs_list": [ 00:15:40.162 { 00:15:40.162 "name": "pt1", 00:15:40.162 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:40.162 "is_configured": true, 00:15:40.162 "data_offset": 2048, 00:15:40.162 "data_size": 63488 00:15:40.162 }, 00:15:40.162 { 00:15:40.162 "name": "pt2", 00:15:40.162 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:40.162 "is_configured": true, 00:15:40.162 "data_offset": 2048, 00:15:40.162 "data_size": 63488 00:15:40.162 } 00:15:40.162 ] 00:15:40.162 } 00:15:40.162 } 00:15:40.162 }' 00:15:40.162 10:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:40.162 10:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:40.162 pt2' 00:15:40.162 10:57:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:40.162 10:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:40.162 10:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:40.419 10:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:40.419 "name": "pt1", 00:15:40.419 "aliases": [ 00:15:40.419 "00000000-0000-0000-0000-000000000001" 00:15:40.419 ], 00:15:40.419 "product_name": "passthru", 00:15:40.419 "block_size": 512, 00:15:40.419 "num_blocks": 65536, 00:15:40.419 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:40.419 "assigned_rate_limits": { 00:15:40.419 "rw_ios_per_sec": 0, 00:15:40.419 "rw_mbytes_per_sec": 0, 00:15:40.419 "r_mbytes_per_sec": 0, 00:15:40.419 "w_mbytes_per_sec": 0 00:15:40.419 }, 00:15:40.419 "claimed": true, 00:15:40.419 "claim_type": "exclusive_write", 00:15:40.419 "zoned": false, 00:15:40.419 "supported_io_types": { 00:15:40.419 "read": true, 00:15:40.419 "write": true, 00:15:40.419 "unmap": true, 00:15:40.419 "flush": true, 00:15:40.419 "reset": true, 00:15:40.419 "nvme_admin": false, 00:15:40.419 "nvme_io": false, 00:15:40.419 "nvme_io_md": false, 00:15:40.419 "write_zeroes": true, 00:15:40.419 "zcopy": true, 00:15:40.419 "get_zone_info": false, 00:15:40.419 "zone_management": false, 00:15:40.419 "zone_append": false, 00:15:40.419 "compare": false, 00:15:40.419 "compare_and_write": false, 00:15:40.419 "abort": true, 00:15:40.419 "seek_hole": false, 00:15:40.419 "seek_data": false, 00:15:40.419 "copy": true, 00:15:40.419 "nvme_iov_md": false 00:15:40.419 }, 00:15:40.419 "memory_domains": [ 00:15:40.419 { 00:15:40.419 "dma_device_id": "system", 00:15:40.419 "dma_device_type": 1 00:15:40.419 }, 00:15:40.419 { 00:15:40.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.419 
"dma_device_type": 2 00:15:40.419 } 00:15:40.419 ], 00:15:40.420 "driver_specific": { 00:15:40.420 "passthru": { 00:15:40.420 "name": "pt1", 00:15:40.420 "base_bdev_name": "malloc1" 00:15:40.420 } 00:15:40.420 } 00:15:40.420 }' 00:15:40.420 10:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:40.420 10:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:40.420 10:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:40.420 10:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:40.420 10:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:40.677 10:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:40.677 10:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:40.677 10:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:40.677 10:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:40.677 10:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:40.677 10:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:40.677 10:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:40.677 10:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:40.677 10:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:40.677 10:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:40.934 10:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:40.934 "name": "pt2", 00:15:40.934 "aliases": [ 00:15:40.934 
"00000000-0000-0000-0000-000000000002" 00:15:40.934 ], 00:15:40.934 "product_name": "passthru", 00:15:40.934 "block_size": 512, 00:15:40.934 "num_blocks": 65536, 00:15:40.934 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:40.934 "assigned_rate_limits": { 00:15:40.934 "rw_ios_per_sec": 0, 00:15:40.934 "rw_mbytes_per_sec": 0, 00:15:40.934 "r_mbytes_per_sec": 0, 00:15:40.934 "w_mbytes_per_sec": 0 00:15:40.934 }, 00:15:40.934 "claimed": true, 00:15:40.934 "claim_type": "exclusive_write", 00:15:40.934 "zoned": false, 00:15:40.934 "supported_io_types": { 00:15:40.934 "read": true, 00:15:40.934 "write": true, 00:15:40.934 "unmap": true, 00:15:40.934 "flush": true, 00:15:40.934 "reset": true, 00:15:40.934 "nvme_admin": false, 00:15:40.934 "nvme_io": false, 00:15:40.934 "nvme_io_md": false, 00:15:40.934 "write_zeroes": true, 00:15:40.934 "zcopy": true, 00:15:40.934 "get_zone_info": false, 00:15:40.934 "zone_management": false, 00:15:40.934 "zone_append": false, 00:15:40.934 "compare": false, 00:15:40.934 "compare_and_write": false, 00:15:40.934 "abort": true, 00:15:40.934 "seek_hole": false, 00:15:40.934 "seek_data": false, 00:15:40.934 "copy": true, 00:15:40.934 "nvme_iov_md": false 00:15:40.935 }, 00:15:40.935 "memory_domains": [ 00:15:40.935 { 00:15:40.935 "dma_device_id": "system", 00:15:40.935 "dma_device_type": 1 00:15:40.935 }, 00:15:40.935 { 00:15:40.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.935 "dma_device_type": 2 00:15:40.935 } 00:15:40.935 ], 00:15:40.935 "driver_specific": { 00:15:40.935 "passthru": { 00:15:40.935 "name": "pt2", 00:15:40.935 "base_bdev_name": "malloc2" 00:15:40.935 } 00:15:40.935 } 00:15:40.935 }' 00:15:40.935 10:57:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:40.935 10:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:40.935 10:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:40.935 10:57:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:41.192 10:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:41.192 10:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:41.192 10:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:41.192 10:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:41.192 10:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:41.192 10:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:41.192 10:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:41.192 10:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:41.450 10:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:41.450 10:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:15:41.450 [2024-07-25 10:57:48.517600] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:41.450 10:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=9f1d3dfe-10d5-47ea-9e2b-fee1e56f5c31 00:15:41.450 10:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 9f1d3dfe-10d5-47ea-9e2b-fee1e56f5c31 ']' 00:15:41.450 10:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:41.708 [2024-07-25 10:57:48.745905] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:41.708 [2024-07-25 10:57:48.745936] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from 
online to offline 00:15:41.708 [2024-07-25 10:57:48.746024] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:41.708 [2024-07-25 10:57:48.746081] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:41.708 [2024-07-25 10:57:48.746103] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:41.708 10:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.708 10:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:15:41.966 10:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:15:41.966 10:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:15:41.966 10:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:15:41.966 10:57:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:42.223 10:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:15:42.223 10:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:42.499 10:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:42.499 10:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:42.758 10:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:15:42.758 
10:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:42.758 10:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:15:42.758 10:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:42.758 10:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:42.758 10:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:42.758 10:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:42.758 10:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:42.758 10:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:42.758 10:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:42.758 10:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:15:42.758 10:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:15:42.758 10:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' 
-n raid_bdev1 00:15:43.016 [2024-07-25 10:57:49.896976] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:43.016 [2024-07-25 10:57:49.899299] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:43.016 [2024-07-25 10:57:49.899373] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:43.016 [2024-07-25 10:57:49.899429] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:43.016 [2024-07-25 10:57:49.899452] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:43.016 [2024-07-25 10:57:49.899469] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:43.016 request: 00:15:43.016 { 00:15:43.016 "name": "raid_bdev1", 00:15:43.016 "raid_level": "concat", 00:15:43.016 "base_bdevs": [ 00:15:43.016 "malloc1", 00:15:43.016 "malloc2" 00:15:43.016 ], 00:15:43.016 "strip_size_kb": 64, 00:15:43.016 "superblock": false, 00:15:43.016 "method": "bdev_raid_create", 00:15:43.016 "req_id": 1 00:15:43.016 } 00:15:43.016 Got JSON-RPC error response 00:15:43.016 response: 00:15:43.016 { 00:15:43.016 "code": -17, 00:15:43.016 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:43.016 } 00:15:43.016 10:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:15:43.016 10:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:43.016 10:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:43.016 10:57:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:43.016 10:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.016 10:57:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:15:43.275 10:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:15:43.275 10:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:15:43.275 10:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:43.275 [2024-07-25 10:57:50.354125] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:43.275 [2024-07-25 10:57:50.354204] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.275 [2024-07-25 10:57:50.354232] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:15:43.275 [2024-07-25 10:57:50.354249] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.275 [2024-07-25 10:57:50.357043] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.275 [2024-07-25 10:57:50.357085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:43.275 [2024-07-25 10:57:50.357189] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:43.275 [2024-07-25 10:57:50.357278] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:43.275 pt1 00:15:43.275 10:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:15:43.275 10:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:43.275 10:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:43.275 10:57:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:43.275 10:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:43.275 10:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:43.275 10:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:43.275 10:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:43.275 10:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:43.275 10:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:43.275 10:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.275 10:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.533 10:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:43.533 "name": "raid_bdev1", 00:15:43.533 "uuid": "9f1d3dfe-10d5-47ea-9e2b-fee1e56f5c31", 00:15:43.533 "strip_size_kb": 64, 00:15:43.533 "state": "configuring", 00:15:43.533 "raid_level": "concat", 00:15:43.533 "superblock": true, 00:15:43.533 "num_base_bdevs": 2, 00:15:43.533 "num_base_bdevs_discovered": 1, 00:15:43.533 "num_base_bdevs_operational": 2, 00:15:43.533 "base_bdevs_list": [ 00:15:43.533 { 00:15:43.533 "name": "pt1", 00:15:43.533 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:43.533 "is_configured": true, 00:15:43.533 "data_offset": 2048, 00:15:43.533 "data_size": 63488 00:15:43.533 }, 00:15:43.533 { 00:15:43.533 "name": null, 00:15:43.533 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:43.533 "is_configured": false, 00:15:43.533 "data_offset": 2048, 00:15:43.533 "data_size": 63488 00:15:43.533 } 00:15:43.533 ] 00:15:43.533 }' 00:15:43.533 
10:57:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:43.533 10:57:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.099 10:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:15:44.099 10:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:15:44.099 10:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:15:44.099 10:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:44.357 [2024-07-25 10:57:51.373004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:44.357 [2024-07-25 10:57:51.373071] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:44.357 [2024-07-25 10:57:51.373095] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:15:44.357 [2024-07-25 10:57:51.373113] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:44.357 [2024-07-25 10:57:51.373700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:44.357 [2024-07-25 10:57:51.373730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:44.357 [2024-07-25 10:57:51.373825] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:44.357 [2024-07-25 10:57:51.373861] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:44.357 [2024-07-25 10:57:51.374030] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:44.357 [2024-07-25 10:57:51.374049] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:44.357 [2024-07-25 10:57:51.374354] 
bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:15:44.357 [2024-07-25 10:57:51.374559] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:44.357 [2024-07-25 10:57:51.374574] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:44.357 [2024-07-25 10:57:51.374766] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.357 pt2 00:15:44.357 10:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:15:44.357 10:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:15:44.357 10:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:44.357 10:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:44.357 10:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:44.357 10:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:44.357 10:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:44.357 10:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:44.357 10:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:44.357 10:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:44.357 10:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:44.357 10:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:44.357 10:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:15:44.357 10:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.615 10:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:44.615 "name": "raid_bdev1", 00:15:44.615 "uuid": "9f1d3dfe-10d5-47ea-9e2b-fee1e56f5c31", 00:15:44.615 "strip_size_kb": 64, 00:15:44.615 "state": "online", 00:15:44.615 "raid_level": "concat", 00:15:44.615 "superblock": true, 00:15:44.615 "num_base_bdevs": 2, 00:15:44.615 "num_base_bdevs_discovered": 2, 00:15:44.615 "num_base_bdevs_operational": 2, 00:15:44.615 "base_bdevs_list": [ 00:15:44.615 { 00:15:44.615 "name": "pt1", 00:15:44.615 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:44.615 "is_configured": true, 00:15:44.615 "data_offset": 2048, 00:15:44.615 "data_size": 63488 00:15:44.615 }, 00:15:44.615 { 00:15:44.615 "name": "pt2", 00:15:44.615 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:44.615 "is_configured": true, 00:15:44.615 "data_offset": 2048, 00:15:44.615 "data_size": 63488 00:15:44.615 } 00:15:44.615 ] 00:15:44.615 }' 00:15:44.615 10:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:44.615 10:57:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.182 10:57:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:15:45.182 10:57:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:45.182 10:57:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:45.182 10:57:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:45.182 10:57:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:45.182 10:57:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:45.182 10:57:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:45.182 10:57:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:45.477 [2024-07-25 10:57:52.388081] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:45.477 10:57:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:45.477 "name": "raid_bdev1", 00:15:45.477 "aliases": [ 00:15:45.477 "9f1d3dfe-10d5-47ea-9e2b-fee1e56f5c31" 00:15:45.477 ], 00:15:45.477 "product_name": "Raid Volume", 00:15:45.477 "block_size": 512, 00:15:45.477 "num_blocks": 126976, 00:15:45.477 "uuid": "9f1d3dfe-10d5-47ea-9e2b-fee1e56f5c31", 00:15:45.477 "assigned_rate_limits": { 00:15:45.477 "rw_ios_per_sec": 0, 00:15:45.477 "rw_mbytes_per_sec": 0, 00:15:45.477 "r_mbytes_per_sec": 0, 00:15:45.477 "w_mbytes_per_sec": 0 00:15:45.477 }, 00:15:45.477 "claimed": false, 00:15:45.477 "zoned": false, 00:15:45.477 "supported_io_types": { 00:15:45.477 "read": true, 00:15:45.477 "write": true, 00:15:45.477 "unmap": true, 00:15:45.477 "flush": true, 00:15:45.477 "reset": true, 00:15:45.477 "nvme_admin": false, 00:15:45.477 "nvme_io": false, 00:15:45.477 "nvme_io_md": false, 00:15:45.477 "write_zeroes": true, 00:15:45.477 "zcopy": false, 00:15:45.477 "get_zone_info": false, 00:15:45.477 "zone_management": false, 00:15:45.477 "zone_append": false, 00:15:45.477 "compare": false, 00:15:45.477 "compare_and_write": false, 00:15:45.477 "abort": false, 00:15:45.477 "seek_hole": false, 00:15:45.477 "seek_data": false, 00:15:45.477 "copy": false, 00:15:45.477 "nvme_iov_md": false 00:15:45.477 }, 00:15:45.477 "memory_domains": [ 00:15:45.477 { 00:15:45.477 "dma_device_id": "system", 00:15:45.477 "dma_device_type": 1 00:15:45.477 }, 00:15:45.477 { 00:15:45.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.477 "dma_device_type": 2 00:15:45.477 }, 00:15:45.477 { 00:15:45.477 
"dma_device_id": "system", 00:15:45.477 "dma_device_type": 1 00:15:45.477 }, 00:15:45.477 { 00:15:45.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.477 "dma_device_type": 2 00:15:45.477 } 00:15:45.477 ], 00:15:45.477 "driver_specific": { 00:15:45.477 "raid": { 00:15:45.477 "uuid": "9f1d3dfe-10d5-47ea-9e2b-fee1e56f5c31", 00:15:45.477 "strip_size_kb": 64, 00:15:45.477 "state": "online", 00:15:45.477 "raid_level": "concat", 00:15:45.477 "superblock": true, 00:15:45.477 "num_base_bdevs": 2, 00:15:45.477 "num_base_bdevs_discovered": 2, 00:15:45.477 "num_base_bdevs_operational": 2, 00:15:45.477 "base_bdevs_list": [ 00:15:45.477 { 00:15:45.477 "name": "pt1", 00:15:45.477 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:45.477 "is_configured": true, 00:15:45.477 "data_offset": 2048, 00:15:45.477 "data_size": 63488 00:15:45.477 }, 00:15:45.477 { 00:15:45.477 "name": "pt2", 00:15:45.477 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:45.477 "is_configured": true, 00:15:45.477 "data_offset": 2048, 00:15:45.477 "data_size": 63488 00:15:45.477 } 00:15:45.477 ] 00:15:45.477 } 00:15:45.477 } 00:15:45.477 }' 00:15:45.477 10:57:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:45.477 10:57:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:45.477 pt2' 00:15:45.477 10:57:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:45.477 10:57:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:45.477 10:57:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:45.736 10:57:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:45.736 "name": "pt1", 00:15:45.736 "aliases": [ 00:15:45.736 
"00000000-0000-0000-0000-000000000001" 00:15:45.736 ], 00:15:45.736 "product_name": "passthru", 00:15:45.736 "block_size": 512, 00:15:45.736 "num_blocks": 65536, 00:15:45.736 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:45.736 "assigned_rate_limits": { 00:15:45.736 "rw_ios_per_sec": 0, 00:15:45.736 "rw_mbytes_per_sec": 0, 00:15:45.736 "r_mbytes_per_sec": 0, 00:15:45.736 "w_mbytes_per_sec": 0 00:15:45.736 }, 00:15:45.736 "claimed": true, 00:15:45.736 "claim_type": "exclusive_write", 00:15:45.736 "zoned": false, 00:15:45.736 "supported_io_types": { 00:15:45.736 "read": true, 00:15:45.736 "write": true, 00:15:45.736 "unmap": true, 00:15:45.736 "flush": true, 00:15:45.736 "reset": true, 00:15:45.736 "nvme_admin": false, 00:15:45.736 "nvme_io": false, 00:15:45.736 "nvme_io_md": false, 00:15:45.736 "write_zeroes": true, 00:15:45.736 "zcopy": true, 00:15:45.736 "get_zone_info": false, 00:15:45.736 "zone_management": false, 00:15:45.736 "zone_append": false, 00:15:45.736 "compare": false, 00:15:45.736 "compare_and_write": false, 00:15:45.736 "abort": true, 00:15:45.736 "seek_hole": false, 00:15:45.736 "seek_data": false, 00:15:45.736 "copy": true, 00:15:45.736 "nvme_iov_md": false 00:15:45.736 }, 00:15:45.736 "memory_domains": [ 00:15:45.736 { 00:15:45.736 "dma_device_id": "system", 00:15:45.736 "dma_device_type": 1 00:15:45.736 }, 00:15:45.736 { 00:15:45.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.736 "dma_device_type": 2 00:15:45.736 } 00:15:45.736 ], 00:15:45.736 "driver_specific": { 00:15:45.736 "passthru": { 00:15:45.736 "name": "pt1", 00:15:45.736 "base_bdev_name": "malloc1" 00:15:45.736 } 00:15:45.736 } 00:15:45.736 }' 00:15:45.736 10:57:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:45.736 10:57:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:45.736 10:57:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:45.736 10:57:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:45.736 10:57:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:45.995 10:57:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:45.995 10:57:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:45.995 10:57:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:45.995 10:57:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:45.995 10:57:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:45.995 10:57:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:45.995 10:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:45.995 10:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:45.995 10:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:45.995 10:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:46.253 10:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:46.253 "name": "pt2", 00:15:46.253 "aliases": [ 00:15:46.253 "00000000-0000-0000-0000-000000000002" 00:15:46.253 ], 00:15:46.253 "product_name": "passthru", 00:15:46.253 "block_size": 512, 00:15:46.253 "num_blocks": 65536, 00:15:46.253 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:46.253 "assigned_rate_limits": { 00:15:46.253 "rw_ios_per_sec": 0, 00:15:46.253 "rw_mbytes_per_sec": 0, 00:15:46.253 "r_mbytes_per_sec": 0, 00:15:46.253 "w_mbytes_per_sec": 0 00:15:46.253 }, 00:15:46.253 "claimed": true, 00:15:46.253 "claim_type": "exclusive_write", 00:15:46.253 "zoned": false, 00:15:46.253 "supported_io_types": { 
00:15:46.253 "read": true, 00:15:46.253 "write": true, 00:15:46.253 "unmap": true, 00:15:46.253 "flush": true, 00:15:46.253 "reset": true, 00:15:46.253 "nvme_admin": false, 00:15:46.253 "nvme_io": false, 00:15:46.253 "nvme_io_md": false, 00:15:46.253 "write_zeroes": true, 00:15:46.253 "zcopy": true, 00:15:46.253 "get_zone_info": false, 00:15:46.253 "zone_management": false, 00:15:46.253 "zone_append": false, 00:15:46.253 "compare": false, 00:15:46.253 "compare_and_write": false, 00:15:46.253 "abort": true, 00:15:46.253 "seek_hole": false, 00:15:46.253 "seek_data": false, 00:15:46.253 "copy": true, 00:15:46.253 "nvme_iov_md": false 00:15:46.253 }, 00:15:46.253 "memory_domains": [ 00:15:46.253 { 00:15:46.253 "dma_device_id": "system", 00:15:46.253 "dma_device_type": 1 00:15:46.253 }, 00:15:46.253 { 00:15:46.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.253 "dma_device_type": 2 00:15:46.253 } 00:15:46.253 ], 00:15:46.253 "driver_specific": { 00:15:46.253 "passthru": { 00:15:46.253 "name": "pt2", 00:15:46.253 "base_bdev_name": "malloc2" 00:15:46.253 } 00:15:46.253 } 00:15:46.253 }' 00:15:46.253 10:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:46.253 10:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:46.253 10:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:46.253 10:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:46.511 10:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:46.511 10:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:46.511 10:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:46.511 10:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:46.511 10:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:15:46.511 10:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:46.511 10:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:46.511 10:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:46.511 10:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:46.511 10:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:15:46.768 [2024-07-25 10:57:53.815946] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:46.768 10:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 9f1d3dfe-10d5-47ea-9e2b-fee1e56f5c31 '!=' 9f1d3dfe-10d5-47ea-9e2b-fee1e56f5c31 ']' 00:15:46.768 10:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy concat 00:15:46.768 10:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:46.768 10:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:46.768 10:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 3564630 00:15:46.768 10:57:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 3564630 ']' 00:15:46.768 10:57:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 3564630 00:15:46.768 10:57:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:15:46.768 10:57:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:46.768 10:57:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3564630 00:15:47.026 10:57:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:47.026 10:57:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:47.026 10:57:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3564630' 00:15:47.026 killing process with pid 3564630 00:15:47.026 10:57:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 3564630 00:15:47.026 [2024-07-25 10:57:53.894708] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:47.026 [2024-07-25 10:57:53.894811] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:47.026 [2024-07-25 10:57:53.894872] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:47.026 10:57:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 3564630 00:15:47.026 [2024-07-25 10:57:53.894891] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:47.026 [2024-07-25 10:57:54.084920] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:48.927 10:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:15:48.927 00:15:48.927 real 0m12.076s 00:15:48.927 user 0m19.860s 00:15:48.927 sys 0m2.031s 00:15:48.927 10:57:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:48.927 10:57:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.927 ************************************ 00:15:48.927 END TEST raid_superblock_test 00:15:48.927 ************************************ 00:15:48.927 10:57:55 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:15:48.927 10:57:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:48.927 10:57:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:48.927 10:57:55 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:15:48.927 ************************************ 00:15:48.927 START TEST raid_read_error_test 00:15:48.927 ************************************ 00:15:48.927 10:57:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:15:48.927 10:57:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:15:48.927 10:57:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:15:48.927 10:57:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:15:48.927 10:57:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:15:48.927 10:57:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:48.927 10:57:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:15:48.927 10:57:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:15:48.927 10:57:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:48.927 10:57:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:15:48.927 10:57:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:15:48.927 10:57:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:48.927 10:57:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:48.927 10:57:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:15:48.927 10:57:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:15:48.927 10:57:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:15:48.927 10:57:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:15:48.927 10:57:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:15:48.927 10:57:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:15:48.927 10:57:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:15:48.927 10:57:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:15:48.927 10:57:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:15:48.927 10:57:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:15:48.927 10:57:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.JApHB9XAzm 00:15:48.927 10:57:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=3566883 00:15:48.927 10:57:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 3566883 /var/tmp/spdk-raid.sock 00:15:48.927 10:57:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:48.927 10:57:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 3566883 ']' 00:15:48.927 10:57:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:48.927 10:57:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:48.927 10:57:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:48.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:15:48.927 10:57:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:48.927 10:57:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.927 [2024-07-25 10:57:55.973242] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:48.927 [2024-07-25 10:57:55.973364] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3566883 ] 00:15:49.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.185 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:49.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.185 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:49.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.185 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:49.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.185 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:49.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.185 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:49.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.185 EAL: Requested device 0000:3d:01.5 cannot be used 00:15:49.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.185 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:49.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.185 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:49.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.185 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:49.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.185 EAL: Requested 
device 0000:3d:02.1 cannot be used 00:15:49.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.185 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:49.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.185 EAL: Requested device 0000:3d:02.3 cannot be used 00:15:49.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.185 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:49.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.185 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:49.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.185 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:49.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.185 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:49.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.185 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:49.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.185 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:49.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.185 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:49.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.185 EAL: Requested device 0000:3f:01.3 cannot be used 00:15:49.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.185 EAL: Requested device 0000:3f:01.4 cannot be used 00:15:49.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.185 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:49.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.185 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:49.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.185 EAL: Requested device 0000:3f:01.7 
cannot be used 00:15:49.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.185 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:49.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.185 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:49.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.185 EAL: Requested device 0000:3f:02.2 cannot be used 00:15:49.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.185 EAL: Requested device 0000:3f:02.3 cannot be used 00:15:49.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.185 EAL: Requested device 0000:3f:02.4 cannot be used 00:15:49.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.185 EAL: Requested device 0000:3f:02.5 cannot be used 00:15:49.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.185 EAL: Requested device 0000:3f:02.6 cannot be used 00:15:49.185 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:49.185 EAL: Requested device 0000:3f:02.7 cannot be used 00:15:49.185 [2024-07-25 10:57:56.190683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.443 [2024-07-25 10:57:56.476078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.701 [2024-07-25 10:57:56.803796] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:49.701 [2024-07-25 10:57:56.803837] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:49.959 10:57:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:49.959 10:57:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:15:49.959 10:57:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:15:49.959 10:57:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:50.217 BaseBdev1_malloc 00:15:50.217 10:57:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:50.475 true 00:15:50.475 10:57:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:50.732 [2024-07-25 10:57:57.682568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:50.732 [2024-07-25 10:57:57.682628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.732 [2024-07-25 10:57:57.682655] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:15:50.732 [2024-07-25 10:57:57.682683] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.732 [2024-07-25 10:57:57.685478] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.732 [2024-07-25 10:57:57.685515] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:50.732 BaseBdev1 00:15:50.732 10:57:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:15:50.732 10:57:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:50.990 BaseBdev2_malloc 00:15:50.990 10:57:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:51.248 true 00:15:51.248 10:57:58 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:51.506 [2024-07-25 10:57:58.404664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:51.506 [2024-07-25 10:57:58.404721] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.506 [2024-07-25 10:57:58.404746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:15:51.506 [2024-07-25 10:57:58.404767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.506 [2024-07-25 10:57:58.407526] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.506 [2024-07-25 10:57:58.407564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:51.506 BaseBdev2 00:15:51.506 10:57:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:15:51.506 [2024-07-25 10:57:58.621320] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:51.506 [2024-07-25 10:57:58.623639] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:51.506 [2024-07-25 10:57:58.623851] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:51.506 [2024-07-25 10:57:58.623873] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:51.506 [2024-07-25 10:57:58.624221] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:15:51.506 [2024-07-25 10:57:58.624464] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:51.506 [2024-07-25 
10:57:58.624480] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:51.506 [2024-07-25 10:57:58.624714] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.765 10:57:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:51.765 10:57:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:51.765 10:57:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:51.765 10:57:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:51.765 10:57:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:51.765 10:57:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:51.765 10:57:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:51.765 10:57:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:51.765 10:57:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:51.765 10:57:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:51.765 10:57:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:51.765 10:57:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.765 10:57:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:51.765 "name": "raid_bdev1", 00:15:51.765 "uuid": "3d6583c9-83bf-464c-9a8a-714d0d922d2d", 00:15:51.765 "strip_size_kb": 64, 00:15:51.765 "state": "online", 00:15:51.765 "raid_level": "concat", 00:15:51.765 
"superblock": true, 00:15:51.765 "num_base_bdevs": 2, 00:15:51.765 "num_base_bdevs_discovered": 2, 00:15:51.765 "num_base_bdevs_operational": 2, 00:15:51.765 "base_bdevs_list": [ 00:15:51.765 { 00:15:51.765 "name": "BaseBdev1", 00:15:51.765 "uuid": "e5266d8e-36e9-5372-9875-93dc812dcb59", 00:15:51.765 "is_configured": true, 00:15:51.765 "data_offset": 2048, 00:15:51.765 "data_size": 63488 00:15:51.765 }, 00:15:51.765 { 00:15:51.765 "name": "BaseBdev2", 00:15:51.765 "uuid": "51f4f316-6ae9-5d57-9753-fabed9f39775", 00:15:51.765 "is_configured": true, 00:15:51.765 "data_offset": 2048, 00:15:51.765 "data_size": 63488 00:15:51.765 } 00:15:51.765 ] 00:15:51.765 }' 00:15:51.765 10:57:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:51.765 10:57:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.330 10:57:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:15:52.330 10:57:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:52.588 [2024-07-25 10:57:59.521483] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:15:53.522 10:58:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:53.780 10:58:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:15:53.780 10:58:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:15:53.780 10:58:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:15:53.780 10:58:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:53.780 
10:58:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:53.780 10:58:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:53.780 10:58:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:53.780 10:58:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:53.780 10:58:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:53.780 10:58:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:53.780 10:58:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:53.780 10:58:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:53.780 10:58:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:53.780 10:58:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.780 10:58:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.780 10:58:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:53.780 "name": "raid_bdev1", 00:15:53.780 "uuid": "3d6583c9-83bf-464c-9a8a-714d0d922d2d", 00:15:53.780 "strip_size_kb": 64, 00:15:53.780 "state": "online", 00:15:53.780 "raid_level": "concat", 00:15:53.780 "superblock": true, 00:15:53.780 "num_base_bdevs": 2, 00:15:53.780 "num_base_bdevs_discovered": 2, 00:15:53.780 "num_base_bdevs_operational": 2, 00:15:53.780 "base_bdevs_list": [ 00:15:53.780 { 00:15:53.781 "name": "BaseBdev1", 00:15:53.781 "uuid": "e5266d8e-36e9-5372-9875-93dc812dcb59", 00:15:53.781 "is_configured": true, 00:15:53.781 "data_offset": 2048, 00:15:53.781 "data_size": 63488 00:15:53.781 
}, 00:15:53.781 { 00:15:53.781 "name": "BaseBdev2", 00:15:53.781 "uuid": "51f4f316-6ae9-5d57-9753-fabed9f39775", 00:15:53.781 "is_configured": true, 00:15:53.781 "data_offset": 2048, 00:15:53.781 "data_size": 63488 00:15:53.781 } 00:15:53.781 ] 00:15:53.781 }' 00:15:53.781 10:58:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:53.781 10:58:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.346 10:58:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:54.605 [2024-07-25 10:58:01.672316] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:54.605 [2024-07-25 10:58:01.672364] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:54.605 [2024-07-25 10:58:01.675619] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:54.605 [2024-07-25 10:58:01.675676] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.605 [2024-07-25 10:58:01.675715] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:54.605 [2024-07-25 10:58:01.675739] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:54.605 0 00:15:54.605 10:58:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 3566883 00:15:54.605 10:58:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 3566883 ']' 00:15:54.605 10:58:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 3566883 00:15:54.605 10:58:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:15:54.605 10:58:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:54.605 
10:58:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3566883 00:15:54.863 10:58:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:54.863 10:58:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:54.863 10:58:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3566883' 00:15:54.863 killing process with pid 3566883 00:15:54.863 10:58:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 3566883 00:15:54.863 [2024-07-25 10:58:01.751949] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:54.863 10:58:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 3566883 00:15:54.863 [2024-07-25 10:58:01.851944] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:56.760 10:58:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.JApHB9XAzm 00:15:56.760 10:58:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:15:56.760 10:58:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:15:56.760 10:58:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.47 00:15:56.760 10:58:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:15:56.760 10:58:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:56.760 10:58:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:56.760 10:58:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.47 != \0\.\0\0 ]] 00:15:56.760 00:15:56.760 real 0m7.809s 00:15:56.760 user 0m10.940s 00:15:56.760 sys 0m1.159s 00:15:56.760 10:58:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:56.760 10:58:03 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.760 ************************************ 00:15:56.760 END TEST raid_read_error_test 00:15:56.760 ************************************ 00:15:56.760 10:58:03 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:15:56.760 10:58:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:56.760 10:58:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:56.760 10:58:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:56.760 ************************************ 00:15:56.760 START TEST raid_write_error_test 00:15:56.760 ************************************ 00:15:56.760 10:58:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:15:56.760 10:58:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:15:56.760 10:58:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:15:56.760 10:58:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:15:56.760 10:58:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:15:56.760 10:58:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:56.760 10:58:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:15:56.760 10:58:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:15:56.760 10:58:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:15:56.760 10:58:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:15:56.760 10:58:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:15:56.760 10:58:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= 
num_base_bdevs )) 00:15:56.760 10:58:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:56.760 10:58:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:15:56.760 10:58:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:15:56.760 10:58:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:15:56.760 10:58:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:15:56.760 10:58:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:15:56.760 10:58:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:15:56.760 10:58:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:15:56.760 10:58:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:15:56.760 10:58:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:15:56.760 10:58:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:15:56.760 10:58:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.xLIg0GM066 00:15:56.760 10:58:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=3568305 00:15:56.760 10:58:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 3568305 /var/tmp/spdk-raid.sock 00:15:56.760 10:58:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:56.760 10:58:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 3568305 ']' 00:15:56.760 10:58:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:15:56.760 10:58:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:56.760 10:58:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:56.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:56.760 10:58:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:56.760 10:58:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.760 [2024-07-25 10:58:03.846825] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:56.760 [2024-07-25 10:58:03.846919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3568305 ] 00:15:57.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.019 EAL: Requested device 0000:3d:01.0 cannot be used 00:15:57.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.019 EAL: Requested device 0000:3d:01.1 cannot be used 00:15:57.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.019 EAL: Requested device 0000:3d:01.2 cannot be used 00:15:57.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.019 EAL: Requested device 0000:3d:01.3 cannot be used 00:15:57.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.019 EAL: Requested device 0000:3d:01.4 cannot be used 00:15:57.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.019 EAL: Requested device 0000:3d:01.5 cannot be used 00:15:57.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:15:57.019 EAL: Requested device 0000:3d:01.6 cannot be used 00:15:57.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.019 EAL: Requested device 0000:3d:01.7 cannot be used 00:15:57.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.019 EAL: Requested device 0000:3d:02.0 cannot be used 00:15:57.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.019 EAL: Requested device 0000:3d:02.1 cannot be used 00:15:57.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.019 EAL: Requested device 0000:3d:02.2 cannot be used 00:15:57.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.019 EAL: Requested device 0000:3d:02.3 cannot be used 00:15:57.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.019 EAL: Requested device 0000:3d:02.4 cannot be used 00:15:57.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.019 EAL: Requested device 0000:3d:02.5 cannot be used 00:15:57.019 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.019 EAL: Requested device 0000:3d:02.6 cannot be used 00:15:57.020 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.020 EAL: Requested device 0000:3d:02.7 cannot be used 00:15:57.020 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.020 EAL: Requested device 0000:3f:01.0 cannot be used 00:15:57.020 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.020 EAL: Requested device 0000:3f:01.1 cannot be used 00:15:57.020 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.020 EAL: Requested device 0000:3f:01.2 cannot be used 00:15:57.020 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.020 EAL: Requested device 0000:3f:01.3 cannot be used 00:15:57.020 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.020 EAL: 
Requested device 0000:3f:01.4 cannot be used 00:15:57.020 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.020 EAL: Requested device 0000:3f:01.5 cannot be used 00:15:57.020 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.020 EAL: Requested device 0000:3f:01.6 cannot be used 00:15:57.020 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.020 EAL: Requested device 0000:3f:01.7 cannot be used 00:15:57.020 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.020 EAL: Requested device 0000:3f:02.0 cannot be used 00:15:57.020 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.020 EAL: Requested device 0000:3f:02.1 cannot be used 00:15:57.020 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.020 EAL: Requested device 0000:3f:02.2 cannot be used 00:15:57.020 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.020 EAL: Requested device 0000:3f:02.3 cannot be used 00:15:57.020 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.020 EAL: Requested device 0000:3f:02.4 cannot be used 00:15:57.020 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.020 EAL: Requested device 0000:3f:02.5 cannot be used 00:15:57.020 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.020 EAL: Requested device 0000:3f:02.6 cannot be used 00:15:57.020 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:15:57.020 EAL: Requested device 0000:3f:02.7 cannot be used 00:15:57.020 [2024-07-25 10:58:04.074973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.278 [2024-07-25 10:58:04.345899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.846 [2024-07-25 10:58:04.676240] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:57.846 [2024-07-25 10:58:04.676277] 
bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:57.846 10:58:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:57.846 10:58:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:15:57.846 10:58:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:15:57.846 10:58:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:58.105 BaseBdev1_malloc 00:15:58.105 10:58:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:58.363 true 00:15:58.363 10:58:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:58.622 [2024-07-25 10:58:05.572729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:58.622 [2024-07-25 10:58:05.572793] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.622 [2024-07-25 10:58:05.572821] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:15:58.622 [2024-07-25 10:58:05.572842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.622 [2024-07-25 10:58:05.575577] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.622 [2024-07-25 10:58:05.575615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:58.622 BaseBdev1 00:15:58.622 10:58:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:15:58.622 
10:58:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:58.881 BaseBdev2_malloc 00:15:58.881 10:58:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:59.139 true 00:15:59.139 10:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:59.398 [2024-07-25 10:58:06.310725] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:59.398 [2024-07-25 10:58:06.310792] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.398 [2024-07-25 10:58:06.310817] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:15:59.398 [2024-07-25 10:58:06.310838] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.398 [2024-07-25 10:58:06.313607] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.398 [2024-07-25 10:58:06.313645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:59.398 BaseBdev2 00:15:59.398 10:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:15:59.656 [2024-07-25 10:58:06.531381] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:59.656 [2024-07-25 10:58:06.533771] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:59.656 [2024-07-25 
10:58:06.534006] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:59.656 [2024-07-25 10:58:06.534028] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:59.656 [2024-07-25 10:58:06.534382] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:15:59.656 [2024-07-25 10:58:06.534634] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:59.656 [2024-07-25 10:58:06.534650] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:59.656 [2024-07-25 10:58:06.534894] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.656 10:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:59.656 10:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:59.656 10:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:59.656 10:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:59.656 10:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:59.656 10:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:59.656 10:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:59.656 10:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:59.656 10:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:59.656 10:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:59.656 10:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.656 10:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.915 10:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:59.915 "name": "raid_bdev1", 00:15:59.915 "uuid": "c2220bb9-dfd8-4177-a628-8517feed2995", 00:15:59.915 "strip_size_kb": 64, 00:15:59.915 "state": "online", 00:15:59.915 "raid_level": "concat", 00:15:59.915 "superblock": true, 00:15:59.915 "num_base_bdevs": 2, 00:15:59.915 "num_base_bdevs_discovered": 2, 00:15:59.915 "num_base_bdevs_operational": 2, 00:15:59.915 "base_bdevs_list": [ 00:15:59.915 { 00:15:59.915 "name": "BaseBdev1", 00:15:59.915 "uuid": "d0b3dead-ef24-5706-bd96-927d48d4063b", 00:15:59.915 "is_configured": true, 00:15:59.915 "data_offset": 2048, 00:15:59.915 "data_size": 63488 00:15:59.915 }, 00:15:59.915 { 00:15:59.915 "name": "BaseBdev2", 00:15:59.915 "uuid": "d60d13bb-4a66-5f05-9305-b94a29906728", 00:15:59.915 "is_configured": true, 00:15:59.915 "data_offset": 2048, 00:15:59.915 "data_size": 63488 00:15:59.915 } 00:15:59.915 ] 00:15:59.915 }' 00:15:59.915 10:58:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:59.915 10:58:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.486 10:58:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:16:00.486 10:58:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:00.486 [2024-07-25 10:58:07.451978] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:16:01.511 10:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:01.511 10:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:16:01.511 10:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:16:01.511 10:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:16:01.511 10:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:16:01.511 10:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:01.511 10:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:01.511 10:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:16:01.511 10:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:01.511 10:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:01.511 10:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:01.511 10:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:01.511 10:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:01.511 10:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:01.511 10:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:01.511 10:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.770 10:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:01.770 "name": "raid_bdev1", 
00:16:01.770 "uuid": "c2220bb9-dfd8-4177-a628-8517feed2995", 00:16:01.770 "strip_size_kb": 64, 00:16:01.770 "state": "online", 00:16:01.770 "raid_level": "concat", 00:16:01.770 "superblock": true, 00:16:01.770 "num_base_bdevs": 2, 00:16:01.770 "num_base_bdevs_discovered": 2, 00:16:01.770 "num_base_bdevs_operational": 2, 00:16:01.770 "base_bdevs_list": [ 00:16:01.770 { 00:16:01.770 "name": "BaseBdev1", 00:16:01.770 "uuid": "d0b3dead-ef24-5706-bd96-927d48d4063b", 00:16:01.770 "is_configured": true, 00:16:01.770 "data_offset": 2048, 00:16:01.770 "data_size": 63488 00:16:01.770 }, 00:16:01.770 { 00:16:01.770 "name": "BaseBdev2", 00:16:01.770 "uuid": "d60d13bb-4a66-5f05-9305-b94a29906728", 00:16:01.770 "is_configured": true, 00:16:01.770 "data_offset": 2048, 00:16:01.770 "data_size": 63488 00:16:01.770 } 00:16:01.770 ] 00:16:01.770 }' 00:16:01.770 10:58:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:01.770 10:58:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.338 10:58:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:02.597 [2024-07-25 10:58:09.573855] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:02.597 [2024-07-25 10:58:09.573910] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:02.597 [2024-07-25 10:58:09.577209] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:02.597 [2024-07-25 10:58:09.577267] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.597 [2024-07-25 10:58:09.577306] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:02.597 [2024-07-25 10:58:09.577332] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:16:02.597 0 00:16:02.597 10:58:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 3568305 00:16:02.597 10:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 3568305 ']' 00:16:02.597 10:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 3568305 00:16:02.597 10:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:16:02.597 10:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:02.597 10:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3568305 00:16:02.597 10:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:02.597 10:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:02.597 10:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3568305' 00:16:02.597 killing process with pid 3568305 00:16:02.597 10:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 3568305 00:16:02.597 [2024-07-25 10:58:09.649478] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:02.597 10:58:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 3568305 00:16:02.856 [2024-07-25 10:58:09.749318] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:04.761 10:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.xLIg0GM066 00:16:04.761 10:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:16:04.761 10:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:16:04.762 10:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.47 00:16:04.762 10:58:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:16:04.762 10:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:04.762 10:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:04.762 10:58:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.47 != \0\.\0\0 ]] 00:16:04.762 00:16:04.762 real 0m7.826s 00:16:04.762 user 0m10.935s 00:16:04.762 sys 0m1.164s 00:16:04.762 10:58:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:04.762 10:58:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.762 ************************************ 00:16:04.762 END TEST raid_write_error_test 00:16:04.762 ************************************ 00:16:04.762 10:58:11 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:16:04.762 10:58:11 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:16:04.762 10:58:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:04.762 10:58:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:04.762 10:58:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:04.762 ************************************ 00:16:04.762 START TEST raid_state_function_test 00:16:04.762 ************************************ 00:16:04.762 10:58:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:16:04.762 10:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:16:04.762 10:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:16:04.762 10:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:16:04.762 10:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- 
# local raid_bdev 00:16:04.762 10:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:04.762 10:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:04.762 10:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:04.762 10:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:04.762 10:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:04.762 10:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:16:04.762 10:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:04.762 10:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:04.762 10:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:04.762 10:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:04.762 10:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:04.762 10:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:04.762 10:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:04.762 10:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:04.762 10:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:16:04.762 10:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:16:04.762 10:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:16:04.762 10:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:16:04.762 10:58:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=3569727 00:16:04.762 10:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3569727' 00:16:04.762 Process raid pid: 3569727 00:16:04.762 10:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:04.762 10:58:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 3569727 /var/tmp/spdk-raid.sock 00:16:04.762 10:58:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 3569727 ']' 00:16:04.762 10:58:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:04.762 10:58:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:04.762 10:58:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:04.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:04.762 10:58:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:04.762 10:58:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.762 [2024-07-25 10:58:11.767906] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:16:04.762 [2024-07-25 10:58:11.768021] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:05.020 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:05.020 EAL: Requested device 0000:3d:01.0 cannot be used 00:16:05.020 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:05.020 EAL: Requested device 0000:3d:01.1 cannot be used 00:16:05.020 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:05.020 EAL: Requested device 0000:3d:01.2 cannot be used 00:16:05.020 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:05.020 EAL: Requested device 0000:3d:01.3 cannot be used 00:16:05.020 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:05.020 EAL: Requested device 0000:3d:01.4 cannot be used 00:16:05.020 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:05.020 EAL: Requested device 0000:3d:01.5 cannot be used 00:16:05.020 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:05.020 EAL: Requested device 0000:3d:01.6 cannot be used 00:16:05.020 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:05.020 EAL: Requested device 0000:3d:01.7 cannot be used 00:16:05.020 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:05.020 EAL: Requested device 0000:3d:02.0 cannot be used 00:16:05.020 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:05.020 EAL: Requested device 0000:3d:02.1 cannot be used 00:16:05.020 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:05.020 EAL: Requested device 0000:3d:02.2 cannot be used 00:16:05.020 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:05.020 EAL: Requested device 0000:3d:02.3 cannot be used 00:16:05.020 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:05.020 EAL: Requested device 0000:3d:02.4 cannot be used 00:16:05.020 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:05.020 EAL: Requested device 0000:3d:02.5 cannot be used 00:16:05.020 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:05.020 EAL: Requested device 0000:3d:02.6 cannot be used 00:16:05.020 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:05.020 EAL: Requested device 0000:3d:02.7 cannot be used 00:16:05.020 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:05.020 EAL: Requested device 0000:3f:01.0 cannot be used 00:16:05.020 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:05.020 EAL: Requested device 0000:3f:01.1 cannot be used 00:16:05.020 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:05.020 EAL: Requested device 0000:3f:01.2 cannot be used 00:16:05.020 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:05.020 EAL: Requested device 0000:3f:01.3 cannot be used 00:16:05.020 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:05.020 EAL: Requested device 0000:3f:01.4 cannot be used 00:16:05.020 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:05.020 EAL: Requested device 0000:3f:01.5 cannot be used 00:16:05.020 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:05.020 EAL: Requested device 0000:3f:01.6 cannot be used 00:16:05.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:05.021 EAL: Requested device 0000:3f:01.7 cannot be used 00:16:05.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:05.021 EAL: Requested device 0000:3f:02.0 cannot be used 00:16:05.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:05.021 EAL: Requested device 0000:3f:02.1 cannot be used 00:16:05.021 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:05.021 EAL: Requested device 0000:3f:02.2 cannot be used 00:16:05.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:05.021 EAL: Requested device 0000:3f:02.3 cannot be used 00:16:05.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:05.021 EAL: Requested device 0000:3f:02.4 cannot be used 00:16:05.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:05.021 EAL: Requested device 0000:3f:02.5 cannot be used 00:16:05.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:05.021 EAL: Requested device 0000:3f:02.6 cannot be used 00:16:05.021 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:05.021 EAL: Requested device 0000:3f:02.7 cannot be used 00:16:05.021 [2024-07-25 10:58:11.993797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.279 [2024-07-25 10:58:12.258567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.537 [2024-07-25 10:58:12.574518] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:05.537 [2024-07-25 10:58:12.574555] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:05.795 10:58:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:05.795 10:58:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:16:05.795 10:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:06.053 [2024-07-25 10:58:12.944913] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:06.053 [2024-07-25 10:58:12.944969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 
00:16:06.053 [2024-07-25 10:58:12.944984] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:06.053 [2024-07-25 10:58:12.945001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:06.053 10:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:06.053 10:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:06.053 10:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:06.053 10:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:06.053 10:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:06.053 10:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:06.053 10:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:06.053 10:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:06.053 10:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:06.053 10:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:06.053 10:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:06.053 10:58:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.312 10:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:06.312 "name": "Existed_Raid", 00:16:06.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.312 "strip_size_kb": 0, 
00:16:06.312 "state": "configuring", 00:16:06.312 "raid_level": "raid1", 00:16:06.312 "superblock": false, 00:16:06.312 "num_base_bdevs": 2, 00:16:06.312 "num_base_bdevs_discovered": 0, 00:16:06.312 "num_base_bdevs_operational": 2, 00:16:06.312 "base_bdevs_list": [ 00:16:06.312 { 00:16:06.312 "name": "BaseBdev1", 00:16:06.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.312 "is_configured": false, 00:16:06.312 "data_offset": 0, 00:16:06.312 "data_size": 0 00:16:06.312 }, 00:16:06.312 { 00:16:06.312 "name": "BaseBdev2", 00:16:06.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.312 "is_configured": false, 00:16:06.312 "data_offset": 0, 00:16:06.312 "data_size": 0 00:16:06.312 } 00:16:06.312 ] 00:16:06.312 }' 00:16:06.312 10:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:06.312 10:58:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.877 10:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:06.877 [2024-07-25 10:58:13.959506] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:06.877 [2024-07-25 10:58:13.959549] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:06.877 10:58:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:07.135 [2024-07-25 10:58:14.188165] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:07.135 [2024-07-25 10:58:14.188209] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:07.135 [2024-07-25 10:58:14.188222] bdev.c:8190:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:07.135 [2024-07-25 10:58:14.188238] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:07.135 10:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:07.393 [2024-07-25 10:58:14.473265] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:07.393 BaseBdev1 00:16:07.393 10:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:07.393 10:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:07.393 10:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:07.393 10:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:07.393 10:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:07.393 10:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:07.393 10:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:07.650 10:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:07.909 [ 00:16:07.909 { 00:16:07.909 "name": "BaseBdev1", 00:16:07.909 "aliases": [ 00:16:07.909 "c7bc0339-4872-4505-88c5-960337a7da80" 00:16:07.909 ], 00:16:07.909 "product_name": "Malloc disk", 00:16:07.909 "block_size": 512, 00:16:07.909 "num_blocks": 65536, 00:16:07.909 "uuid": "c7bc0339-4872-4505-88c5-960337a7da80", 00:16:07.909 
"assigned_rate_limits": { 00:16:07.909 "rw_ios_per_sec": 0, 00:16:07.909 "rw_mbytes_per_sec": 0, 00:16:07.909 "r_mbytes_per_sec": 0, 00:16:07.909 "w_mbytes_per_sec": 0 00:16:07.909 }, 00:16:07.909 "claimed": true, 00:16:07.909 "claim_type": "exclusive_write", 00:16:07.909 "zoned": false, 00:16:07.909 "supported_io_types": { 00:16:07.910 "read": true, 00:16:07.910 "write": true, 00:16:07.910 "unmap": true, 00:16:07.910 "flush": true, 00:16:07.910 "reset": true, 00:16:07.910 "nvme_admin": false, 00:16:07.910 "nvme_io": false, 00:16:07.910 "nvme_io_md": false, 00:16:07.910 "write_zeroes": true, 00:16:07.910 "zcopy": true, 00:16:07.910 "get_zone_info": false, 00:16:07.910 "zone_management": false, 00:16:07.910 "zone_append": false, 00:16:07.910 "compare": false, 00:16:07.910 "compare_and_write": false, 00:16:07.910 "abort": true, 00:16:07.910 "seek_hole": false, 00:16:07.910 "seek_data": false, 00:16:07.910 "copy": true, 00:16:07.910 "nvme_iov_md": false 00:16:07.910 }, 00:16:07.910 "memory_domains": [ 00:16:07.910 { 00:16:07.910 "dma_device_id": "system", 00:16:07.910 "dma_device_type": 1 00:16:07.910 }, 00:16:07.910 { 00:16:07.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.910 "dma_device_type": 2 00:16:07.910 } 00:16:07.910 ], 00:16:07.910 "driver_specific": {} 00:16:07.910 } 00:16:07.910 ] 00:16:07.910 10:58:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:07.910 10:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:07.910 10:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:07.910 10:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:07.910 10:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:07.910 10:58:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:07.910 10:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:07.910 10:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:07.910 10:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:07.910 10:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:07.910 10:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:07.910 10:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:07.910 10:58:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.169 10:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:08.169 "name": "Existed_Raid", 00:16:08.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.169 "strip_size_kb": 0, 00:16:08.169 "state": "configuring", 00:16:08.169 "raid_level": "raid1", 00:16:08.169 "superblock": false, 00:16:08.169 "num_base_bdevs": 2, 00:16:08.169 "num_base_bdevs_discovered": 1, 00:16:08.169 "num_base_bdevs_operational": 2, 00:16:08.169 "base_bdevs_list": [ 00:16:08.169 { 00:16:08.169 "name": "BaseBdev1", 00:16:08.169 "uuid": "c7bc0339-4872-4505-88c5-960337a7da80", 00:16:08.169 "is_configured": true, 00:16:08.169 "data_offset": 0, 00:16:08.169 "data_size": 65536 00:16:08.169 }, 00:16:08.169 { 00:16:08.169 "name": "BaseBdev2", 00:16:08.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.169 "is_configured": false, 00:16:08.169 "data_offset": 0, 00:16:08.169 "data_size": 0 00:16:08.169 } 00:16:08.169 ] 00:16:08.169 }' 00:16:08.169 10:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:16:08.169 10:58:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.737 10:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:08.996 [2024-07-25 10:58:15.949285] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:08.996 [2024-07-25 10:58:15.949341] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:08.996 10:58:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:09.256 [2024-07-25 10:58:16.177959] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:09.256 [2024-07-25 10:58:16.180266] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:09.256 [2024-07-25 10:58:16.180309] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:09.256 10:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:09.256 10:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:09.256 10:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:09.256 10:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:09.256 10:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:09.256 10:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:09.256 10:58:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:09.256 10:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:09.256 10:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:09.256 10:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:09.256 10:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:09.256 10:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:09.256 10:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.256 10:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.515 10:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:09.515 "name": "Existed_Raid", 00:16:09.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.515 "strip_size_kb": 0, 00:16:09.515 "state": "configuring", 00:16:09.515 "raid_level": "raid1", 00:16:09.515 "superblock": false, 00:16:09.515 "num_base_bdevs": 2, 00:16:09.515 "num_base_bdevs_discovered": 1, 00:16:09.515 "num_base_bdevs_operational": 2, 00:16:09.515 "base_bdevs_list": [ 00:16:09.515 { 00:16:09.515 "name": "BaseBdev1", 00:16:09.515 "uuid": "c7bc0339-4872-4505-88c5-960337a7da80", 00:16:09.515 "is_configured": true, 00:16:09.515 "data_offset": 0, 00:16:09.515 "data_size": 65536 00:16:09.515 }, 00:16:09.515 { 00:16:09.515 "name": "BaseBdev2", 00:16:09.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.515 "is_configured": false, 00:16:09.515 "data_offset": 0, 00:16:09.515 "data_size": 0 00:16:09.515 } 00:16:09.515 ] 00:16:09.515 }' 00:16:09.515 10:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:16:09.515 10:58:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.082 10:58:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:10.342 [2024-07-25 10:58:17.252124] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:10.342 [2024-07-25 10:58:17.252186] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:10.342 [2024-07-25 10:58:17.252205] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:10.342 [2024-07-25 10:58:17.252539] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:16:10.342 [2024-07-25 10:58:17.252782] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:10.342 [2024-07-25 10:58:17.252800] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:10.342 [2024-07-25 10:58:17.253130] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.342 BaseBdev2 00:16:10.342 10:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:10.342 10:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:10.342 10:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:10.342 10:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:10.342 10:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:10.342 10:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:10.342 10:58:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:10.601 10:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:10.601 [ 00:16:10.601 { 00:16:10.601 "name": "BaseBdev2", 00:16:10.601 "aliases": [ 00:16:10.601 "8f11cdea-b605-4850-9511-b7f67fb29ca2" 00:16:10.601 ], 00:16:10.601 "product_name": "Malloc disk", 00:16:10.601 "block_size": 512, 00:16:10.601 "num_blocks": 65536, 00:16:10.601 "uuid": "8f11cdea-b605-4850-9511-b7f67fb29ca2", 00:16:10.601 "assigned_rate_limits": { 00:16:10.601 "rw_ios_per_sec": 0, 00:16:10.601 "rw_mbytes_per_sec": 0, 00:16:10.601 "r_mbytes_per_sec": 0, 00:16:10.601 "w_mbytes_per_sec": 0 00:16:10.601 }, 00:16:10.601 "claimed": true, 00:16:10.601 "claim_type": "exclusive_write", 00:16:10.601 "zoned": false, 00:16:10.601 "supported_io_types": { 00:16:10.601 "read": true, 00:16:10.601 "write": true, 00:16:10.601 "unmap": true, 00:16:10.601 "flush": true, 00:16:10.601 "reset": true, 00:16:10.601 "nvme_admin": false, 00:16:10.601 "nvme_io": false, 00:16:10.601 "nvme_io_md": false, 00:16:10.601 "write_zeroes": true, 00:16:10.601 "zcopy": true, 00:16:10.601 "get_zone_info": false, 00:16:10.601 "zone_management": false, 00:16:10.601 "zone_append": false, 00:16:10.601 "compare": false, 00:16:10.601 "compare_and_write": false, 00:16:10.601 "abort": true, 00:16:10.601 "seek_hole": false, 00:16:10.601 "seek_data": false, 00:16:10.601 "copy": true, 00:16:10.601 "nvme_iov_md": false 00:16:10.601 }, 00:16:10.602 "memory_domains": [ 00:16:10.602 { 00:16:10.602 "dma_device_id": "system", 00:16:10.602 "dma_device_type": 1 00:16:10.602 }, 00:16:10.602 { 00:16:10.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.602 "dma_device_type": 2 00:16:10.602 } 00:16:10.602 ], 00:16:10.602 "driver_specific": {} 
00:16:10.602 } 00:16:10.602 ] 00:16:10.861 10:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:10.861 10:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:10.861 10:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:10.861 10:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:10.861 10:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:10.861 10:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:10.861 10:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:10.861 10:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:10.861 10:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:10.861 10:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:10.861 10:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:10.861 10:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:10.861 10:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:10.861 10:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.861 10:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.861 10:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:10.861 "name": "Existed_Raid", 00:16:10.861 "uuid": 
"c7ae18c1-e622-453d-b5f3-b761b09f6093", 00:16:10.861 "strip_size_kb": 0, 00:16:10.861 "state": "online", 00:16:10.861 "raid_level": "raid1", 00:16:10.861 "superblock": false, 00:16:10.861 "num_base_bdevs": 2, 00:16:10.861 "num_base_bdevs_discovered": 2, 00:16:10.861 "num_base_bdevs_operational": 2, 00:16:10.861 "base_bdevs_list": [ 00:16:10.861 { 00:16:10.861 "name": "BaseBdev1", 00:16:10.861 "uuid": "c7bc0339-4872-4505-88c5-960337a7da80", 00:16:10.861 "is_configured": true, 00:16:10.861 "data_offset": 0, 00:16:10.861 "data_size": 65536 00:16:10.861 }, 00:16:10.861 { 00:16:10.861 "name": "BaseBdev2", 00:16:10.861 "uuid": "8f11cdea-b605-4850-9511-b7f67fb29ca2", 00:16:10.861 "is_configured": true, 00:16:10.861 "data_offset": 0, 00:16:10.861 "data_size": 65536 00:16:10.861 } 00:16:10.861 ] 00:16:10.861 }' 00:16:10.861 10:58:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:10.861 10:58:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.428 10:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:11.428 10:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:11.428 10:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:11.428 10:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:11.428 10:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:11.428 10:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:11.428 10:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:11.428 10:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 
00:16:11.687 [2024-07-25 10:58:18.716488] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:11.687 10:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:11.687 "name": "Existed_Raid", 00:16:11.687 "aliases": [ 00:16:11.687 "c7ae18c1-e622-453d-b5f3-b761b09f6093" 00:16:11.687 ], 00:16:11.687 "product_name": "Raid Volume", 00:16:11.687 "block_size": 512, 00:16:11.687 "num_blocks": 65536, 00:16:11.687 "uuid": "c7ae18c1-e622-453d-b5f3-b761b09f6093", 00:16:11.687 "assigned_rate_limits": { 00:16:11.687 "rw_ios_per_sec": 0, 00:16:11.687 "rw_mbytes_per_sec": 0, 00:16:11.687 "r_mbytes_per_sec": 0, 00:16:11.687 "w_mbytes_per_sec": 0 00:16:11.687 }, 00:16:11.687 "claimed": false, 00:16:11.687 "zoned": false, 00:16:11.687 "supported_io_types": { 00:16:11.687 "read": true, 00:16:11.687 "write": true, 00:16:11.687 "unmap": false, 00:16:11.687 "flush": false, 00:16:11.687 "reset": true, 00:16:11.687 "nvme_admin": false, 00:16:11.687 "nvme_io": false, 00:16:11.687 "nvme_io_md": false, 00:16:11.687 "write_zeroes": true, 00:16:11.687 "zcopy": false, 00:16:11.687 "get_zone_info": false, 00:16:11.687 "zone_management": false, 00:16:11.687 "zone_append": false, 00:16:11.687 "compare": false, 00:16:11.687 "compare_and_write": false, 00:16:11.687 "abort": false, 00:16:11.687 "seek_hole": false, 00:16:11.687 "seek_data": false, 00:16:11.687 "copy": false, 00:16:11.687 "nvme_iov_md": false 00:16:11.687 }, 00:16:11.687 "memory_domains": [ 00:16:11.687 { 00:16:11.687 "dma_device_id": "system", 00:16:11.687 "dma_device_type": 1 00:16:11.687 }, 00:16:11.687 { 00:16:11.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.687 "dma_device_type": 2 00:16:11.687 }, 00:16:11.687 { 00:16:11.687 "dma_device_id": "system", 00:16:11.687 "dma_device_type": 1 00:16:11.688 }, 00:16:11.688 { 00:16:11.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.688 "dma_device_type": 2 00:16:11.688 } 00:16:11.688 ], 00:16:11.688 
"driver_specific": { 00:16:11.688 "raid": { 00:16:11.688 "uuid": "c7ae18c1-e622-453d-b5f3-b761b09f6093", 00:16:11.688 "strip_size_kb": 0, 00:16:11.688 "state": "online", 00:16:11.688 "raid_level": "raid1", 00:16:11.688 "superblock": false, 00:16:11.688 "num_base_bdevs": 2, 00:16:11.688 "num_base_bdevs_discovered": 2, 00:16:11.688 "num_base_bdevs_operational": 2, 00:16:11.688 "base_bdevs_list": [ 00:16:11.688 { 00:16:11.688 "name": "BaseBdev1", 00:16:11.688 "uuid": "c7bc0339-4872-4505-88c5-960337a7da80", 00:16:11.688 "is_configured": true, 00:16:11.688 "data_offset": 0, 00:16:11.688 "data_size": 65536 00:16:11.688 }, 00:16:11.688 { 00:16:11.688 "name": "BaseBdev2", 00:16:11.688 "uuid": "8f11cdea-b605-4850-9511-b7f67fb29ca2", 00:16:11.688 "is_configured": true, 00:16:11.688 "data_offset": 0, 00:16:11.688 "data_size": 65536 00:16:11.688 } 00:16:11.688 ] 00:16:11.688 } 00:16:11.688 } 00:16:11.688 }' 00:16:11.688 10:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:11.688 10:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:11.688 BaseBdev2' 00:16:11.688 10:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:11.688 10:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:11.688 10:58:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:11.947 10:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:11.947 "name": "BaseBdev1", 00:16:11.947 "aliases": [ 00:16:11.947 "c7bc0339-4872-4505-88c5-960337a7da80" 00:16:11.947 ], 00:16:11.947 "product_name": "Malloc disk", 00:16:11.947 "block_size": 512, 00:16:11.947 "num_blocks": 65536, 00:16:11.947 
"uuid": "c7bc0339-4872-4505-88c5-960337a7da80", 00:16:11.947 "assigned_rate_limits": { 00:16:11.947 "rw_ios_per_sec": 0, 00:16:11.947 "rw_mbytes_per_sec": 0, 00:16:11.947 "r_mbytes_per_sec": 0, 00:16:11.947 "w_mbytes_per_sec": 0 00:16:11.947 }, 00:16:11.947 "claimed": true, 00:16:11.947 "claim_type": "exclusive_write", 00:16:11.947 "zoned": false, 00:16:11.947 "supported_io_types": { 00:16:11.947 "read": true, 00:16:11.947 "write": true, 00:16:11.947 "unmap": true, 00:16:11.947 "flush": true, 00:16:11.947 "reset": true, 00:16:11.947 "nvme_admin": false, 00:16:11.947 "nvme_io": false, 00:16:11.947 "nvme_io_md": false, 00:16:11.947 "write_zeroes": true, 00:16:11.947 "zcopy": true, 00:16:11.947 "get_zone_info": false, 00:16:11.947 "zone_management": false, 00:16:11.947 "zone_append": false, 00:16:11.947 "compare": false, 00:16:11.947 "compare_and_write": false, 00:16:11.947 "abort": true, 00:16:11.947 "seek_hole": false, 00:16:11.947 "seek_data": false, 00:16:11.947 "copy": true, 00:16:11.947 "nvme_iov_md": false 00:16:11.947 }, 00:16:11.947 "memory_domains": [ 00:16:11.947 { 00:16:11.947 "dma_device_id": "system", 00:16:11.947 "dma_device_type": 1 00:16:11.947 }, 00:16:11.947 { 00:16:11.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.947 "dma_device_type": 2 00:16:11.947 } 00:16:11.947 ], 00:16:11.947 "driver_specific": {} 00:16:11.947 }' 00:16:11.947 10:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:11.947 10:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:12.206 10:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:12.206 10:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:12.206 10:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:12.206 10:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:12.206 10:58:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:12.206 10:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:12.206 10:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:12.206 10:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:12.206 10:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:12.465 10:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:12.465 10:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:12.465 10:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:12.465 10:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:12.465 10:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:12.465 "name": "BaseBdev2", 00:16:12.465 "aliases": [ 00:16:12.465 "8f11cdea-b605-4850-9511-b7f67fb29ca2" 00:16:12.465 ], 00:16:12.465 "product_name": "Malloc disk", 00:16:12.465 "block_size": 512, 00:16:12.465 "num_blocks": 65536, 00:16:12.465 "uuid": "8f11cdea-b605-4850-9511-b7f67fb29ca2", 00:16:12.465 "assigned_rate_limits": { 00:16:12.465 "rw_ios_per_sec": 0, 00:16:12.465 "rw_mbytes_per_sec": 0, 00:16:12.465 "r_mbytes_per_sec": 0, 00:16:12.465 "w_mbytes_per_sec": 0 00:16:12.465 }, 00:16:12.465 "claimed": true, 00:16:12.465 "claim_type": "exclusive_write", 00:16:12.465 "zoned": false, 00:16:12.465 "supported_io_types": { 00:16:12.465 "read": true, 00:16:12.465 "write": true, 00:16:12.465 "unmap": true, 00:16:12.465 "flush": true, 00:16:12.465 "reset": true, 00:16:12.465 "nvme_admin": false, 00:16:12.465 "nvme_io": false, 00:16:12.465 "nvme_io_md": false, 
00:16:12.465 "write_zeroes": true, 00:16:12.465 "zcopy": true, 00:16:12.465 "get_zone_info": false, 00:16:12.465 "zone_management": false, 00:16:12.465 "zone_append": false, 00:16:12.465 "compare": false, 00:16:12.465 "compare_and_write": false, 00:16:12.465 "abort": true, 00:16:12.465 "seek_hole": false, 00:16:12.465 "seek_data": false, 00:16:12.465 "copy": true, 00:16:12.465 "nvme_iov_md": false 00:16:12.465 }, 00:16:12.465 "memory_domains": [ 00:16:12.465 { 00:16:12.465 "dma_device_id": "system", 00:16:12.465 "dma_device_type": 1 00:16:12.465 }, 00:16:12.465 { 00:16:12.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.465 "dma_device_type": 2 00:16:12.465 } 00:16:12.465 ], 00:16:12.465 "driver_specific": {} 00:16:12.465 }' 00:16:12.465 10:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:12.723 10:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:12.723 10:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:12.723 10:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:12.723 10:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:12.723 10:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:12.723 10:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:12.723 10:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:12.723 10:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:12.983 10:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:12.983 10:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:12.983 10:58:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:12.983 10:58:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:13.242 [2024-07-25 10:58:20.128040] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:13.242 10:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:13.242 10:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:16:13.242 10:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:13.242 10:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:13.242 10:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:16:13.242 10:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:13.242 10:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:13.242 10:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:13.242 10:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:13.242 10:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:13.242 10:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:13.242 10:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:13.242 10:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:13.242 10:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:13.242 10:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:13.242 10:58:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.242 10:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.502 10:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:13.502 "name": "Existed_Raid", 00:16:13.502 "uuid": "c7ae18c1-e622-453d-b5f3-b761b09f6093", 00:16:13.502 "strip_size_kb": 0, 00:16:13.502 "state": "online", 00:16:13.502 "raid_level": "raid1", 00:16:13.502 "superblock": false, 00:16:13.502 "num_base_bdevs": 2, 00:16:13.502 "num_base_bdevs_discovered": 1, 00:16:13.502 "num_base_bdevs_operational": 1, 00:16:13.502 "base_bdevs_list": [ 00:16:13.502 { 00:16:13.502 "name": null, 00:16:13.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.502 "is_configured": false, 00:16:13.502 "data_offset": 0, 00:16:13.502 "data_size": 65536 00:16:13.502 }, 00:16:13.502 { 00:16:13.502 "name": "BaseBdev2", 00:16:13.502 "uuid": "8f11cdea-b605-4850-9511-b7f67fb29ca2", 00:16:13.502 "is_configured": true, 00:16:13.502 "data_offset": 0, 00:16:13.502 "data_size": 65536 00:16:13.502 } 00:16:13.502 ] 00:16:13.502 }' 00:16:13.502 10:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:13.502 10:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.069 10:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:14.069 10:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:14.069 10:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.069 10:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r 
'.[0]["name"]' 00:16:14.327 10:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:14.327 10:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:14.327 10:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:14.327 [2024-07-25 10:58:21.443506] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:14.327 [2024-07-25 10:58:21.443612] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:14.585 [2024-07-25 10:58:21.576005] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:14.585 [2024-07-25 10:58:21.576058] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:14.585 [2024-07-25 10:58:21.576076] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:14.585 10:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:14.585 10:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:14.585 10:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.585 10:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:14.844 10:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:14.844 10:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:14.844 10:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:16:14.844 10:58:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 3569727 00:16:14.844 10:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 3569727 ']' 00:16:14.844 10:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 3569727 00:16:14.844 10:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:16:14.844 10:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:14.844 10:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3569727 00:16:14.844 10:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:14.844 10:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:14.844 10:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3569727' 00:16:14.844 killing process with pid 3569727 00:16:14.844 10:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 3569727 00:16:14.844 [2024-07-25 10:58:21.878258] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:14.844 10:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 3569727 00:16:14.844 [2024-07-25 10:58:21.902325] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:16.775 10:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:16:16.775 00:16:16.775 real 0m11.990s 00:16:16.775 user 0m19.561s 00:16:16.775 sys 0m2.100s 00:16:16.775 10:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:16.775 10:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.775 ************************************ 00:16:16.775 END TEST 
raid_state_function_test 00:16:16.775 ************************************ 00:16:16.775 10:58:23 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:16:16.775 10:58:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:16.775 10:58:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:16.775 10:58:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:16.775 ************************************ 00:16:16.775 START TEST raid_state_function_test_sb 00:16:16.775 ************************************ 00:16:16.775 10:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:16:16.775 10:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:16:16.775 10:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:16:16.775 10:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:16:16.775 10:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:16.775 10:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:16.775 10:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:16.775 10:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:16.775 10:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:16.775 10:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:16.775 10:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:16:16.775 10:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:16.775 10:58:23 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:16.775 10:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:16.775 10:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:16.775 10:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:16.775 10:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:16.775 10:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:16.775 10:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:16.775 10:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:16:16.775 10:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:16:16.775 10:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:16:16.775 10:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:16:16.775 10:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=3572055 00:16:16.775 10:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3572055' 00:16:16.775 Process raid pid: 3572055 00:16:16.775 10:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:16.775 10:58:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 3572055 /var/tmp/spdk-raid.sock 00:16:16.775 10:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 3572055 ']' 00:16:16.775 10:58:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:16.775 10:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:16.775 10:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:16.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:16.775 10:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:16.775 10:58:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.775 [2024-07-25 10:58:23.846903] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:16.775 [2024-07-25 10:58:23.847022] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.035 EAL: Requested device 0000:3d:01.0 cannot be used 00:16:17.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.035 EAL: Requested device 0000:3d:01.1 cannot be used 00:16:17.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.035 EAL: Requested device 0000:3d:01.2 cannot be used 00:16:17.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.035 EAL: Requested device 0000:3d:01.3 cannot be used 00:16:17.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.035 EAL: Requested device 0000:3d:01.4 cannot be used 00:16:17.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.035 EAL: Requested device 0000:3d:01.5 cannot be used 00:16:17.035 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:16:17.035 EAL: Requested device 0000:3d:01.6 cannot be used 00:16:17.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.035 EAL: Requested device 0000:3d:01.7 cannot be used 00:16:17.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.035 EAL: Requested device 0000:3d:02.0 cannot be used 00:16:17.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.035 EAL: Requested device 0000:3d:02.1 cannot be used 00:16:17.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.035 EAL: Requested device 0000:3d:02.2 cannot be used 00:16:17.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.035 EAL: Requested device 0000:3d:02.3 cannot be used 00:16:17.035 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.036 EAL: Requested device 0000:3d:02.4 cannot be used 00:16:17.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.036 EAL: Requested device 0000:3d:02.5 cannot be used 00:16:17.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.036 EAL: Requested device 0000:3d:02.6 cannot be used 00:16:17.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.036 EAL: Requested device 0000:3d:02.7 cannot be used 00:16:17.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.036 EAL: Requested device 0000:3f:01.0 cannot be used 00:16:17.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.036 EAL: Requested device 0000:3f:01.1 cannot be used 00:16:17.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.036 EAL: Requested device 0000:3f:01.2 cannot be used 00:16:17.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.036 EAL: Requested device 0000:3f:01.3 cannot be used 00:16:17.036 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:16:17.036 EAL: Requested device 0000:3f:01.4 cannot be used 00:16:17.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.036 EAL: Requested device 0000:3f:01.5 cannot be used 00:16:17.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.036 EAL: Requested device 0000:3f:01.6 cannot be used 00:16:17.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.036 EAL: Requested device 0000:3f:01.7 cannot be used 00:16:17.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.036 EAL: Requested device 0000:3f:02.0 cannot be used 00:16:17.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.036 EAL: Requested device 0000:3f:02.1 cannot be used 00:16:17.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.036 EAL: Requested device 0000:3f:02.2 cannot be used 00:16:17.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.036 EAL: Requested device 0000:3f:02.3 cannot be used 00:16:17.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.036 EAL: Requested device 0000:3f:02.4 cannot be used 00:16:17.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.036 EAL: Requested device 0000:3f:02.5 cannot be used 00:16:17.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.036 EAL: Requested device 0000:3f:02.6 cannot be used 00:16:17.036 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:17.036 EAL: Requested device 0000:3f:02.7 cannot be used 00:16:17.036 [2024-07-25 10:58:24.073648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.295 [2024-07-25 10:58:24.339148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.554 [2024-07-25 10:58:24.666398] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:17.554 [2024-07-25 10:58:24.666433] 
bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:17.813 10:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:17.813 10:58:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:16:17.813 10:58:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:18.072 [2024-07-25 10:58:25.048469] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:18.072 [2024-07-25 10:58:25.048524] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:18.072 [2024-07-25 10:58:25.048539] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:18.072 [2024-07-25 10:58:25.048555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:18.072 10:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:18.072 10:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:18.072 10:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:18.072 10:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:18.072 10:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:18.072 10:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:18.072 10:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:18.072 10:58:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:18.072 10:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:18.072 10:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:18.072 10:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.072 10:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.331 10:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:18.331 "name": "Existed_Raid", 00:16:18.331 "uuid": "af32a497-0f67-4c0b-bbdd-3057b83275f5", 00:16:18.331 "strip_size_kb": 0, 00:16:18.331 "state": "configuring", 00:16:18.331 "raid_level": "raid1", 00:16:18.331 "superblock": true, 00:16:18.331 "num_base_bdevs": 2, 00:16:18.331 "num_base_bdevs_discovered": 0, 00:16:18.331 "num_base_bdevs_operational": 2, 00:16:18.331 "base_bdevs_list": [ 00:16:18.331 { 00:16:18.331 "name": "BaseBdev1", 00:16:18.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.331 "is_configured": false, 00:16:18.331 "data_offset": 0, 00:16:18.331 "data_size": 0 00:16:18.331 }, 00:16:18.331 { 00:16:18.331 "name": "BaseBdev2", 00:16:18.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.331 "is_configured": false, 00:16:18.331 "data_offset": 0, 00:16:18.331 "data_size": 0 00:16:18.331 } 00:16:18.331 ] 00:16:18.331 }' 00:16:18.331 10:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:18.331 10:58:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.898 10:58:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete 
Existed_Raid 00:16:19.157 [2024-07-25 10:58:26.047192] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:19.157 [2024-07-25 10:58:26.047235] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:19.157 10:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:19.416 [2024-07-25 10:58:26.275850] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:19.416 [2024-07-25 10:58:26.275897] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:19.416 [2024-07-25 10:58:26.275910] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:19.416 [2024-07-25 10:58:26.275927] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:19.416 10:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:19.674 [2024-07-25 10:58:26.557698] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:19.674 BaseBdev1 00:16:19.674 10:58:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:19.674 10:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:19.674 10:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:19.674 10:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:19.674 10:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 
00:16:19.674 10:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:19.674 10:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:19.933 10:58:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:19.933 [ 00:16:19.933 { 00:16:19.933 "name": "BaseBdev1", 00:16:19.933 "aliases": [ 00:16:19.934 "4a29583c-710b-4531-bd10-bf79a6526ddc" 00:16:19.934 ], 00:16:19.934 "product_name": "Malloc disk", 00:16:19.934 "block_size": 512, 00:16:19.934 "num_blocks": 65536, 00:16:19.934 "uuid": "4a29583c-710b-4531-bd10-bf79a6526ddc", 00:16:19.934 "assigned_rate_limits": { 00:16:19.934 "rw_ios_per_sec": 0, 00:16:19.934 "rw_mbytes_per_sec": 0, 00:16:19.934 "r_mbytes_per_sec": 0, 00:16:19.934 "w_mbytes_per_sec": 0 00:16:19.934 }, 00:16:19.934 "claimed": true, 00:16:19.934 "claim_type": "exclusive_write", 00:16:19.934 "zoned": false, 00:16:19.934 "supported_io_types": { 00:16:19.934 "read": true, 00:16:19.934 "write": true, 00:16:19.934 "unmap": true, 00:16:19.934 "flush": true, 00:16:19.934 "reset": true, 00:16:19.934 "nvme_admin": false, 00:16:19.934 "nvme_io": false, 00:16:19.934 "nvme_io_md": false, 00:16:19.934 "write_zeroes": true, 00:16:19.934 "zcopy": true, 00:16:19.934 "get_zone_info": false, 00:16:19.934 "zone_management": false, 00:16:19.934 "zone_append": false, 00:16:19.934 "compare": false, 00:16:19.934 "compare_and_write": false, 00:16:19.934 "abort": true, 00:16:19.934 "seek_hole": false, 00:16:19.934 "seek_data": false, 00:16:19.934 "copy": true, 00:16:19.934 "nvme_iov_md": false 00:16:19.934 }, 00:16:19.934 "memory_domains": [ 00:16:19.934 { 00:16:19.934 "dma_device_id": "system", 00:16:19.934 "dma_device_type": 1 
00:16:19.934 }, 00:16:19.934 { 00:16:19.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.934 "dma_device_type": 2 00:16:19.934 } 00:16:19.934 ], 00:16:19.934 "driver_specific": {} 00:16:19.934 } 00:16:19.934 ] 00:16:19.934 10:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:19.934 10:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:19.934 10:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:19.934 10:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:19.934 10:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:19.934 10:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:19.934 10:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:19.934 10:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:19.934 10:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:19.934 10:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:19.934 10:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:19.934 10:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.934 10:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.193 10:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:20.193 "name": "Existed_Raid", 
00:16:20.193 "uuid": "652d74c0-c407-4111-8d46-a534ebc6e77a", 00:16:20.193 "strip_size_kb": 0, 00:16:20.193 "state": "configuring", 00:16:20.193 "raid_level": "raid1", 00:16:20.193 "superblock": true, 00:16:20.193 "num_base_bdevs": 2, 00:16:20.193 "num_base_bdevs_discovered": 1, 00:16:20.193 "num_base_bdevs_operational": 2, 00:16:20.193 "base_bdevs_list": [ 00:16:20.193 { 00:16:20.193 "name": "BaseBdev1", 00:16:20.193 "uuid": "4a29583c-710b-4531-bd10-bf79a6526ddc", 00:16:20.193 "is_configured": true, 00:16:20.193 "data_offset": 2048, 00:16:20.193 "data_size": 63488 00:16:20.193 }, 00:16:20.193 { 00:16:20.193 "name": "BaseBdev2", 00:16:20.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.193 "is_configured": false, 00:16:20.193 "data_offset": 0, 00:16:20.193 "data_size": 0 00:16:20.193 } 00:16:20.193 ] 00:16:20.193 }' 00:16:20.193 10:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:20.193 10:58:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.760 10:58:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:21.017 [2024-07-25 10:58:28.025710] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:21.017 [2024-07-25 10:58:28.025767] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:21.017 10:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:21.276 [2024-07-25 10:58:28.266441] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:21.276 [2024-07-25 10:58:28.268786] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev2 00:16:21.276 [2024-07-25 10:58:28.268830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:21.276 10:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:21.276 10:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:21.276 10:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:21.276 10:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:21.276 10:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:21.276 10:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:21.276 10:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:21.276 10:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:21.276 10:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:21.276 10:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:21.276 10:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:21.276 10:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:21.276 10:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.276 10:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.535 10:58:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:21.535 "name": "Existed_Raid", 00:16:21.535 "uuid": "2118d6c4-d352-47ac-b81b-5081382f639c", 00:16:21.535 "strip_size_kb": 0, 00:16:21.535 "state": "configuring", 00:16:21.535 "raid_level": "raid1", 00:16:21.535 "superblock": true, 00:16:21.535 "num_base_bdevs": 2, 00:16:21.535 "num_base_bdevs_discovered": 1, 00:16:21.535 "num_base_bdevs_operational": 2, 00:16:21.535 "base_bdevs_list": [ 00:16:21.535 { 00:16:21.535 "name": "BaseBdev1", 00:16:21.535 "uuid": "4a29583c-710b-4531-bd10-bf79a6526ddc", 00:16:21.535 "is_configured": true, 00:16:21.535 "data_offset": 2048, 00:16:21.535 "data_size": 63488 00:16:21.535 }, 00:16:21.535 { 00:16:21.535 "name": "BaseBdev2", 00:16:21.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.535 "is_configured": false, 00:16:21.535 "data_offset": 0, 00:16:21.535 "data_size": 0 00:16:21.535 } 00:16:21.535 ] 00:16:21.535 }' 00:16:21.535 10:58:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:21.535 10:58:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.103 10:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:22.363 [2024-07-25 10:58:29.351406] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:22.363 [2024-07-25 10:58:29.351672] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:22.363 [2024-07-25 10:58:29.351695] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:22.363 [2024-07-25 10:58:29.352017] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:16:22.363 [2024-07-25 10:58:29.352243] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:22.363 
[2024-07-25 10:58:29.352262] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:22.363 [2024-07-25 10:58:29.352459] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.363 BaseBdev2 00:16:22.363 10:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:22.363 10:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:22.363 10:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:22.363 10:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:16:22.363 10:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:22.363 10:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:22.363 10:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:22.622 10:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:22.881 [ 00:16:22.881 { 00:16:22.881 "name": "BaseBdev2", 00:16:22.881 "aliases": [ 00:16:22.881 "fb52ca1c-4797-4f92-9f1c-f446effad7ad" 00:16:22.881 ], 00:16:22.881 "product_name": "Malloc disk", 00:16:22.881 "block_size": 512, 00:16:22.881 "num_blocks": 65536, 00:16:22.881 "uuid": "fb52ca1c-4797-4f92-9f1c-f446effad7ad", 00:16:22.881 "assigned_rate_limits": { 00:16:22.881 "rw_ios_per_sec": 0, 00:16:22.881 "rw_mbytes_per_sec": 0, 00:16:22.881 "r_mbytes_per_sec": 0, 00:16:22.881 "w_mbytes_per_sec": 0 00:16:22.881 }, 00:16:22.881 "claimed": true, 00:16:22.881 "claim_type": 
"exclusive_write", 00:16:22.881 "zoned": false, 00:16:22.881 "supported_io_types": { 00:16:22.881 "read": true, 00:16:22.881 "write": true, 00:16:22.881 "unmap": true, 00:16:22.881 "flush": true, 00:16:22.881 "reset": true, 00:16:22.881 "nvme_admin": false, 00:16:22.881 "nvme_io": false, 00:16:22.881 "nvme_io_md": false, 00:16:22.881 "write_zeroes": true, 00:16:22.881 "zcopy": true, 00:16:22.881 "get_zone_info": false, 00:16:22.881 "zone_management": false, 00:16:22.881 "zone_append": false, 00:16:22.881 "compare": false, 00:16:22.881 "compare_and_write": false, 00:16:22.881 "abort": true, 00:16:22.881 "seek_hole": false, 00:16:22.881 "seek_data": false, 00:16:22.881 "copy": true, 00:16:22.881 "nvme_iov_md": false 00:16:22.881 }, 00:16:22.881 "memory_domains": [ 00:16:22.881 { 00:16:22.881 "dma_device_id": "system", 00:16:22.881 "dma_device_type": 1 00:16:22.881 }, 00:16:22.881 { 00:16:22.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.881 "dma_device_type": 2 00:16:22.881 } 00:16:22.881 ], 00:16:22.881 "driver_specific": {} 00:16:22.881 } 00:16:22.881 ] 00:16:22.881 10:58:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:16:22.881 10:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:22.881 10:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:22.881 10:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:22.881 10:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:22.881 10:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:22.881 10:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:22.881 10:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local 
strip_size=0 00:16:22.881 10:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:22.881 10:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:22.881 10:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:22.881 10:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:22.881 10:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:22.881 10:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.881 10:58:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.140 10:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:23.140 "name": "Existed_Raid", 00:16:23.140 "uuid": "2118d6c4-d352-47ac-b81b-5081382f639c", 00:16:23.140 "strip_size_kb": 0, 00:16:23.140 "state": "online", 00:16:23.140 "raid_level": "raid1", 00:16:23.140 "superblock": true, 00:16:23.140 "num_base_bdevs": 2, 00:16:23.140 "num_base_bdevs_discovered": 2, 00:16:23.140 "num_base_bdevs_operational": 2, 00:16:23.140 "base_bdevs_list": [ 00:16:23.140 { 00:16:23.140 "name": "BaseBdev1", 00:16:23.140 "uuid": "4a29583c-710b-4531-bd10-bf79a6526ddc", 00:16:23.140 "is_configured": true, 00:16:23.140 "data_offset": 2048, 00:16:23.140 "data_size": 63488 00:16:23.140 }, 00:16:23.140 { 00:16:23.140 "name": "BaseBdev2", 00:16:23.140 "uuid": "fb52ca1c-4797-4f92-9f1c-f446effad7ad", 00:16:23.140 "is_configured": true, 00:16:23.140 "data_offset": 2048, 00:16:23.140 "data_size": 63488 00:16:23.140 } 00:16:23.140 ] 00:16:23.140 }' 00:16:23.140 10:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:16:23.140 10:58:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.709 10:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:23.709 10:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:23.709 10:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:23.709 10:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:23.709 10:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:23.709 10:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:23.709 10:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:23.709 10:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:23.709 [2024-07-25 10:58:30.779634] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:23.709 10:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:23.709 "name": "Existed_Raid", 00:16:23.709 "aliases": [ 00:16:23.709 "2118d6c4-d352-47ac-b81b-5081382f639c" 00:16:23.709 ], 00:16:23.709 "product_name": "Raid Volume", 00:16:23.709 "block_size": 512, 00:16:23.709 "num_blocks": 63488, 00:16:23.709 "uuid": "2118d6c4-d352-47ac-b81b-5081382f639c", 00:16:23.709 "assigned_rate_limits": { 00:16:23.709 "rw_ios_per_sec": 0, 00:16:23.709 "rw_mbytes_per_sec": 0, 00:16:23.709 "r_mbytes_per_sec": 0, 00:16:23.709 "w_mbytes_per_sec": 0 00:16:23.709 }, 00:16:23.709 "claimed": false, 00:16:23.709 "zoned": false, 00:16:23.709 "supported_io_types": { 00:16:23.709 "read": true, 00:16:23.709 "write": true, 
00:16:23.709 "unmap": false, 00:16:23.709 "flush": false, 00:16:23.709 "reset": true, 00:16:23.709 "nvme_admin": false, 00:16:23.709 "nvme_io": false, 00:16:23.709 "nvme_io_md": false, 00:16:23.709 "write_zeroes": true, 00:16:23.709 "zcopy": false, 00:16:23.709 "get_zone_info": false, 00:16:23.709 "zone_management": false, 00:16:23.709 "zone_append": false, 00:16:23.709 "compare": false, 00:16:23.709 "compare_and_write": false, 00:16:23.709 "abort": false, 00:16:23.709 "seek_hole": false, 00:16:23.709 "seek_data": false, 00:16:23.709 "copy": false, 00:16:23.709 "nvme_iov_md": false 00:16:23.709 }, 00:16:23.709 "memory_domains": [ 00:16:23.709 { 00:16:23.709 "dma_device_id": "system", 00:16:23.709 "dma_device_type": 1 00:16:23.709 }, 00:16:23.709 { 00:16:23.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.709 "dma_device_type": 2 00:16:23.709 }, 00:16:23.709 { 00:16:23.709 "dma_device_id": "system", 00:16:23.709 "dma_device_type": 1 00:16:23.709 }, 00:16:23.709 { 00:16:23.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.709 "dma_device_type": 2 00:16:23.709 } 00:16:23.709 ], 00:16:23.709 "driver_specific": { 00:16:23.709 "raid": { 00:16:23.709 "uuid": "2118d6c4-d352-47ac-b81b-5081382f639c", 00:16:23.709 "strip_size_kb": 0, 00:16:23.709 "state": "online", 00:16:23.709 "raid_level": "raid1", 00:16:23.709 "superblock": true, 00:16:23.709 "num_base_bdevs": 2, 00:16:23.709 "num_base_bdevs_discovered": 2, 00:16:23.709 "num_base_bdevs_operational": 2, 00:16:23.709 "base_bdevs_list": [ 00:16:23.709 { 00:16:23.709 "name": "BaseBdev1", 00:16:23.709 "uuid": "4a29583c-710b-4531-bd10-bf79a6526ddc", 00:16:23.709 "is_configured": true, 00:16:23.709 "data_offset": 2048, 00:16:23.709 "data_size": 63488 00:16:23.709 }, 00:16:23.710 { 00:16:23.710 "name": "BaseBdev2", 00:16:23.710 "uuid": "fb52ca1c-4797-4f92-9f1c-f446effad7ad", 00:16:23.710 "is_configured": true, 00:16:23.710 "data_offset": 2048, 00:16:23.710 "data_size": 63488 00:16:23.710 } 00:16:23.710 ] 00:16:23.710 } 
00:16:23.710 } 00:16:23.710 }' 00:16:23.710 10:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:23.969 10:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:23.969 BaseBdev2' 00:16:23.969 10:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:23.969 10:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:23.969 10:58:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:23.969 10:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:23.969 "name": "BaseBdev1", 00:16:23.969 "aliases": [ 00:16:23.969 "4a29583c-710b-4531-bd10-bf79a6526ddc" 00:16:23.969 ], 00:16:23.969 "product_name": "Malloc disk", 00:16:23.969 "block_size": 512, 00:16:23.969 "num_blocks": 65536, 00:16:23.969 "uuid": "4a29583c-710b-4531-bd10-bf79a6526ddc", 00:16:23.969 "assigned_rate_limits": { 00:16:23.969 "rw_ios_per_sec": 0, 00:16:23.969 "rw_mbytes_per_sec": 0, 00:16:23.969 "r_mbytes_per_sec": 0, 00:16:23.969 "w_mbytes_per_sec": 0 00:16:23.969 }, 00:16:23.969 "claimed": true, 00:16:23.969 "claim_type": "exclusive_write", 00:16:23.969 "zoned": false, 00:16:23.969 "supported_io_types": { 00:16:23.969 "read": true, 00:16:23.969 "write": true, 00:16:23.969 "unmap": true, 00:16:23.969 "flush": true, 00:16:23.969 "reset": true, 00:16:23.969 "nvme_admin": false, 00:16:23.969 "nvme_io": false, 00:16:23.969 "nvme_io_md": false, 00:16:23.969 "write_zeroes": true, 00:16:23.969 "zcopy": true, 00:16:23.969 "get_zone_info": false, 00:16:23.969 "zone_management": false, 00:16:23.969 "zone_append": false, 00:16:23.969 "compare": false, 00:16:23.969 "compare_and_write": 
false, 00:16:23.969 "abort": true, 00:16:23.969 "seek_hole": false, 00:16:23.969 "seek_data": false, 00:16:23.969 "copy": true, 00:16:23.969 "nvme_iov_md": false 00:16:23.969 }, 00:16:23.969 "memory_domains": [ 00:16:23.969 { 00:16:23.969 "dma_device_id": "system", 00:16:23.969 "dma_device_type": 1 00:16:23.969 }, 00:16:23.969 { 00:16:23.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.969 "dma_device_type": 2 00:16:23.969 } 00:16:23.969 ], 00:16:23.969 "driver_specific": {} 00:16:23.969 }' 00:16:23.969 10:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:24.228 10:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:24.228 10:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:24.228 10:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:24.228 10:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:24.228 10:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:24.228 10:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:24.228 10:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:24.228 10:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:24.228 10:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:24.487 10:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:24.487 10:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:24.487 10:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:24.487 10:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:24.487 10:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:24.745 10:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:24.745 "name": "BaseBdev2", 00:16:24.745 "aliases": [ 00:16:24.745 "fb52ca1c-4797-4f92-9f1c-f446effad7ad" 00:16:24.745 ], 00:16:24.745 "product_name": "Malloc disk", 00:16:24.745 "block_size": 512, 00:16:24.745 "num_blocks": 65536, 00:16:24.745 "uuid": "fb52ca1c-4797-4f92-9f1c-f446effad7ad", 00:16:24.745 "assigned_rate_limits": { 00:16:24.745 "rw_ios_per_sec": 0, 00:16:24.745 "rw_mbytes_per_sec": 0, 00:16:24.745 "r_mbytes_per_sec": 0, 00:16:24.745 "w_mbytes_per_sec": 0 00:16:24.745 }, 00:16:24.745 "claimed": true, 00:16:24.745 "claim_type": "exclusive_write", 00:16:24.745 "zoned": false, 00:16:24.745 "supported_io_types": { 00:16:24.745 "read": true, 00:16:24.745 "write": true, 00:16:24.745 "unmap": true, 00:16:24.745 "flush": true, 00:16:24.745 "reset": true, 00:16:24.745 "nvme_admin": false, 00:16:24.745 "nvme_io": false, 00:16:24.745 "nvme_io_md": false, 00:16:24.745 "write_zeroes": true, 00:16:24.745 "zcopy": true, 00:16:24.745 "get_zone_info": false, 00:16:24.745 "zone_management": false, 00:16:24.745 "zone_append": false, 00:16:24.745 "compare": false, 00:16:24.745 "compare_and_write": false, 00:16:24.745 "abort": true, 00:16:24.745 "seek_hole": false, 00:16:24.745 "seek_data": false, 00:16:24.745 "copy": true, 00:16:24.745 "nvme_iov_md": false 00:16:24.745 }, 00:16:24.745 "memory_domains": [ 00:16:24.745 { 00:16:24.745 "dma_device_id": "system", 00:16:24.745 "dma_device_type": 1 00:16:24.745 }, 00:16:24.745 { 00:16:24.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.745 "dma_device_type": 2 00:16:24.745 } 00:16:24.745 ], 00:16:24.745 "driver_specific": {} 00:16:24.745 }' 00:16:24.745 10:58:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:24.745 10:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:24.745 10:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:24.745 10:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:24.745 10:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:24.745 10:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:24.745 10:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:24.745 10:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:25.004 10:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:25.004 10:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:25.004 10:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:25.004 10:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:25.004 10:58:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:25.264 [2024-07-25 10:58:32.187243] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:25.264 10:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:25.264 10:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:16:25.264 10:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:25.264 10:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:16:25.264 
10:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:16:25.264 10:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:25.264 10:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:25.264 10:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:25.264 10:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:25.264 10:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:25.264 10:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:25.264 10:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:25.264 10:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:25.264 10:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:25.264 10:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:25.264 10:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.264 10:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.523 10:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:25.523 "name": "Existed_Raid", 00:16:25.523 "uuid": "2118d6c4-d352-47ac-b81b-5081382f639c", 00:16:25.523 "strip_size_kb": 0, 00:16:25.523 "state": "online", 00:16:25.523 "raid_level": "raid1", 00:16:25.523 "superblock": true, 00:16:25.523 "num_base_bdevs": 2, 
00:16:25.523 "num_base_bdevs_discovered": 1, 00:16:25.523 "num_base_bdevs_operational": 1, 00:16:25.523 "base_bdevs_list": [ 00:16:25.523 { 00:16:25.523 "name": null, 00:16:25.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.523 "is_configured": false, 00:16:25.523 "data_offset": 2048, 00:16:25.523 "data_size": 63488 00:16:25.523 }, 00:16:25.523 { 00:16:25.523 "name": "BaseBdev2", 00:16:25.523 "uuid": "fb52ca1c-4797-4f92-9f1c-f446effad7ad", 00:16:25.523 "is_configured": true, 00:16:25.523 "data_offset": 2048, 00:16:25.523 "data_size": 63488 00:16:25.523 } 00:16:25.523 ] 00:16:25.523 }' 00:16:25.523 10:58:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:25.523 10:58:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.090 10:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:26.090 10:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:26.090 10:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.090 10:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:26.350 10:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:26.350 10:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:26.350 10:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:26.609 [2024-07-25 10:58:33.476724] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:26.609 [2024-07-25 10:58:33.476834] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:16:26.609 [2024-07-25 10:58:33.615821] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:26.609 [2024-07-25 10:58:33.615877] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:26.609 [2024-07-25 10:58:33.615895] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:26.609 10:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:26.609 10:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:26.609 10:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.609 10:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:26.868 10:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:26.868 10:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:26.868 10:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:16:26.868 10:58:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 3572055 00:16:26.868 10:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 3572055 ']' 00:16:26.868 10:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 3572055 00:16:26.868 10:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:16:26.868 10:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:26.868 10:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o 
comm= 3572055 00:16:26.868 10:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:26.868 10:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:26.868 10:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3572055' 00:16:26.868 killing process with pid 3572055 00:16:26.868 10:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 3572055 00:16:26.868 [2024-07-25 10:58:33.923224] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:26.868 10:58:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 3572055 00:16:26.868 [2024-07-25 10:58:33.947538] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:28.774 10:58:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:16:28.774 00:16:28.774 real 0m11.896s 00:16:28.774 user 0m19.475s 00:16:28.774 sys 0m2.045s 00:16:28.774 10:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:28.774 10:58:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.774 ************************************ 00:16:28.774 END TEST raid_state_function_test_sb 00:16:28.774 ************************************ 00:16:28.774 10:58:35 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:16:28.774 10:58:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:28.774 10:58:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:28.774 10:58:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:28.774 ************************************ 00:16:28.774 START TEST raid_superblock_test 00:16:28.774 ************************************ 00:16:28.774 10:58:35 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2
00:16:28.774 10:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1
00:16:28.774 10:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2
00:16:28.774 10:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=()
00:16:28.774 10:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc
00:16:28.774 10:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=()
00:16:28.774 10:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt
00:16:28.774 10:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=()
00:16:28.774 10:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid
00:16:28.774 10:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1
00:16:28.774 10:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size
00:16:28.774 10:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg
00:16:28.774 10:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid
00:16:28.774 10:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev
00:16:28.774 10:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']'
00:16:28.774 10:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # strip_size=0
00:16:28.774 10:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=3574308
00:16:28.774 10:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 3574308 /var/tmp/spdk-raid.sock
00:16:28.774 10:58:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- #
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
00:16:28.774 10:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 3574308 ']'
00:16:28.774 10:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:16:28.774 10:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:16:28.774 10:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:16:28.774 10:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:16:28.774 10:58:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:16:28.774 [2024-07-25 10:58:35.828004] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:16:28.774 [2024-07-25 10:58:35.828116] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3574308 ]
00:16:29.034 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:29.034 EAL: Requested device 0000:3d:01.0 cannot be used
00:16:29.034 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:29.034 EAL: Requested device 0000:3d:01.1 cannot be used
00:16:29.034 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:29.034 EAL: Requested device 0000:3d:01.2 cannot be used
00:16:29.034 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:29.034 EAL: Requested device 0000:3d:01.3 cannot be used
00:16:29.034 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:29.034 EAL: Requested device 0000:3d:01.4 cannot be used
00:16:29.034 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:29.034 EAL: Requested device 0000:3d:01.5 cannot be used
00:16:29.034 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:29.034 EAL: Requested device 0000:3d:01.6 cannot be used
00:16:29.034 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:29.034 EAL: Requested device 0000:3d:01.7 cannot be used
00:16:29.034 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:29.034 EAL: Requested device 0000:3d:02.0 cannot be used
00:16:29.034 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:29.034 EAL: Requested device 0000:3d:02.1 cannot be used
00:16:29.034 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:29.034 EAL: Requested device 0000:3d:02.2 cannot be used
00:16:29.034 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:29.034 EAL: Requested device 0000:3d:02.3 cannot be used
00:16:29.034 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:29.034 EAL: Requested device 0000:3d:02.4 cannot be used
00:16:29.034 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:29.034 EAL: Requested device 0000:3d:02.5 cannot be used
00:16:29.034 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:29.034 EAL: Requested device 0000:3d:02.6 cannot be used
00:16:29.034 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:29.034 EAL: Requested device 0000:3d:02.7 cannot be used
00:16:29.034 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:29.034 EAL: Requested device 0000:3f:01.0 cannot be used
00:16:29.034 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:29.034 EAL: Requested device 0000:3f:01.1 cannot be used
00:16:29.034 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:29.034 EAL: Requested device 0000:3f:01.2 cannot be used
00:16:29.034 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:29.034 EAL: Requested device 0000:3f:01.3 cannot be used
00:16:29.034 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:29.034 EAL: Requested device 0000:3f:01.4 cannot be used
00:16:29.034 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:29.034 EAL: Requested device 0000:3f:01.5 cannot be used
00:16:29.034 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:29.034 EAL: Requested device 0000:3f:01.6 cannot be used
00:16:29.034 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:29.034 EAL: Requested device 0000:3f:01.7 cannot be used
00:16:29.034 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:29.034 EAL: Requested device 0000:3f:02.0 cannot be used
00:16:29.034 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:29.034 EAL: Requested device 0000:3f:02.1 cannot be used
00:16:29.034
qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:29.034 EAL: Requested device 0000:3f:02.2 cannot be used
00:16:29.034 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:29.034 EAL: Requested device 0000:3f:02.3 cannot be used
00:16:29.034 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:29.034 EAL: Requested device 0000:3f:02.4 cannot be used
00:16:29.034 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:29.034 EAL: Requested device 0000:3f:02.5 cannot be used
00:16:29.034 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:29.034 EAL: Requested device 0000:3f:02.6 cannot be used
00:16:29.034 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:16:29.034 EAL: Requested device 0000:3f:02.7 cannot be used
00:16:29.034 [2024-07-25 10:58:36.054659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:29.293 [2024-07-25 10:58:36.326527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:16:29.552 [2024-07-25 10:58:36.652569] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:29.552 [2024-07-25 10:58:36.652610] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:16:29.811 10:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:16:29.811 10:58:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:16:29.811 10:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 ))
00:16:29.811 10:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs ))
00:16:29.811 10:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1
00:16:29.811 10:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1
00:16:29.811 10:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:29.811 10:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:29.811 10:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:16:29.811 10:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:29.811 10:58:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:30.070 malloc1 00:16:30.070 10:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:30.329 [2024-07-25 10:58:37.304814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:30.329 [2024-07-25 10:58:37.304880] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.329 [2024-07-25 10:58:37.304912] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:16:30.329 [2024-07-25 10:58:37.304928] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.329 [2024-07-25 10:58:37.307719] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.329 [2024-07-25 10:58:37.307756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:30.329 pt1 00:16:30.329 10:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:16:30.329 10:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:16:30.329 10:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:16:30.329 10:58:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:16:30.329 10:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:30.329 10:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:30.329 10:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:16:30.329 10:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:30.329 10:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:30.624 malloc2 00:16:30.624 10:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:30.913 [2024-07-25 10:58:37.800851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:30.913 [2024-07-25 10:58:37.800918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.913 [2024-07-25 10:58:37.800948] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:16:30.913 [2024-07-25 10:58:37.800964] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.913 [2024-07-25 10:58:37.803778] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.913 [2024-07-25 10:58:37.803819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:30.913 pt2 00:16:30.913 10:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:16:30.913 10:58:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:16:30.913 10:58:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:16:30.913 [2024-07-25 10:58:38.017460] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:30.913 [2024-07-25 10:58:38.019791] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:30.913 [2024-07-25 10:58:38.019994] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:30.913 [2024-07-25 10:58:38.020013] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:30.913 [2024-07-25 10:58:38.020403] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:16:30.913 [2024-07-25 10:58:38.020646] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:30.913 [2024-07-25 10:58:38.020665] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:30.913 [2024-07-25 10:58:38.020893] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.172 10:58:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:31.172 10:58:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:31.172 10:58:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:31.172 10:58:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:31.172 10:58:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:31.172 10:58:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:31.172 10:58:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:16:31.172 10:58:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:31.172 10:58:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:31.172 10:58:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:31.172 10:58:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.172 10:58:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.172 10:58:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:31.172 "name": "raid_bdev1", 00:16:31.172 "uuid": "ecb8ccd1-2969-42cb-808b-a45dcabb39ca", 00:16:31.172 "strip_size_kb": 0, 00:16:31.172 "state": "online", 00:16:31.172 "raid_level": "raid1", 00:16:31.172 "superblock": true, 00:16:31.172 "num_base_bdevs": 2, 00:16:31.172 "num_base_bdevs_discovered": 2, 00:16:31.172 "num_base_bdevs_operational": 2, 00:16:31.172 "base_bdevs_list": [ 00:16:31.172 { 00:16:31.172 "name": "pt1", 00:16:31.172 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:31.172 "is_configured": true, 00:16:31.172 "data_offset": 2048, 00:16:31.172 "data_size": 63488 00:16:31.172 }, 00:16:31.172 { 00:16:31.172 "name": "pt2", 00:16:31.172 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:31.172 "is_configured": true, 00:16:31.172 "data_offset": 2048, 00:16:31.172 "data_size": 63488 00:16:31.172 } 00:16:31.172 ] 00:16:31.172 }' 00:16:31.172 10:58:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:31.172 10:58:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.739 10:58:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:16:31.739 10:58:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:31.739 10:58:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:31.739 10:58:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:31.739 10:58:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:31.739 10:58:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:31.739 10:58:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:31.739 10:58:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:31.997 [2024-07-25 10:58:39.052560] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:31.997 10:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:31.997 "name": "raid_bdev1", 00:16:31.997 "aliases": [ 00:16:31.997 "ecb8ccd1-2969-42cb-808b-a45dcabb39ca" 00:16:31.997 ], 00:16:31.997 "product_name": "Raid Volume", 00:16:31.997 "block_size": 512, 00:16:31.997 "num_blocks": 63488, 00:16:31.998 "uuid": "ecb8ccd1-2969-42cb-808b-a45dcabb39ca", 00:16:31.998 "assigned_rate_limits": { 00:16:31.998 "rw_ios_per_sec": 0, 00:16:31.998 "rw_mbytes_per_sec": 0, 00:16:31.998 "r_mbytes_per_sec": 0, 00:16:31.998 "w_mbytes_per_sec": 0 00:16:31.998 }, 00:16:31.998 "claimed": false, 00:16:31.998 "zoned": false, 00:16:31.998 "supported_io_types": { 00:16:31.998 "read": true, 00:16:31.998 "write": true, 00:16:31.998 "unmap": false, 00:16:31.998 "flush": false, 00:16:31.998 "reset": true, 00:16:31.998 "nvme_admin": false, 00:16:31.998 "nvme_io": false, 00:16:31.998 "nvme_io_md": false, 00:16:31.998 "write_zeroes": true, 00:16:31.998 "zcopy": false, 00:16:31.998 "get_zone_info": false, 00:16:31.998 "zone_management": false, 00:16:31.998 "zone_append": false, 
00:16:31.998 "compare": false, 00:16:31.998 "compare_and_write": false, 00:16:31.998 "abort": false, 00:16:31.998 "seek_hole": false, 00:16:31.998 "seek_data": false, 00:16:31.998 "copy": false, 00:16:31.998 "nvme_iov_md": false 00:16:31.998 }, 00:16:31.998 "memory_domains": [ 00:16:31.998 { 00:16:31.998 "dma_device_id": "system", 00:16:31.998 "dma_device_type": 1 00:16:31.998 }, 00:16:31.998 { 00:16:31.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.998 "dma_device_type": 2 00:16:31.998 }, 00:16:31.998 { 00:16:31.998 "dma_device_id": "system", 00:16:31.998 "dma_device_type": 1 00:16:31.998 }, 00:16:31.998 { 00:16:31.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.998 "dma_device_type": 2 00:16:31.998 } 00:16:31.998 ], 00:16:31.998 "driver_specific": { 00:16:31.998 "raid": { 00:16:31.998 "uuid": "ecb8ccd1-2969-42cb-808b-a45dcabb39ca", 00:16:31.998 "strip_size_kb": 0, 00:16:31.998 "state": "online", 00:16:31.998 "raid_level": "raid1", 00:16:31.998 "superblock": true, 00:16:31.998 "num_base_bdevs": 2, 00:16:31.998 "num_base_bdevs_discovered": 2, 00:16:31.998 "num_base_bdevs_operational": 2, 00:16:31.998 "base_bdevs_list": [ 00:16:31.998 { 00:16:31.998 "name": "pt1", 00:16:31.998 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:31.998 "is_configured": true, 00:16:31.998 "data_offset": 2048, 00:16:31.998 "data_size": 63488 00:16:31.998 }, 00:16:31.998 { 00:16:31.998 "name": "pt2", 00:16:31.998 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:31.998 "is_configured": true, 00:16:31.998 "data_offset": 2048, 00:16:31.998 "data_size": 63488 00:16:31.998 } 00:16:31.998 ] 00:16:31.998 } 00:16:31.998 } 00:16:31.998 }' 00:16:31.998 10:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:32.256 10:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:32.256 pt2' 00:16:32.256 10:58:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:32.256 10:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:32.256 10:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:32.256 10:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:32.256 "name": "pt1", 00:16:32.256 "aliases": [ 00:16:32.256 "00000000-0000-0000-0000-000000000001" 00:16:32.256 ], 00:16:32.256 "product_name": "passthru", 00:16:32.256 "block_size": 512, 00:16:32.256 "num_blocks": 65536, 00:16:32.256 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:32.256 "assigned_rate_limits": { 00:16:32.256 "rw_ios_per_sec": 0, 00:16:32.256 "rw_mbytes_per_sec": 0, 00:16:32.256 "r_mbytes_per_sec": 0, 00:16:32.256 "w_mbytes_per_sec": 0 00:16:32.256 }, 00:16:32.256 "claimed": true, 00:16:32.256 "claim_type": "exclusive_write", 00:16:32.256 "zoned": false, 00:16:32.256 "supported_io_types": { 00:16:32.256 "read": true, 00:16:32.256 "write": true, 00:16:32.256 "unmap": true, 00:16:32.256 "flush": true, 00:16:32.256 "reset": true, 00:16:32.256 "nvme_admin": false, 00:16:32.256 "nvme_io": false, 00:16:32.256 "nvme_io_md": false, 00:16:32.256 "write_zeroes": true, 00:16:32.256 "zcopy": true, 00:16:32.256 "get_zone_info": false, 00:16:32.256 "zone_management": false, 00:16:32.256 "zone_append": false, 00:16:32.256 "compare": false, 00:16:32.256 "compare_and_write": false, 00:16:32.256 "abort": true, 00:16:32.256 "seek_hole": false, 00:16:32.256 "seek_data": false, 00:16:32.256 "copy": true, 00:16:32.256 "nvme_iov_md": false 00:16:32.256 }, 00:16:32.256 "memory_domains": [ 00:16:32.256 { 00:16:32.256 "dma_device_id": "system", 00:16:32.256 "dma_device_type": 1 00:16:32.256 }, 00:16:32.256 { 00:16:32.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.256 
"dma_device_type": 2 00:16:32.256 } 00:16:32.256 ], 00:16:32.256 "driver_specific": { 00:16:32.256 "passthru": { 00:16:32.256 "name": "pt1", 00:16:32.256 "base_bdev_name": "malloc1" 00:16:32.256 } 00:16:32.256 } 00:16:32.256 }' 00:16:32.256 10:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:32.515 10:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:32.515 10:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:32.515 10:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:32.515 10:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:32.515 10:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:32.515 10:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:32.515 10:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:32.515 10:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:32.515 10:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:32.774 10:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:32.774 10:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:32.774 10:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:32.774 10:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:32.774 10:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:33.033 10:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:33.033 "name": "pt2", 00:16:33.033 "aliases": [ 00:16:33.033 
"00000000-0000-0000-0000-000000000002" 00:16:33.033 ], 00:16:33.033 "product_name": "passthru", 00:16:33.033 "block_size": 512, 00:16:33.033 "num_blocks": 65536, 00:16:33.033 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:33.033 "assigned_rate_limits": { 00:16:33.033 "rw_ios_per_sec": 0, 00:16:33.033 "rw_mbytes_per_sec": 0, 00:16:33.033 "r_mbytes_per_sec": 0, 00:16:33.033 "w_mbytes_per_sec": 0 00:16:33.033 }, 00:16:33.033 "claimed": true, 00:16:33.033 "claim_type": "exclusive_write", 00:16:33.033 "zoned": false, 00:16:33.033 "supported_io_types": { 00:16:33.033 "read": true, 00:16:33.033 "write": true, 00:16:33.033 "unmap": true, 00:16:33.033 "flush": true, 00:16:33.033 "reset": true, 00:16:33.033 "nvme_admin": false, 00:16:33.033 "nvme_io": false, 00:16:33.033 "nvme_io_md": false, 00:16:33.033 "write_zeroes": true, 00:16:33.033 "zcopy": true, 00:16:33.033 "get_zone_info": false, 00:16:33.033 "zone_management": false, 00:16:33.033 "zone_append": false, 00:16:33.033 "compare": false, 00:16:33.033 "compare_and_write": false, 00:16:33.033 "abort": true, 00:16:33.033 "seek_hole": false, 00:16:33.033 "seek_data": false, 00:16:33.033 "copy": true, 00:16:33.033 "nvme_iov_md": false 00:16:33.033 }, 00:16:33.033 "memory_domains": [ 00:16:33.033 { 00:16:33.033 "dma_device_id": "system", 00:16:33.033 "dma_device_type": 1 00:16:33.033 }, 00:16:33.033 { 00:16:33.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.033 "dma_device_type": 2 00:16:33.033 } 00:16:33.033 ], 00:16:33.033 "driver_specific": { 00:16:33.033 "passthru": { 00:16:33.033 "name": "pt2", 00:16:33.033 "base_bdev_name": "malloc2" 00:16:33.033 } 00:16:33.033 } 00:16:33.033 }' 00:16:33.033 10:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:33.033 10:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:33.033 10:58:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:33.033 10:58:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:33.033 10:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:33.033 10:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:33.033 10:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:33.033 10:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:33.292 10:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:33.292 10:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:33.292 10:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:33.292 10:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:33.292 10:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:33.292 10:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:16:33.292 [2024-07-25 10:58:40.400280] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:33.551 10:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=ecb8ccd1-2969-42cb-808b-a45dcabb39ca 00:16:33.551 10:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z ecb8ccd1-2969-42cb-808b-a45dcabb39ca ']' 00:16:33.551 10:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:33.551 [2024-07-25 10:58:40.632518] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:33.551 [2024-07-25 10:58:40.632555] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from 
online to offline 00:16:33.551 [2024-07-25 10:58:40.632650] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:33.551 [2024-07-25 10:58:40.632726] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:33.551 [2024-07-25 10:58:40.632752] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:33.551 10:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.551 10:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:16:33.810 10:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:16:33.810 10:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:16:33.810 10:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:16:33.810 10:58:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:34.069 10:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:16:34.069 10:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:34.328 10:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:34.328 10:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:34.587 10:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:16:34.587 
10:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:34.587 10:58:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:16:34.587 10:58:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:34.587 10:58:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:34.587 10:58:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:34.587 10:58:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:34.587 10:58:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:34.587 10:58:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:34.587 10:58:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:34.587 10:58:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:34.587 10:58:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:16:34.587 10:58:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 
00:16:34.846 [2024-07-25 10:58:41.763537] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:34.846 [2024-07-25 10:58:41.765872] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:34.846 [2024-07-25 10:58:41.765945] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:34.846 [2024-07-25 10:58:41.766003] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:34.846 [2024-07-25 10:58:41.766027] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:34.846 [2024-07-25 10:58:41.766044] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:34.846 request: 00:16:34.846 { 00:16:34.846 "name": "raid_bdev1", 00:16:34.846 "raid_level": "raid1", 00:16:34.846 "base_bdevs": [ 00:16:34.846 "malloc1", 00:16:34.846 "malloc2" 00:16:34.846 ], 00:16:34.846 "superblock": false, 00:16:34.846 "method": "bdev_raid_create", 00:16:34.846 "req_id": 1 00:16:34.846 } 00:16:34.846 Got JSON-RPC error response 00:16:34.846 response: 00:16:34.846 { 00:16:34.846 "code": -17, 00:16:34.846 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:34.846 } 00:16:34.846 10:58:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:16:34.846 10:58:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:34.846 10:58:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:34.846 10:58:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:34.846 10:58:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.846 10:58:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:16:35.105 10:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:16:35.105 10:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:16:35.105 10:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:35.105 [2024-07-25 10:58:42.216678] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:35.105 [2024-07-25 10:58:42.216755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.105 [2024-07-25 10:58:42.216779] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:16:35.105 [2024-07-25 10:58:42.216797] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.105 [2024-07-25 10:58:42.219632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.105 [2024-07-25 10:58:42.219670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:35.105 [2024-07-25 10:58:42.219768] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:35.105 [2024-07-25 10:58:42.219867] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:35.105 pt1 00:16:35.364 10:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:35.364 10:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:35.364 10:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:35.364 10:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:35.364 10:58:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:35.364 10:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:35.364 10:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:35.364 10:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:35.364 10:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:35.364 10:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:35.364 10:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.364 10:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.364 10:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:35.364 "name": "raid_bdev1", 00:16:35.364 "uuid": "ecb8ccd1-2969-42cb-808b-a45dcabb39ca", 00:16:35.364 "strip_size_kb": 0, 00:16:35.364 "state": "configuring", 00:16:35.364 "raid_level": "raid1", 00:16:35.364 "superblock": true, 00:16:35.364 "num_base_bdevs": 2, 00:16:35.364 "num_base_bdevs_discovered": 1, 00:16:35.364 "num_base_bdevs_operational": 2, 00:16:35.364 "base_bdevs_list": [ 00:16:35.364 { 00:16:35.364 "name": "pt1", 00:16:35.364 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:35.364 "is_configured": true, 00:16:35.364 "data_offset": 2048, 00:16:35.364 "data_size": 63488 00:16:35.364 }, 00:16:35.364 { 00:16:35.364 "name": null, 00:16:35.364 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:35.364 "is_configured": false, 00:16:35.364 "data_offset": 2048, 00:16:35.364 "data_size": 63488 00:16:35.364 } 00:16:35.364 ] 00:16:35.364 }' 00:16:35.364 10:58:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:16:35.364 10:58:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.931 10:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:16:35.931 10:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:16:35.931 10:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:16:35.931 10:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:36.190 [2024-07-25 10:58:43.235422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:36.190 [2024-07-25 10:58:43.235501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.190 [2024-07-25 10:58:43.235528] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:16:36.190 [2024-07-25 10:58:43.235547] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.190 [2024-07-25 10:58:43.236133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.190 [2024-07-25 10:58:43.236179] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:36.190 [2024-07-25 10:58:43.236284] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:36.190 [2024-07-25 10:58:43.236327] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:36.190 [2024-07-25 10:58:43.236509] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:36.190 [2024-07-25 10:58:43.236528] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:36.190 [2024-07-25 10:58:43.236845] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000010640 00:16:36.190 [2024-07-25 10:58:43.237085] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:36.190 [2024-07-25 10:58:43.237099] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:36.190 [2024-07-25 10:58:43.237312] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:36.190 pt2 00:16:36.190 10:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:16:36.190 10:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:16:36.190 10:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:36.190 10:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:36.190 10:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:36.190 10:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:36.190 10:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:36.190 10:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:36.190 10:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:36.190 10:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:36.190 10:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:36.190 10:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:36.190 10:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.190 10:58:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.447 10:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:36.447 "name": "raid_bdev1", 00:16:36.447 "uuid": "ecb8ccd1-2969-42cb-808b-a45dcabb39ca", 00:16:36.447 "strip_size_kb": 0, 00:16:36.447 "state": "online", 00:16:36.447 "raid_level": "raid1", 00:16:36.447 "superblock": true, 00:16:36.447 "num_base_bdevs": 2, 00:16:36.447 "num_base_bdevs_discovered": 2, 00:16:36.447 "num_base_bdevs_operational": 2, 00:16:36.447 "base_bdevs_list": [ 00:16:36.447 { 00:16:36.447 "name": "pt1", 00:16:36.447 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:36.447 "is_configured": true, 00:16:36.447 "data_offset": 2048, 00:16:36.447 "data_size": 63488 00:16:36.447 }, 00:16:36.447 { 00:16:36.447 "name": "pt2", 00:16:36.447 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:36.447 "is_configured": true, 00:16:36.447 "data_offset": 2048, 00:16:36.447 "data_size": 63488 00:16:36.447 } 00:16:36.447 ] 00:16:36.447 }' 00:16:36.447 10:58:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:36.447 10:58:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.014 10:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:16:37.014 10:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:37.014 10:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:37.014 10:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:37.014 10:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:37.014 10:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:37.014 10:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:37.014 10:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:37.273 [2024-07-25 10:58:44.294637] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:37.273 10:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:37.273 "name": "raid_bdev1", 00:16:37.273 "aliases": [ 00:16:37.273 "ecb8ccd1-2969-42cb-808b-a45dcabb39ca" 00:16:37.273 ], 00:16:37.273 "product_name": "Raid Volume", 00:16:37.273 "block_size": 512, 00:16:37.273 "num_blocks": 63488, 00:16:37.273 "uuid": "ecb8ccd1-2969-42cb-808b-a45dcabb39ca", 00:16:37.273 "assigned_rate_limits": { 00:16:37.273 "rw_ios_per_sec": 0, 00:16:37.273 "rw_mbytes_per_sec": 0, 00:16:37.273 "r_mbytes_per_sec": 0, 00:16:37.273 "w_mbytes_per_sec": 0 00:16:37.273 }, 00:16:37.273 "claimed": false, 00:16:37.273 "zoned": false, 00:16:37.273 "supported_io_types": { 00:16:37.273 "read": true, 00:16:37.273 "write": true, 00:16:37.273 "unmap": false, 00:16:37.273 "flush": false, 00:16:37.273 "reset": true, 00:16:37.273 "nvme_admin": false, 00:16:37.273 "nvme_io": false, 00:16:37.273 "nvme_io_md": false, 00:16:37.273 "write_zeroes": true, 00:16:37.273 "zcopy": false, 00:16:37.273 "get_zone_info": false, 00:16:37.273 "zone_management": false, 00:16:37.273 "zone_append": false, 00:16:37.273 "compare": false, 00:16:37.273 "compare_and_write": false, 00:16:37.273 "abort": false, 00:16:37.273 "seek_hole": false, 00:16:37.273 "seek_data": false, 00:16:37.273 "copy": false, 00:16:37.273 "nvme_iov_md": false 00:16:37.273 }, 00:16:37.273 "memory_domains": [ 00:16:37.273 { 00:16:37.273 "dma_device_id": "system", 00:16:37.273 "dma_device_type": 1 00:16:37.273 }, 00:16:37.273 { 00:16:37.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.273 "dma_device_type": 2 00:16:37.273 }, 00:16:37.273 { 00:16:37.273 "dma_device_id": "system", 
00:16:37.273 "dma_device_type": 1 00:16:37.273 }, 00:16:37.273 { 00:16:37.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.273 "dma_device_type": 2 00:16:37.273 } 00:16:37.273 ], 00:16:37.273 "driver_specific": { 00:16:37.273 "raid": { 00:16:37.273 "uuid": "ecb8ccd1-2969-42cb-808b-a45dcabb39ca", 00:16:37.273 "strip_size_kb": 0, 00:16:37.273 "state": "online", 00:16:37.273 "raid_level": "raid1", 00:16:37.273 "superblock": true, 00:16:37.273 "num_base_bdevs": 2, 00:16:37.273 "num_base_bdevs_discovered": 2, 00:16:37.273 "num_base_bdevs_operational": 2, 00:16:37.273 "base_bdevs_list": [ 00:16:37.273 { 00:16:37.273 "name": "pt1", 00:16:37.273 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:37.273 "is_configured": true, 00:16:37.273 "data_offset": 2048, 00:16:37.273 "data_size": 63488 00:16:37.273 }, 00:16:37.273 { 00:16:37.273 "name": "pt2", 00:16:37.273 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:37.273 "is_configured": true, 00:16:37.273 "data_offset": 2048, 00:16:37.273 "data_size": 63488 00:16:37.273 } 00:16:37.273 ] 00:16:37.273 } 00:16:37.273 } 00:16:37.273 }' 00:16:37.273 10:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:37.273 10:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:37.273 pt2' 00:16:37.273 10:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:37.273 10:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:37.273 10:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:37.532 10:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:37.532 "name": "pt1", 00:16:37.532 "aliases": [ 00:16:37.532 "00000000-0000-0000-0000-000000000001" 
00:16:37.532 ], 00:16:37.532 "product_name": "passthru", 00:16:37.532 "block_size": 512, 00:16:37.532 "num_blocks": 65536, 00:16:37.532 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:37.532 "assigned_rate_limits": { 00:16:37.532 "rw_ios_per_sec": 0, 00:16:37.532 "rw_mbytes_per_sec": 0, 00:16:37.532 "r_mbytes_per_sec": 0, 00:16:37.532 "w_mbytes_per_sec": 0 00:16:37.532 }, 00:16:37.532 "claimed": true, 00:16:37.532 "claim_type": "exclusive_write", 00:16:37.532 "zoned": false, 00:16:37.532 "supported_io_types": { 00:16:37.532 "read": true, 00:16:37.532 "write": true, 00:16:37.532 "unmap": true, 00:16:37.532 "flush": true, 00:16:37.532 "reset": true, 00:16:37.532 "nvme_admin": false, 00:16:37.532 "nvme_io": false, 00:16:37.532 "nvme_io_md": false, 00:16:37.532 "write_zeroes": true, 00:16:37.532 "zcopy": true, 00:16:37.532 "get_zone_info": false, 00:16:37.532 "zone_management": false, 00:16:37.532 "zone_append": false, 00:16:37.532 "compare": false, 00:16:37.532 "compare_and_write": false, 00:16:37.532 "abort": true, 00:16:37.532 "seek_hole": false, 00:16:37.532 "seek_data": false, 00:16:37.532 "copy": true, 00:16:37.532 "nvme_iov_md": false 00:16:37.532 }, 00:16:37.532 "memory_domains": [ 00:16:37.532 { 00:16:37.532 "dma_device_id": "system", 00:16:37.532 "dma_device_type": 1 00:16:37.532 }, 00:16:37.532 { 00:16:37.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.532 "dma_device_type": 2 00:16:37.532 } 00:16:37.532 ], 00:16:37.533 "driver_specific": { 00:16:37.533 "passthru": { 00:16:37.533 "name": "pt1", 00:16:37.533 "base_bdev_name": "malloc1" 00:16:37.533 } 00:16:37.533 } 00:16:37.533 }' 00:16:37.533 10:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:37.533 10:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:37.791 10:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:37.791 10:58:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:37.791 10:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:37.791 10:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:37.791 10:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:37.791 10:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:37.791 10:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:37.791 10:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:37.791 10:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:38.050 10:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:38.050 10:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:38.050 10:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:38.050 10:58:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:38.050 10:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:38.050 "name": "pt2", 00:16:38.050 "aliases": [ 00:16:38.050 "00000000-0000-0000-0000-000000000002" 00:16:38.050 ], 00:16:38.050 "product_name": "passthru", 00:16:38.050 "block_size": 512, 00:16:38.050 "num_blocks": 65536, 00:16:38.050 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:38.050 "assigned_rate_limits": { 00:16:38.050 "rw_ios_per_sec": 0, 00:16:38.050 "rw_mbytes_per_sec": 0, 00:16:38.050 "r_mbytes_per_sec": 0, 00:16:38.050 "w_mbytes_per_sec": 0 00:16:38.050 }, 00:16:38.050 "claimed": true, 00:16:38.050 "claim_type": "exclusive_write", 00:16:38.050 "zoned": false, 00:16:38.050 "supported_io_types": { 00:16:38.050 "read": true, 
00:16:38.050 "write": true, 00:16:38.050 "unmap": true, 00:16:38.050 "flush": true, 00:16:38.050 "reset": true, 00:16:38.050 "nvme_admin": false, 00:16:38.050 "nvme_io": false, 00:16:38.050 "nvme_io_md": false, 00:16:38.050 "write_zeroes": true, 00:16:38.050 "zcopy": true, 00:16:38.050 "get_zone_info": false, 00:16:38.050 "zone_management": false, 00:16:38.050 "zone_append": false, 00:16:38.050 "compare": false, 00:16:38.050 "compare_and_write": false, 00:16:38.050 "abort": true, 00:16:38.050 "seek_hole": false, 00:16:38.050 "seek_data": false, 00:16:38.050 "copy": true, 00:16:38.050 "nvme_iov_md": false 00:16:38.050 }, 00:16:38.050 "memory_domains": [ 00:16:38.050 { 00:16:38.050 "dma_device_id": "system", 00:16:38.050 "dma_device_type": 1 00:16:38.050 }, 00:16:38.050 { 00:16:38.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.050 "dma_device_type": 2 00:16:38.050 } 00:16:38.050 ], 00:16:38.050 "driver_specific": { 00:16:38.050 "passthru": { 00:16:38.050 "name": "pt2", 00:16:38.050 "base_bdev_name": "malloc2" 00:16:38.050 } 00:16:38.050 } 00:16:38.050 }' 00:16:38.050 10:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:38.309 10:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:38.309 10:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:38.309 10:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:38.309 10:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:38.309 10:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:38.309 10:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:38.309 10:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:38.309 10:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:38.309 10:58:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:38.568 10:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:38.568 10:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:38.568 10:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:38.568 10:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:16:38.827 [2024-07-25 10:58:45.698421] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:38.827 10:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' ecb8ccd1-2969-42cb-808b-a45dcabb39ca '!=' ecb8ccd1-2969-42cb-808b-a45dcabb39ca ']' 00:16:38.827 10:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:16:38.827 10:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:38.827 10:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:38.827 10:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@508 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:38.827 [2024-07-25 10:58:45.930724] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:39.086 10:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:39.086 10:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:39.086 10:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:39.086 10:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:39.086 10:58:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:39.086 10:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:39.086 10:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:39.086 10:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:39.086 10:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:39.086 10:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:39.086 10:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.086 10:58:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.086 10:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:39.086 "name": "raid_bdev1", 00:16:39.086 "uuid": "ecb8ccd1-2969-42cb-808b-a45dcabb39ca", 00:16:39.086 "strip_size_kb": 0, 00:16:39.086 "state": "online", 00:16:39.086 "raid_level": "raid1", 00:16:39.086 "superblock": true, 00:16:39.086 "num_base_bdevs": 2, 00:16:39.086 "num_base_bdevs_discovered": 1, 00:16:39.086 "num_base_bdevs_operational": 1, 00:16:39.086 "base_bdevs_list": [ 00:16:39.086 { 00:16:39.086 "name": null, 00:16:39.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.086 "is_configured": false, 00:16:39.086 "data_offset": 2048, 00:16:39.086 "data_size": 63488 00:16:39.086 }, 00:16:39.086 { 00:16:39.086 "name": "pt2", 00:16:39.086 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:39.086 "is_configured": true, 00:16:39.086 "data_offset": 2048, 00:16:39.086 "data_size": 63488 00:16:39.086 } 00:16:39.086 ] 00:16:39.086 }' 00:16:39.086 10:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:39.086 10:58:46 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.654 10:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:39.913 [2024-07-25 10:58:46.949492] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:39.913 [2024-07-25 10:58:46.949528] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:39.913 [2024-07-25 10:58:46.949620] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:39.913 [2024-07-25 10:58:46.949678] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:39.913 [2024-07-25 10:58:46.949697] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:39.913 10:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.913 10:58:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:16:40.172 10:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:16:40.172 10:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:16:40.172 10:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:16:40.172 10:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:16:40.172 10:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:40.431 10:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:16:40.431 10:58:47 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:16:40.431 10:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:16:40.431 10:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:16:40.431 10:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=1 00:16:40.431 10:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:40.690 [2024-07-25 10:58:47.627277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:40.690 [2024-07-25 10:58:47.627351] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.690 [2024-07-25 10:58:47.627376] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:16:40.690 [2024-07-25 10:58:47.627393] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.690 [2024-07-25 10:58:47.630219] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.690 [2024-07-25 10:58:47.630259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:40.690 [2024-07-25 10:58:47.630355] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:40.690 [2024-07-25 10:58:47.630424] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:40.690 [2024-07-25 10:58:47.630576] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:40.690 [2024-07-25 10:58:47.630594] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:40.690 [2024-07-25 10:58:47.630903] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:16:40.690 [2024-07-25 10:58:47.631131] 
bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:40.690 [2024-07-25 10:58:47.631154] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:40.690 [2024-07-25 10:58:47.631369] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.690 pt2 00:16:40.690 10:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:40.690 10:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:40.690 10:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:40.690 10:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:40.690 10:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:40.690 10:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:40.690 10:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:40.690 10:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:40.690 10:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:40.690 10:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:40.690 10:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.690 10:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.949 10:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:40.949 "name": "raid_bdev1", 00:16:40.949 "uuid": "ecb8ccd1-2969-42cb-808b-a45dcabb39ca", 
00:16:40.949 "strip_size_kb": 0, 00:16:40.949 "state": "online", 00:16:40.949 "raid_level": "raid1", 00:16:40.949 "superblock": true, 00:16:40.949 "num_base_bdevs": 2, 00:16:40.949 "num_base_bdevs_discovered": 1, 00:16:40.949 "num_base_bdevs_operational": 1, 00:16:40.949 "base_bdevs_list": [ 00:16:40.949 { 00:16:40.949 "name": null, 00:16:40.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.949 "is_configured": false, 00:16:40.949 "data_offset": 2048, 00:16:40.949 "data_size": 63488 00:16:40.949 }, 00:16:40.949 { 00:16:40.949 "name": "pt2", 00:16:40.949 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:40.949 "is_configured": true, 00:16:40.949 "data_offset": 2048, 00:16:40.949 "data_size": 63488 00:16:40.949 } 00:16:40.949 ] 00:16:40.949 }' 00:16:40.949 10:58:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:40.949 10:58:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.517 10:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:41.517 [2024-07-25 10:58:48.585883] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:41.517 [2024-07-25 10:58:48.585922] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:41.517 [2024-07-25 10:58:48.586014] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:41.517 [2024-07-25 10:58:48.586079] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:41.517 [2024-07-25 10:58:48.586095] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:41.517 10:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:16:41.517 10:58:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@542 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.776 10:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:16:41.776 10:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:16:41.776 10:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 2 -gt 2 ']' 00:16:41.776 10:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:42.036 [2024-07-25 10:58:48.926779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:42.036 [2024-07-25 10:58:48.926844] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.036 [2024-07-25 10:58:48.926872] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041d80 00:16:42.036 [2024-07-25 10:58:48.926887] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.036 [2024-07-25 10:58:48.929713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.036 [2024-07-25 10:58:48.929749] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:42.036 [2024-07-25 10:58:48.929845] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:42.036 [2024-07-25 10:58:48.929933] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:42.036 [2024-07-25 10:58:48.930147] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:42.036 [2024-07-25 10:58:48.930165] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:42.036 [2024-07-25 10:58:48.930192] 
bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:42.036 [2024-07-25 10:58:48.930266] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:42.036 [2024-07-25 10:58:48.930351] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:42.036 [2024-07-25 10:58:48.930366] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:42.036 [2024-07-25 10:58:48.930669] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:16:42.036 [2024-07-25 10:58:48.930884] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:42.036 [2024-07-25 10:58:48.930902] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:42.036 [2024-07-25 10:58:48.931147] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.036 pt1 00:16:42.036 10:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 2 -gt 2 ']' 00:16:42.036 10:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:42.036 10:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:42.036 10:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:42.036 10:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:42.036 10:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:42.036 10:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:42.036 10:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:42.036 10:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:16:42.036 10:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:42.036 10:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:42.036 10:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:42.036 10:58:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.036 10:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:42.036 "name": "raid_bdev1", 00:16:42.036 "uuid": "ecb8ccd1-2969-42cb-808b-a45dcabb39ca", 00:16:42.036 "strip_size_kb": 0, 00:16:42.036 "state": "online", 00:16:42.036 "raid_level": "raid1", 00:16:42.036 "superblock": true, 00:16:42.036 "num_base_bdevs": 2, 00:16:42.036 "num_base_bdevs_discovered": 1, 00:16:42.036 "num_base_bdevs_operational": 1, 00:16:42.036 "base_bdevs_list": [ 00:16:42.036 { 00:16:42.036 "name": null, 00:16:42.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.036 "is_configured": false, 00:16:42.036 "data_offset": 2048, 00:16:42.036 "data_size": 63488 00:16:42.036 }, 00:16:42.036 { 00:16:42.036 "name": "pt2", 00:16:42.036 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:42.036 "is_configured": true, 00:16:42.036 "data_offset": 2048, 00:16:42.036 "data_size": 63488 00:16:42.036 } 00:16:42.036 ] 00:16:42.036 }' 00:16:42.036 10:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:42.036 10:58:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.602 10:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:16:42.602 10:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # jq 
-r '.[].base_bdevs_list[0].is_configured' 00:16:42.860 10:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:16:42.860 10:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:42.860 10:58:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:16:43.119 [2024-07-25 10:58:50.134656] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:43.119 10:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' ecb8ccd1-2969-42cb-808b-a45dcabb39ca '!=' ecb8ccd1-2969-42cb-808b-a45dcabb39ca ']' 00:16:43.119 10:58:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 3574308 00:16:43.119 10:58:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 3574308 ']' 00:16:43.119 10:58:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 3574308 00:16:43.119 10:58:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:16:43.119 10:58:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:43.119 10:58:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3574308 00:16:43.119 10:58:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:43.119 10:58:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:43.119 10:58:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3574308' 00:16:43.119 killing process with pid 3574308 00:16:43.119 10:58:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 3574308 00:16:43.119 [2024-07-25 10:58:50.211257] bdev_raid.c:1374:raid_bdev_fini_start: 
*DEBUG*: raid_bdev_fini_start 00:16:43.119 [2024-07-25 10:58:50.211361] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:43.119 10:58:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 3574308 00:16:43.119 [2024-07-25 10:58:50.211419] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:43.119 [2024-07-25 10:58:50.211439] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:43.378 [2024-07-25 10:58:50.405614] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:45.336 10:58:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:16:45.336 00:16:45.336 real 0m16.389s 00:16:45.336 user 0m28.001s 00:16:45.336 sys 0m2.819s 00:16:45.336 10:58:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:45.336 10:58:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.336 ************************************ 00:16:45.336 END TEST raid_superblock_test 00:16:45.336 ************************************ 00:16:45.336 10:58:52 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:16:45.336 10:58:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:45.336 10:58:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:45.336 10:58:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:45.336 ************************************ 00:16:45.336 START TEST raid_read_error_test 00:16:45.336 ************************************ 00:16:45.336 10:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:16:45.336 10:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:16:45.336 10:58:52 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:16:45.336 10:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:16:45.336 10:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:16:45.336 10:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:45.336 10:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:16:45.336 10:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:45.336 10:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:45.336 10:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:16:45.336 10:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:45.336 10:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:45.336 10:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:45.336 10:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:16:45.336 10:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:16:45.336 10:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:16:45.336 10:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:16:45.336 10:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:16:45.336 10:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:16:45.336 10:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:16:45.336 10:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:16:45.336 10:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p 
/raidtest 00:16:45.336 10:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.KQYcG6qwRw 00:16:45.336 10:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=3577356 00:16:45.336 10:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 3577356 /var/tmp/spdk-raid.sock 00:16:45.336 10:58:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:45.336 10:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 3577356 ']' 00:16:45.336 10:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:45.336 10:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:45.336 10:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:45.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:45.336 10:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:45.336 10:58:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.336 [2024-07-25 10:58:52.316845] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:16:45.336 [2024-07-25 10:58:52.316964] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3577356 ] 00:16:45.601 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.601 EAL: Requested device 0000:3d:01.0 cannot be used 00:16:45.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.602 EAL: Requested device 0000:3d:01.1 cannot be used 00:16:45.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.602 EAL: Requested device 0000:3d:01.2 cannot be used 00:16:45.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.602 EAL: Requested device 0000:3d:01.3 cannot be used 00:16:45.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.602 EAL: Requested device 0000:3d:01.4 cannot be used 00:16:45.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.602 EAL: Requested device 0000:3d:01.5 cannot be used 00:16:45.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.602 EAL: Requested device 0000:3d:01.6 cannot be used 00:16:45.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.602 EAL: Requested device 0000:3d:01.7 cannot be used 00:16:45.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.602 EAL: Requested device 0000:3d:02.0 cannot be used 00:16:45.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.602 EAL: Requested device 0000:3d:02.1 cannot be used 00:16:45.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.602 EAL: Requested device 0000:3d:02.2 cannot be used 00:16:45.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.602 EAL: Requested device 0000:3d:02.3 cannot be used 
00:16:45.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.602 EAL: Requested device 0000:3d:02.4 cannot be used 00:16:45.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.602 EAL: Requested device 0000:3d:02.5 cannot be used 00:16:45.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.602 EAL: Requested device 0000:3d:02.6 cannot be used 00:16:45.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.602 EAL: Requested device 0000:3d:02.7 cannot be used 00:16:45.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.602 EAL: Requested device 0000:3f:01.0 cannot be used 00:16:45.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.602 EAL: Requested device 0000:3f:01.1 cannot be used 00:16:45.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.602 EAL: Requested device 0000:3f:01.2 cannot be used 00:16:45.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.602 EAL: Requested device 0000:3f:01.3 cannot be used 00:16:45.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.602 EAL: Requested device 0000:3f:01.4 cannot be used 00:16:45.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.602 EAL: Requested device 0000:3f:01.5 cannot be used 00:16:45.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.602 EAL: Requested device 0000:3f:01.6 cannot be used 00:16:45.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.602 EAL: Requested device 0000:3f:01.7 cannot be used 00:16:45.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.602 EAL: Requested device 0000:3f:02.0 cannot be used 00:16:45.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.602 EAL: Requested device 0000:3f:02.1 cannot be used 00:16:45.602 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.602 EAL: Requested device 0000:3f:02.2 cannot be used 00:16:45.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.602 EAL: Requested device 0000:3f:02.3 cannot be used 00:16:45.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.602 EAL: Requested device 0000:3f:02.4 cannot be used 00:16:45.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.602 EAL: Requested device 0000:3f:02.5 cannot be used 00:16:45.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.602 EAL: Requested device 0000:3f:02.6 cannot be used 00:16:45.602 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:45.602 EAL: Requested device 0000:3f:02.7 cannot be used 00:16:45.602 [2024-07-25 10:58:52.542615] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.861 [2024-07-25 10:58:52.816180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.120 [2024-07-25 10:58:53.114183] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:46.120 [2024-07-25 10:58:53.114220] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:46.400 10:58:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:46.400 10:58:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:16:46.400 10:58:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:46.400 10:58:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:46.671 BaseBdev1_malloc 00:16:46.671 10:58:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:46.671 true 00:16:46.671 10:58:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:46.930 [2024-07-25 10:58:53.994779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:46.930 [2024-07-25 10:58:53.994839] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:46.930 [2024-07-25 10:58:53.994865] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:16:46.930 [2024-07-25 10:58:53.994891] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:46.930 [2024-07-25 10:58:53.997659] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:46.930 [2024-07-25 10:58:53.997697] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:46.930 BaseBdev1 00:16:46.930 10:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:46.930 10:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:47.188 BaseBdev2_malloc 00:16:47.188 10:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:47.447 true 00:16:47.447 10:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:47.706 [2024-07-25 10:58:54.731799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:16:47.706 [2024-07-25 10:58:54.731860] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.706 [2024-07-25 10:58:54.731885] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:16:47.706 [2024-07-25 10:58:54.731906] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.706 [2024-07-25 10:58:54.734661] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.706 [2024-07-25 10:58:54.734699] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:47.706 BaseBdev2 00:16:47.706 10:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:16:47.965 [2024-07-25 10:58:54.944431] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:47.965 [2024-07-25 10:58:54.946778] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:47.965 [2024-07-25 10:58:54.947015] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:47.965 [2024-07-25 10:58:54.947037] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:47.965 [2024-07-25 10:58:54.947399] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:16:47.965 [2024-07-25 10:58:54.947665] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:47.965 [2024-07-25 10:58:54.947683] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:47.965 [2024-07-25 10:58:54.947924] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.965 10:58:54 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:47.965 10:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:47.965 10:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:47.965 10:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:47.965 10:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:47.965 10:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:47.965 10:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:47.965 10:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:47.965 10:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:47.965 10:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:47.965 10:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.965 10:58:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.224 10:58:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:48.224 "name": "raid_bdev1", 00:16:48.224 "uuid": "6a8f67ae-e59a-4f41-9fe8-d27637e1c38e", 00:16:48.224 "strip_size_kb": 0, 00:16:48.224 "state": "online", 00:16:48.224 "raid_level": "raid1", 00:16:48.224 "superblock": true, 00:16:48.224 "num_base_bdevs": 2, 00:16:48.224 "num_base_bdevs_discovered": 2, 00:16:48.224 "num_base_bdevs_operational": 2, 00:16:48.224 "base_bdevs_list": [ 00:16:48.224 { 00:16:48.224 "name": "BaseBdev1", 00:16:48.224 "uuid": "cea840d5-f9d3-53f6-ae1f-a15b15bdbb9d", 00:16:48.224 "is_configured": 
true, 00:16:48.224 "data_offset": 2048, 00:16:48.224 "data_size": 63488 00:16:48.224 }, 00:16:48.224 { 00:16:48.224 "name": "BaseBdev2", 00:16:48.224 "uuid": "86b0b738-40f1-5225-a04e-4b8b9793c118", 00:16:48.224 "is_configured": true, 00:16:48.224 "data_offset": 2048, 00:16:48.224 "data_size": 63488 00:16:48.224 } 00:16:48.224 ] 00:16:48.224 }' 00:16:48.224 10:58:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:48.224 10:58:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.792 10:58:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:16:48.792 10:58:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:48.792 [2024-07-25 10:58:55.860628] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:16:49.729 10:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:49.988 10:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:16:49.988 10:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:49.988 10:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ read = \w\r\i\t\e ]] 00:16:49.988 10:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=2 00:16:49.988 10:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:49.988 10:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:49.988 10:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:16:49.988 10:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:49.988 10:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:49.988 10:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:49.988 10:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:49.988 10:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:49.988 10:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:49.988 10:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:49.988 10:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.988 10:58:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.247 10:58:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:50.247 "name": "raid_bdev1", 00:16:50.247 "uuid": "6a8f67ae-e59a-4f41-9fe8-d27637e1c38e", 00:16:50.247 "strip_size_kb": 0, 00:16:50.247 "state": "online", 00:16:50.247 "raid_level": "raid1", 00:16:50.247 "superblock": true, 00:16:50.247 "num_base_bdevs": 2, 00:16:50.247 "num_base_bdevs_discovered": 2, 00:16:50.247 "num_base_bdevs_operational": 2, 00:16:50.247 "base_bdevs_list": [ 00:16:50.247 { 00:16:50.247 "name": "BaseBdev1", 00:16:50.247 "uuid": "cea840d5-f9d3-53f6-ae1f-a15b15bdbb9d", 00:16:50.247 "is_configured": true, 00:16:50.247 "data_offset": 2048, 00:16:50.247 "data_size": 63488 00:16:50.247 }, 00:16:50.247 { 00:16:50.247 "name": "BaseBdev2", 00:16:50.247 "uuid": "86b0b738-40f1-5225-a04e-4b8b9793c118", 00:16:50.247 "is_configured": true, 00:16:50.247 "data_offset": 2048, 00:16:50.247 "data_size": 63488 
00:16:50.247 } 00:16:50.247 ] 00:16:50.247 }' 00:16:50.247 10:58:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:50.247 10:58:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.815 10:58:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:51.075 [2024-07-25 10:58:58.014563] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:51.075 [2024-07-25 10:58:58.014608] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:51.075 [2024-07-25 10:58:58.017830] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:51.075 [2024-07-25 10:58:58.017885] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:51.075 [2024-07-25 10:58:58.017983] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:51.075 [2024-07-25 10:58:58.018004] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:51.075 0 00:16:51.075 10:58:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 3577356 00:16:51.075 10:58:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 3577356 ']' 00:16:51.075 10:58:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 3577356 00:16:51.075 10:58:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:16:51.075 10:58:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:51.075 10:58:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3577356 00:16:51.075 10:58:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:16:51.075 10:58:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:51.075 10:58:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3577356' 00:16:51.075 killing process with pid 3577356 00:16:51.075 10:58:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 3577356 00:16:51.075 [2024-07-25 10:58:58.088287] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:51.075 10:58:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 3577356 00:16:51.075 [2024-07-25 10:58:58.188334] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:52.980 10:58:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.KQYcG6qwRw 00:16:52.980 10:58:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:16:52.980 10:58:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:16:52.980 10:58:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:16:52.980 10:58:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:16:52.980 10:58:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:52.980 10:58:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:52.980 10:58:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:52.980 00:16:52.980 real 0m7.760s 00:16:52.980 user 0m10.829s 00:16:52.980 sys 0m1.235s 00:16:52.980 10:58:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:52.980 10:58:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.980 ************************************ 00:16:52.980 END TEST raid_read_error_test 00:16:52.980 ************************************ 
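The test that just finished verifies RAID state by piping `bdev_raid_get_bdevs all` RPC output through `jq -r '.[] | select(.name == "raid_bdev1")'` and checking the fields of the selected object. A minimal stand-alone sketch of that selection step, using Python in place of the jq pipeline; the JSON below is a trimmed stand-in built from fields visible in the log above, not live RPC output:

```python
import json

# Trimmed stand-in for the `bdev_raid_get_bdevs all` RPC response seen in the
# log (the real response is a JSON array of bdev objects).  The test's jq
# filter `.[] | select(.name == "raid_bdev1")` picks out the raid bdev; the
# generator expression below does the same selection.
rpc_output = json.dumps([
    {
        "name": "raid_bdev1",
        "state": "online",
        "raid_level": "raid1",
        "num_base_bdevs": 2,
        "num_base_bdevs_discovered": 2,
        "num_base_bdevs_operational": 2,
    }
])

raid = next(b for b in json.loads(rpc_output) if b["name"] == "raid_bdev1")

# verify_raid_bdev_state compares these fields against the expected values
# passed by the caller (e.g. `online raid1 0 2` in the log above).
print(raid["state"], raid["num_base_bdevs_discovered"])  # online 2
```

This mirrors why the write-error test later expects `num_base_bdevs_discovered` to drop from 2 to 1: after `bdev_error_inject_error` fails a write on BaseBdev1, the raid1 bdev stays `online` but reports one fewer discovered base bdev.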
00:16:52.980 10:59:00 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:16:52.980 10:59:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:52.980 10:59:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:52.980 10:59:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:52.980 ************************************ 00:16:52.980 START TEST raid_write_error_test 00:16:52.980 ************************************ 00:16:52.980 10:59:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:16:52.980 10:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:16:52.980 10:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=2 00:16:52.980 10:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:16:52.980 10:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:16:52.980 10:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:52.980 10:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:16:52.980 10:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:52.980 10:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:52.980 10:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:16:52.980 10:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:16:52.980 10:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:16:52.980 10:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:52.980 10:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local 
base_bdevs 00:16:52.980 10:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:16:52.980 10:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:16:52.980 10:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:16:52.980 10:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:16:52.980 10:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:16:52.980 10:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:16:52.980 10:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:16:52.980 10:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:16:52.980 10:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.9SI5qll4e9 00:16:52.980 10:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=3578771 00:16:52.980 10:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 3578771 /var/tmp/spdk-raid.sock 00:16:52.980 10:59:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:52.980 10:59:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 3578771 ']' 00:16:52.980 10:59:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:52.980 10:59:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:52.980 10:59:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:16:52.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:52.980 10:59:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:52.980 10:59:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.240 [2024-07-25 10:59:00.165038] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:53.240 [2024-07-25 10:59:00.165175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3578771 ] 00:16:53.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.240 EAL: Requested device 0000:3d:01.0 cannot be used 00:16:53.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.240 EAL: Requested device 0000:3d:01.1 cannot be used 00:16:53.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.240 EAL: Requested device 0000:3d:01.2 cannot be used 00:16:53.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.240 EAL: Requested device 0000:3d:01.3 cannot be used 00:16:53.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.240 EAL: Requested device 0000:3d:01.4 cannot be used 00:16:53.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.240 EAL: Requested device 0000:3d:01.5 cannot be used 00:16:53.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.240 EAL: Requested device 0000:3d:01.6 cannot be used 00:16:53.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.240 EAL: Requested device 0000:3d:01.7 cannot be used 00:16:53.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.240 EAL: Requested device 0000:3d:02.0 cannot be used 
00:16:53.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.240 EAL: Requested device 0000:3d:02.1 cannot be used 00:16:53.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.240 EAL: Requested device 0000:3d:02.2 cannot be used 00:16:53.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.240 EAL: Requested device 0000:3d:02.3 cannot be used 00:16:53.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.240 EAL: Requested device 0000:3d:02.4 cannot be used 00:16:53.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.240 EAL: Requested device 0000:3d:02.5 cannot be used 00:16:53.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.240 EAL: Requested device 0000:3d:02.6 cannot be used 00:16:53.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.240 EAL: Requested device 0000:3d:02.7 cannot be used 00:16:53.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.240 EAL: Requested device 0000:3f:01.0 cannot be used 00:16:53.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.240 EAL: Requested device 0000:3f:01.1 cannot be used 00:16:53.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.240 EAL: Requested device 0000:3f:01.2 cannot be used 00:16:53.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.240 EAL: Requested device 0000:3f:01.3 cannot be used 00:16:53.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.240 EAL: Requested device 0000:3f:01.4 cannot be used 00:16:53.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.240 EAL: Requested device 0000:3f:01.5 cannot be used 00:16:53.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.240 EAL: Requested device 0000:3f:01.6 cannot be used 00:16:53.240 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.240 EAL: Requested device 0000:3f:01.7 cannot be used 00:16:53.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.240 EAL: Requested device 0000:3f:02.0 cannot be used 00:16:53.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.240 EAL: Requested device 0000:3f:02.1 cannot be used 00:16:53.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.240 EAL: Requested device 0000:3f:02.2 cannot be used 00:16:53.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.240 EAL: Requested device 0000:3f:02.3 cannot be used 00:16:53.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.240 EAL: Requested device 0000:3f:02.4 cannot be used 00:16:53.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.240 EAL: Requested device 0000:3f:02.5 cannot be used 00:16:53.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.240 EAL: Requested device 0000:3f:02.6 cannot be used 00:16:53.240 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:16:53.240 EAL: Requested device 0000:3f:02.7 cannot be used 00:16:53.499 [2024-07-25 10:59:00.391221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.758 [2024-07-25 10:59:00.657385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.017 [2024-07-25 10:59:01.018498] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:54.017 [2024-07-25 10:59:01.018535] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:54.276 10:59:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:54.276 10:59:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:16:54.276 10:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in 
"${base_bdevs[@]}" 00:16:54.276 10:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:54.535 BaseBdev1_malloc 00:16:54.535 10:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:54.793 true 00:16:54.793 10:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:55.052 [2024-07-25 10:59:01.925324] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:55.052 [2024-07-25 10:59:01.925385] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:55.052 [2024-07-25 10:59:01.925412] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:16:55.052 [2024-07-25 10:59:01.925434] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.052 [2024-07-25 10:59:01.928250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.052 [2024-07-25 10:59:01.928288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:55.052 BaseBdev1 00:16:55.052 10:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:16:55.052 10:59:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:55.311 BaseBdev2_malloc 00:16:55.311 10:59:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:55.311 true 00:16:55.570 10:59:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:55.570 [2024-07-25 10:59:02.647384] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:55.570 [2024-07-25 10:59:02.647443] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:55.570 [2024-07-25 10:59:02.647468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:16:55.570 [2024-07-25 10:59:02.647489] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.570 [2024-07-25 10:59:02.650259] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.570 [2024-07-25 10:59:02.650296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:55.570 BaseBdev2 00:16:55.570 10:59:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:16:55.829 [2024-07-25 10:59:02.872048] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:55.829 [2024-07-25 10:59:02.874423] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:55.829 [2024-07-25 10:59:02.874659] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:55.829 [2024-07-25 10:59:02.874681] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:55.829 [2024-07-25 10:59:02.875021] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000010570 00:16:55.829 [2024-07-25 10:59:02.875287] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:55.829 [2024-07-25 10:59:02.875305] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:55.829 [2024-07-25 10:59:02.875530] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:55.829 10:59:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:55.829 10:59:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:55.829 10:59:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:55.829 10:59:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:55.829 10:59:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:55.829 10:59:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:55.829 10:59:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:55.829 10:59:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:55.829 10:59:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:55.829 10:59:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:55.829 10:59:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.829 10:59:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.088 10:59:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:56.088 "name": "raid_bdev1", 
00:16:56.088 "uuid": "d743d9c9-5954-4d6b-aeb6-88bd066723fa", 00:16:56.088 "strip_size_kb": 0, 00:16:56.088 "state": "online", 00:16:56.088 "raid_level": "raid1", 00:16:56.088 "superblock": true, 00:16:56.088 "num_base_bdevs": 2, 00:16:56.088 "num_base_bdevs_discovered": 2, 00:16:56.088 "num_base_bdevs_operational": 2, 00:16:56.088 "base_bdevs_list": [ 00:16:56.088 { 00:16:56.088 "name": "BaseBdev1", 00:16:56.088 "uuid": "2d83bec9-fe63-553a-ba6a-94cf5dda90d7", 00:16:56.088 "is_configured": true, 00:16:56.088 "data_offset": 2048, 00:16:56.088 "data_size": 63488 00:16:56.088 }, 00:16:56.088 { 00:16:56.088 "name": "BaseBdev2", 00:16:56.088 "uuid": "0b1b43dc-408d-5192-a28a-b45327206a59", 00:16:56.088 "is_configured": true, 00:16:56.088 "data_offset": 2048, 00:16:56.088 "data_size": 63488 00:16:56.088 } 00:16:56.088 ] 00:16:56.088 }' 00:16:56.088 10:59:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:56.088 10:59:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.656 10:59:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:16:56.656 10:59:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:56.915 [2024-07-25 10:59:03.804287] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:16:57.852 10:59:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:57.852 [2024-07-25 10:59:04.915209] bdev_raid.c:2263:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:16:57.852 [2024-07-25 10:59:04.915278] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:57.852 [2024-07-25 
10:59:04.915482] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000010710 00:16:57.852 10:59:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:16:57.852 10:59:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:57.852 10:59:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ write = \w\r\i\t\e ]] 00:16:57.852 10:59:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # expected_num_base_bdevs=1 00:16:57.852 10:59:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:57.852 10:59:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:57.852 10:59:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:57.852 10:59:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:57.852 10:59:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:57.852 10:59:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:57.852 10:59:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:57.852 10:59:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:57.852 10:59:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:57.852 10:59:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:57.852 10:59:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.852 10:59:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:16:58.112 10:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:58.112 "name": "raid_bdev1", 00:16:58.112 "uuid": "d743d9c9-5954-4d6b-aeb6-88bd066723fa", 00:16:58.112 "strip_size_kb": 0, 00:16:58.112 "state": "online", 00:16:58.112 "raid_level": "raid1", 00:16:58.112 "superblock": true, 00:16:58.112 "num_base_bdevs": 2, 00:16:58.112 "num_base_bdevs_discovered": 1, 00:16:58.112 "num_base_bdevs_operational": 1, 00:16:58.112 "base_bdevs_list": [ 00:16:58.112 { 00:16:58.112 "name": null, 00:16:58.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.112 "is_configured": false, 00:16:58.112 "data_offset": 2048, 00:16:58.112 "data_size": 63488 00:16:58.112 }, 00:16:58.112 { 00:16:58.112 "name": "BaseBdev2", 00:16:58.112 "uuid": "0b1b43dc-408d-5192-a28a-b45327206a59", 00:16:58.112 "is_configured": true, 00:16:58.112 "data_offset": 2048, 00:16:58.112 "data_size": 63488 00:16:58.112 } 00:16:58.112 ] 00:16:58.112 }' 00:16:58.112 10:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:58.112 10:59:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.681 10:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:58.940 [2024-07-25 10:59:05.959193] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:58.940 [2024-07-25 10:59:05.959231] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:58.940 [2024-07-25 10:59:05.962425] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:58.940 [2024-07-25 10:59:05.962465] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.940 [2024-07-25 10:59:05.962535] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:16:58.940 [2024-07-25 10:59:05.962551] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:58.940 0 00:16:58.940 10:59:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 3578771 00:16:58.940 10:59:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 3578771 ']' 00:16:58.940 10:59:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 3578771 00:16:58.940 10:59:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:16:58.940 10:59:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:58.940 10:59:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3578771 00:16:58.940 10:59:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:58.940 10:59:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:58.940 10:59:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3578771' 00:16:58.940 killing process with pid 3578771 00:16:58.940 10:59:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 3578771 00:16:58.940 [2024-07-25 10:59:06.032827] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:58.940 10:59:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 3578771 00:16:59.199 [2024-07-25 10:59:06.139025] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:01.169 10:59:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.9SI5qll4e9 00:17:01.169 10:59:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:17:01.169 10:59:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 
00:17:01.169 10:59:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:17:01.169 10:59:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:17:01.169 10:59:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:01.169 10:59:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:17:01.169 10:59:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:17:01.169 00:17:01.169 real 0m7.869s 00:17:01.169 user 0m10.950s 00:17:01.169 sys 0m1.224s 00:17:01.169 10:59:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:01.169 10:59:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.169 ************************************ 00:17:01.169 END TEST raid_write_error_test 00:17:01.169 ************************************ 00:17:01.169 10:59:07 bdev_raid -- bdev/bdev_raid.sh@945 -- # for n in {2..4} 00:17:01.169 10:59:07 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:17:01.169 10:59:07 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:17:01.169 10:59:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:01.169 10:59:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:01.169 10:59:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:01.169 ************************************ 00:17:01.169 START TEST raid_state_function_test 00:17:01.169 ************************************ 00:17:01.169 10:59:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:17:01.169 10:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:17:01.169 10:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local 
num_base_bdevs=3 00:17:01.169 10:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:17:01.169 10:59:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:01.169 10:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:01.169 10:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:01.169 10:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:17:01.169 10:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:01.169 10:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:01.169 10:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:17:01.169 10:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:01.169 10:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:01.169 10:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:17:01.169 10:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:01.169 10:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:01.169 10:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:01.169 10:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:01.169 10:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:01.169 10:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:01.169 10:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:01.169 10:59:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:01.169 10:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:17:01.169 10:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:17:01.169 10:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:17:01.169 10:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:17:01.169 10:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:17:01.169 10:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=3580186 00:17:01.169 10:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3580186' 00:17:01.169 Process raid pid: 3580186 00:17:01.169 10:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:01.169 10:59:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 3580186 /var/tmp/spdk-raid.sock 00:17:01.169 10:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 3580186 ']' 00:17:01.169 10:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:01.169 10:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:01.169 10:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:01.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:17:01.169 10:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:01.169 10:59:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.169 [2024-07-25 10:59:08.109639] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:01.169 [2024-07-25 10:59:08.109753] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:01.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:01.169 EAL: Requested device 0000:3d:01.0 cannot be used 00:17:01.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:01.169 EAL: Requested device 0000:3d:01.1 cannot be used 00:17:01.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:01.169 EAL: Requested device 0000:3d:01.2 cannot be used 00:17:01.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:01.169 EAL: Requested device 0000:3d:01.3 cannot be used 00:17:01.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:01.169 EAL: Requested device 0000:3d:01.4 cannot be used 00:17:01.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:01.169 EAL: Requested device 0000:3d:01.5 cannot be used 00:17:01.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:01.169 EAL: Requested device 0000:3d:01.6 cannot be used 00:17:01.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:01.169 EAL: Requested device 0000:3d:01.7 cannot be used 00:17:01.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:01.169 EAL: Requested device 0000:3d:02.0 cannot be used 00:17:01.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:01.169 EAL: Requested device 
0000:3d:02.1 cannot be used 00:17:01.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:01.169 EAL: Requested device 0000:3d:02.2 cannot be used 00:17:01.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:01.169 EAL: Requested device 0000:3d:02.3 cannot be used 00:17:01.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:01.169 EAL: Requested device 0000:3d:02.4 cannot be used 00:17:01.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:01.169 EAL: Requested device 0000:3d:02.5 cannot be used 00:17:01.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:01.169 EAL: Requested device 0000:3d:02.6 cannot be used 00:17:01.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:01.169 EAL: Requested device 0000:3d:02.7 cannot be used 00:17:01.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:01.169 EAL: Requested device 0000:3f:01.0 cannot be used 00:17:01.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:01.169 EAL: Requested device 0000:3f:01.1 cannot be used 00:17:01.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:01.169 EAL: Requested device 0000:3f:01.2 cannot be used 00:17:01.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:01.169 EAL: Requested device 0000:3f:01.3 cannot be used 00:17:01.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:01.169 EAL: Requested device 0000:3f:01.4 cannot be used 00:17:01.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:01.169 EAL: Requested device 0000:3f:01.5 cannot be used 00:17:01.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:01.169 EAL: Requested device 0000:3f:01.6 cannot be used 00:17:01.170 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:01.170 EAL: Requested device 0000:3f:01.7 cannot be 
used 00:17:01.170 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:01.170 EAL: Requested device 0000:3f:02.0 cannot be used 00:17:01.170 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:01.170 EAL: Requested device 0000:3f:02.1 cannot be used 00:17:01.170 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:01.170 EAL: Requested device 0000:3f:02.2 cannot be used 00:17:01.170 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:01.170 EAL: Requested device 0000:3f:02.3 cannot be used 00:17:01.170 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:01.170 EAL: Requested device 0000:3f:02.4 cannot be used 00:17:01.170 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:01.170 EAL: Requested device 0000:3f:02.5 cannot be used 00:17:01.170 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:01.170 EAL: Requested device 0000:3f:02.6 cannot be used 00:17:01.170 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:01.170 EAL: Requested device 0000:3f:02.7 cannot be used 00:17:01.428 [2024-07-25 10:59:08.337653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.686 [2024-07-25 10:59:08.604284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.943 [2024-07-25 10:59:08.974394] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:01.943 [2024-07-25 10:59:08.974435] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:02.201 10:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:02.201 10:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:17:02.201 10:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 
'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:02.459 [2024-07-25 10:59:09.374805] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:02.459 [2024-07-25 10:59:09.374866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:02.459 [2024-07-25 10:59:09.374881] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:02.460 [2024-07-25 10:59:09.374898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:02.460 [2024-07-25 10:59:09.374909] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:02.460 [2024-07-25 10:59:09.374924] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:02.460 10:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:02.460 10:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:02.460 10:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:02.460 10:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:02.460 10:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:02.460 10:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:02.460 10:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:02.460 10:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:02.460 10:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:02.460 10:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 
00:17:02.460 10:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:02.460 10:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.718 10:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:02.718 "name": "Existed_Raid", 00:17:02.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.718 "strip_size_kb": 64, 00:17:02.718 "state": "configuring", 00:17:02.718 "raid_level": "raid0", 00:17:02.718 "superblock": false, 00:17:02.718 "num_base_bdevs": 3, 00:17:02.718 "num_base_bdevs_discovered": 0, 00:17:02.718 "num_base_bdevs_operational": 3, 00:17:02.718 "base_bdevs_list": [ 00:17:02.718 { 00:17:02.718 "name": "BaseBdev1", 00:17:02.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.718 "is_configured": false, 00:17:02.718 "data_offset": 0, 00:17:02.718 "data_size": 0 00:17:02.718 }, 00:17:02.718 { 00:17:02.718 "name": "BaseBdev2", 00:17:02.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.718 "is_configured": false, 00:17:02.718 "data_offset": 0, 00:17:02.718 "data_size": 0 00:17:02.718 }, 00:17:02.718 { 00:17:02.718 "name": "BaseBdev3", 00:17:02.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.718 "is_configured": false, 00:17:02.718 "data_offset": 0, 00:17:02.718 "data_size": 0 00:17:02.718 } 00:17:02.718 ] 00:17:02.718 }' 00:17:02.718 10:59:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:02.718 10:59:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.285 10:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:03.285 [2024-07-25 10:59:10.401433] 
bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:03.285 [2024-07-25 10:59:10.401483] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:03.543 10:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:03.543 [2024-07-25 10:59:10.630122] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:03.543 [2024-07-25 10:59:10.630181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:03.543 [2024-07-25 10:59:10.630195] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:03.543 [2024-07-25 10:59:10.630215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:03.543 [2024-07-25 10:59:10.630226] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:03.543 [2024-07-25 10:59:10.630241] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:03.543 10:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:03.801 [2024-07-25 10:59:10.912993] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:03.801 BaseBdev1 00:17:04.059 10:59:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:04.059 10:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:04.059 10:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:17:04.059 10:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:04.059 10:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:04.059 10:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:04.059 10:59:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:04.059 10:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:04.318 [ 00:17:04.318 { 00:17:04.318 "name": "BaseBdev1", 00:17:04.318 "aliases": [ 00:17:04.318 "30805a64-c687-4c91-8de5-70259b13a259" 00:17:04.318 ], 00:17:04.318 "product_name": "Malloc disk", 00:17:04.318 "block_size": 512, 00:17:04.318 "num_blocks": 65536, 00:17:04.318 "uuid": "30805a64-c687-4c91-8de5-70259b13a259", 00:17:04.318 "assigned_rate_limits": { 00:17:04.318 "rw_ios_per_sec": 0, 00:17:04.318 "rw_mbytes_per_sec": 0, 00:17:04.318 "r_mbytes_per_sec": 0, 00:17:04.318 "w_mbytes_per_sec": 0 00:17:04.318 }, 00:17:04.318 "claimed": true, 00:17:04.318 "claim_type": "exclusive_write", 00:17:04.318 "zoned": false, 00:17:04.318 "supported_io_types": { 00:17:04.318 "read": true, 00:17:04.318 "write": true, 00:17:04.318 "unmap": true, 00:17:04.318 "flush": true, 00:17:04.318 "reset": true, 00:17:04.318 "nvme_admin": false, 00:17:04.318 "nvme_io": false, 00:17:04.318 "nvme_io_md": false, 00:17:04.318 "write_zeroes": true, 00:17:04.318 "zcopy": true, 00:17:04.318 "get_zone_info": false, 00:17:04.318 "zone_management": false, 00:17:04.318 "zone_append": false, 00:17:04.318 "compare": false, 00:17:04.318 "compare_and_write": false, 00:17:04.318 "abort": true, 00:17:04.318 "seek_hole": false, 00:17:04.318 "seek_data": false, 
00:17:04.318 "copy": true, 00:17:04.318 "nvme_iov_md": false 00:17:04.318 }, 00:17:04.318 "memory_domains": [ 00:17:04.318 { 00:17:04.318 "dma_device_id": "system", 00:17:04.318 "dma_device_type": 1 00:17:04.318 }, 00:17:04.318 { 00:17:04.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.318 "dma_device_type": 2 00:17:04.318 } 00:17:04.318 ], 00:17:04.318 "driver_specific": {} 00:17:04.318 } 00:17:04.318 ] 00:17:04.318 10:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:04.318 10:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:04.318 10:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:04.318 10:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:04.318 10:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:04.318 10:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:04.318 10:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:04.318 10:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:04.318 10:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:04.318 10:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:04.318 10:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:04.318 10:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:04.318 10:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:17:04.576 10:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:04.576 "name": "Existed_Raid", 00:17:04.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.576 "strip_size_kb": 64, 00:17:04.576 "state": "configuring", 00:17:04.576 "raid_level": "raid0", 00:17:04.576 "superblock": false, 00:17:04.576 "num_base_bdevs": 3, 00:17:04.576 "num_base_bdevs_discovered": 1, 00:17:04.576 "num_base_bdevs_operational": 3, 00:17:04.576 "base_bdevs_list": [ 00:17:04.576 { 00:17:04.576 "name": "BaseBdev1", 00:17:04.576 "uuid": "30805a64-c687-4c91-8de5-70259b13a259", 00:17:04.576 "is_configured": true, 00:17:04.576 "data_offset": 0, 00:17:04.576 "data_size": 65536 00:17:04.576 }, 00:17:04.576 { 00:17:04.576 "name": "BaseBdev2", 00:17:04.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.576 "is_configured": false, 00:17:04.576 "data_offset": 0, 00:17:04.576 "data_size": 0 00:17:04.576 }, 00:17:04.576 { 00:17:04.576 "name": "BaseBdev3", 00:17:04.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.576 "is_configured": false, 00:17:04.576 "data_offset": 0, 00:17:04.576 "data_size": 0 00:17:04.576 } 00:17:04.576 ] 00:17:04.576 }' 00:17:04.576 10:59:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:04.576 10:59:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.141 10:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:05.398 [2024-07-25 10:59:12.389010] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:05.398 [2024-07-25 10:59:12.389072] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:05.398 10:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:05.655 [2024-07-25 10:59:12.613706] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:05.655 [2024-07-25 10:59:12.616034] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:05.655 [2024-07-25 10:59:12.616082] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:05.655 [2024-07-25 10:59:12.616097] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:05.655 [2024-07-25 10:59:12.616117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:05.655 10:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:05.655 10:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:05.655 10:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:05.655 10:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:05.655 10:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:05.655 10:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:05.655 10:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:05.655 10:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:05.656 10:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:05.656 10:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:05.656 10:59:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:05.656 10:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:05.656 10:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.656 10:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.913 10:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:05.913 "name": "Existed_Raid", 00:17:05.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.913 "strip_size_kb": 64, 00:17:05.913 "state": "configuring", 00:17:05.913 "raid_level": "raid0", 00:17:05.913 "superblock": false, 00:17:05.913 "num_base_bdevs": 3, 00:17:05.913 "num_base_bdevs_discovered": 1, 00:17:05.913 "num_base_bdevs_operational": 3, 00:17:05.913 "base_bdevs_list": [ 00:17:05.913 { 00:17:05.913 "name": "BaseBdev1", 00:17:05.913 "uuid": "30805a64-c687-4c91-8de5-70259b13a259", 00:17:05.913 "is_configured": true, 00:17:05.913 "data_offset": 0, 00:17:05.913 "data_size": 65536 00:17:05.913 }, 00:17:05.913 { 00:17:05.913 "name": "BaseBdev2", 00:17:05.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.913 "is_configured": false, 00:17:05.913 "data_offset": 0, 00:17:05.913 "data_size": 0 00:17:05.913 }, 00:17:05.913 { 00:17:05.913 "name": "BaseBdev3", 00:17:05.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.913 "is_configured": false, 00:17:05.913 "data_offset": 0, 00:17:05.913 "data_size": 0 00:17:05.913 } 00:17:05.913 ] 00:17:05.913 }' 00:17:05.913 10:59:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:05.913 10:59:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.476 10:59:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:06.733 [2024-07-25 10:59:13.698762] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:06.733 BaseBdev2 00:17:06.733 10:59:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:06.733 10:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:06.733 10:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:06.733 10:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:06.733 10:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:06.733 10:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:06.733 10:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:06.991 10:59:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:07.249 [ 00:17:07.249 { 00:17:07.249 "name": "BaseBdev2", 00:17:07.249 "aliases": [ 00:17:07.249 "1a9a1df1-7fd4-48ae-b1dc-fa57296d3f14" 00:17:07.249 ], 00:17:07.249 "product_name": "Malloc disk", 00:17:07.249 "block_size": 512, 00:17:07.249 "num_blocks": 65536, 00:17:07.249 "uuid": "1a9a1df1-7fd4-48ae-b1dc-fa57296d3f14", 00:17:07.249 "assigned_rate_limits": { 00:17:07.249 "rw_ios_per_sec": 0, 00:17:07.249 "rw_mbytes_per_sec": 0, 00:17:07.249 "r_mbytes_per_sec": 0, 00:17:07.249 "w_mbytes_per_sec": 0 00:17:07.249 }, 00:17:07.249 "claimed": true, 00:17:07.249 "claim_type": 
"exclusive_write", 00:17:07.249 "zoned": false, 00:17:07.249 "supported_io_types": { 00:17:07.249 "read": true, 00:17:07.249 "write": true, 00:17:07.249 "unmap": true, 00:17:07.249 "flush": true, 00:17:07.249 "reset": true, 00:17:07.249 "nvme_admin": false, 00:17:07.249 "nvme_io": false, 00:17:07.249 "nvme_io_md": false, 00:17:07.249 "write_zeroes": true, 00:17:07.249 "zcopy": true, 00:17:07.249 "get_zone_info": false, 00:17:07.249 "zone_management": false, 00:17:07.249 "zone_append": false, 00:17:07.249 "compare": false, 00:17:07.249 "compare_and_write": false, 00:17:07.249 "abort": true, 00:17:07.249 "seek_hole": false, 00:17:07.249 "seek_data": false, 00:17:07.249 "copy": true, 00:17:07.249 "nvme_iov_md": false 00:17:07.249 }, 00:17:07.249 "memory_domains": [ 00:17:07.249 { 00:17:07.249 "dma_device_id": "system", 00:17:07.249 "dma_device_type": 1 00:17:07.249 }, 00:17:07.249 { 00:17:07.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.249 "dma_device_type": 2 00:17:07.249 } 00:17:07.249 ], 00:17:07.249 "driver_specific": {} 00:17:07.249 } 00:17:07.249 ] 00:17:07.249 10:59:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:07.249 10:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:07.249 10:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:07.249 10:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:07.249 10:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:07.249 10:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:07.249 10:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:07.249 10:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
00:17:07.249 10:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:07.249 10:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:07.249 10:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:07.249 10:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:07.249 10:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:07.249 10:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.249 10:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.506 10:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:07.506 "name": "Existed_Raid", 00:17:07.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.506 "strip_size_kb": 64, 00:17:07.506 "state": "configuring", 00:17:07.506 "raid_level": "raid0", 00:17:07.506 "superblock": false, 00:17:07.506 "num_base_bdevs": 3, 00:17:07.506 "num_base_bdevs_discovered": 2, 00:17:07.506 "num_base_bdevs_operational": 3, 00:17:07.506 "base_bdevs_list": [ 00:17:07.506 { 00:17:07.506 "name": "BaseBdev1", 00:17:07.506 "uuid": "30805a64-c687-4c91-8de5-70259b13a259", 00:17:07.506 "is_configured": true, 00:17:07.506 "data_offset": 0, 00:17:07.506 "data_size": 65536 00:17:07.506 }, 00:17:07.506 { 00:17:07.506 "name": "BaseBdev2", 00:17:07.506 "uuid": "1a9a1df1-7fd4-48ae-b1dc-fa57296d3f14", 00:17:07.506 "is_configured": true, 00:17:07.506 "data_offset": 0, 00:17:07.506 "data_size": 65536 00:17:07.506 }, 00:17:07.506 { 00:17:07.506 "name": "BaseBdev3", 00:17:07.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.506 "is_configured": false, 
00:17:07.506 "data_offset": 0, 00:17:07.506 "data_size": 0 00:17:07.506 } 00:17:07.506 ] 00:17:07.506 }' 00:17:07.506 10:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:07.506 10:59:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:08.072 10:59:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:08.072 [2024-07-25 10:59:15.161750] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:08.072 [2024-07-25 10:59:15.161798] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:08.072 [2024-07-25 10:59:15.161817] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:08.072 [2024-07-25 10:59:15.162159] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:17:08.072 [2024-07-25 10:59:15.162415] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:08.072 [2024-07-25 10:59:15.162431] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:08.072 [2024-07-25 10:59:15.162764] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.072 BaseBdev3 00:17:08.072 10:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:17:08.072 10:59:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:08.072 10:59:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:08.072 10:59:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:08.072 10:59:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z 
'' ]] 00:17:08.072 10:59:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:08.072 10:59:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:08.330 10:59:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:08.589 [ 00:17:08.589 { 00:17:08.589 "name": "BaseBdev3", 00:17:08.589 "aliases": [ 00:17:08.589 "384862f2-c8c5-49ed-846b-262900110b73" 00:17:08.589 ], 00:17:08.589 "product_name": "Malloc disk", 00:17:08.589 "block_size": 512, 00:17:08.589 "num_blocks": 65536, 00:17:08.589 "uuid": "384862f2-c8c5-49ed-846b-262900110b73", 00:17:08.589 "assigned_rate_limits": { 00:17:08.589 "rw_ios_per_sec": 0, 00:17:08.589 "rw_mbytes_per_sec": 0, 00:17:08.589 "r_mbytes_per_sec": 0, 00:17:08.589 "w_mbytes_per_sec": 0 00:17:08.589 }, 00:17:08.589 "claimed": true, 00:17:08.589 "claim_type": "exclusive_write", 00:17:08.589 "zoned": false, 00:17:08.589 "supported_io_types": { 00:17:08.589 "read": true, 00:17:08.589 "write": true, 00:17:08.589 "unmap": true, 00:17:08.589 "flush": true, 00:17:08.589 "reset": true, 00:17:08.589 "nvme_admin": false, 00:17:08.589 "nvme_io": false, 00:17:08.589 "nvme_io_md": false, 00:17:08.589 "write_zeroes": true, 00:17:08.589 "zcopy": true, 00:17:08.589 "get_zone_info": false, 00:17:08.589 "zone_management": false, 00:17:08.589 "zone_append": false, 00:17:08.589 "compare": false, 00:17:08.589 "compare_and_write": false, 00:17:08.589 "abort": true, 00:17:08.589 "seek_hole": false, 00:17:08.589 "seek_data": false, 00:17:08.589 "copy": true, 00:17:08.589 "nvme_iov_md": false 00:17:08.589 }, 00:17:08.589 "memory_domains": [ 00:17:08.589 { 00:17:08.589 "dma_device_id": "system", 00:17:08.589 "dma_device_type": 1 
00:17:08.589 }, 00:17:08.589 { 00:17:08.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:08.589 "dma_device_type": 2 00:17:08.589 } 00:17:08.589 ], 00:17:08.589 "driver_specific": {} 00:17:08.589 } 00:17:08.589 ] 00:17:08.589 10:59:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:08.589 10:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:08.589 10:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:08.589 10:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:17:08.589 10:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:08.589 10:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:08.589 10:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:08.589 10:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:08.589 10:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:08.589 10:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:08.589 10:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:08.589 10:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:08.589 10:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:08.589 10:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.589 10:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:17:08.589 10:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:08.589 "name": "Existed_Raid", 00:17:08.589 "uuid": "e32aa019-117a-4eb3-b88c-f1d70924e048", 00:17:08.589 "strip_size_kb": 64, 00:17:08.589 "state": "online", 00:17:08.589 "raid_level": "raid0", 00:17:08.589 "superblock": false, 00:17:08.589 "num_base_bdevs": 3, 00:17:08.589 "num_base_bdevs_discovered": 3, 00:17:08.589 "num_base_bdevs_operational": 3, 00:17:08.589 "base_bdevs_list": [ 00:17:08.589 { 00:17:08.589 "name": "BaseBdev1", 00:17:08.589 "uuid": "30805a64-c687-4c91-8de5-70259b13a259", 00:17:08.589 "is_configured": true, 00:17:08.589 "data_offset": 0, 00:17:08.589 "data_size": 65536 00:17:08.589 }, 00:17:08.589 { 00:17:08.589 "name": "BaseBdev2", 00:17:08.589 "uuid": "1a9a1df1-7fd4-48ae-b1dc-fa57296d3f14", 00:17:08.589 "is_configured": true, 00:17:08.589 "data_offset": 0, 00:17:08.589 "data_size": 65536 00:17:08.589 }, 00:17:08.589 { 00:17:08.589 "name": "BaseBdev3", 00:17:08.589 "uuid": "384862f2-c8c5-49ed-846b-262900110b73", 00:17:08.589 "is_configured": true, 00:17:08.589 "data_offset": 0, 00:17:08.589 "data_size": 65536 00:17:08.589 } 00:17:08.589 ] 00:17:08.589 }' 00:17:08.589 10:59:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:08.589 10:59:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:09.157 10:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:09.157 10:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:09.157 10:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:09.157 10:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:09.157 10:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 
00:17:09.157 10:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:09.157 10:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:09.157 10:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:09.416 [2024-07-25 10:59:16.381477] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:09.416 10:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:09.416 "name": "Existed_Raid", 00:17:09.416 "aliases": [ 00:17:09.416 "e32aa019-117a-4eb3-b88c-f1d70924e048" 00:17:09.416 ], 00:17:09.416 "product_name": "Raid Volume", 00:17:09.416 "block_size": 512, 00:17:09.416 "num_blocks": 196608, 00:17:09.416 "uuid": "e32aa019-117a-4eb3-b88c-f1d70924e048", 00:17:09.416 "assigned_rate_limits": { 00:17:09.416 "rw_ios_per_sec": 0, 00:17:09.416 "rw_mbytes_per_sec": 0, 00:17:09.416 "r_mbytes_per_sec": 0, 00:17:09.416 "w_mbytes_per_sec": 0 00:17:09.416 }, 00:17:09.416 "claimed": false, 00:17:09.416 "zoned": false, 00:17:09.416 "supported_io_types": { 00:17:09.416 "read": true, 00:17:09.416 "write": true, 00:17:09.416 "unmap": true, 00:17:09.416 "flush": true, 00:17:09.416 "reset": true, 00:17:09.416 "nvme_admin": false, 00:17:09.416 "nvme_io": false, 00:17:09.416 "nvme_io_md": false, 00:17:09.416 "write_zeroes": true, 00:17:09.416 "zcopy": false, 00:17:09.416 "get_zone_info": false, 00:17:09.416 "zone_management": false, 00:17:09.416 "zone_append": false, 00:17:09.416 "compare": false, 00:17:09.416 "compare_and_write": false, 00:17:09.416 "abort": false, 00:17:09.416 "seek_hole": false, 00:17:09.416 "seek_data": false, 00:17:09.416 "copy": false, 00:17:09.416 "nvme_iov_md": false 00:17:09.416 }, 00:17:09.416 "memory_domains": [ 00:17:09.416 { 00:17:09.416 "dma_device_id": "system", 00:17:09.416 
"dma_device_type": 1 00:17:09.416 }, 00:17:09.416 { 00:17:09.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.416 "dma_device_type": 2 00:17:09.416 }, 00:17:09.416 { 00:17:09.416 "dma_device_id": "system", 00:17:09.416 "dma_device_type": 1 00:17:09.416 }, 00:17:09.416 { 00:17:09.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.416 "dma_device_type": 2 00:17:09.416 }, 00:17:09.416 { 00:17:09.416 "dma_device_id": "system", 00:17:09.416 "dma_device_type": 1 00:17:09.416 }, 00:17:09.416 { 00:17:09.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.416 "dma_device_type": 2 00:17:09.416 } 00:17:09.416 ], 00:17:09.416 "driver_specific": { 00:17:09.416 "raid": { 00:17:09.416 "uuid": "e32aa019-117a-4eb3-b88c-f1d70924e048", 00:17:09.416 "strip_size_kb": 64, 00:17:09.416 "state": "online", 00:17:09.416 "raid_level": "raid0", 00:17:09.416 "superblock": false, 00:17:09.416 "num_base_bdevs": 3, 00:17:09.416 "num_base_bdevs_discovered": 3, 00:17:09.416 "num_base_bdevs_operational": 3, 00:17:09.416 "base_bdevs_list": [ 00:17:09.416 { 00:17:09.416 "name": "BaseBdev1", 00:17:09.416 "uuid": "30805a64-c687-4c91-8de5-70259b13a259", 00:17:09.416 "is_configured": true, 00:17:09.416 "data_offset": 0, 00:17:09.416 "data_size": 65536 00:17:09.416 }, 00:17:09.416 { 00:17:09.416 "name": "BaseBdev2", 00:17:09.416 "uuid": "1a9a1df1-7fd4-48ae-b1dc-fa57296d3f14", 00:17:09.416 "is_configured": true, 00:17:09.416 "data_offset": 0, 00:17:09.416 "data_size": 65536 00:17:09.416 }, 00:17:09.416 { 00:17:09.416 "name": "BaseBdev3", 00:17:09.416 "uuid": "384862f2-c8c5-49ed-846b-262900110b73", 00:17:09.416 "is_configured": true, 00:17:09.416 "data_offset": 0, 00:17:09.416 "data_size": 65536 00:17:09.416 } 00:17:09.416 ] 00:17:09.416 } 00:17:09.416 } 00:17:09.416 }' 00:17:09.416 10:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:09.416 10:59:16 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:09.416 BaseBdev2 00:17:09.416 BaseBdev3' 00:17:09.416 10:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:09.416 10:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:09.416 10:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:09.675 10:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:09.675 "name": "BaseBdev1", 00:17:09.675 "aliases": [ 00:17:09.675 "30805a64-c687-4c91-8de5-70259b13a259" 00:17:09.675 ], 00:17:09.675 "product_name": "Malloc disk", 00:17:09.675 "block_size": 512, 00:17:09.675 "num_blocks": 65536, 00:17:09.675 "uuid": "30805a64-c687-4c91-8de5-70259b13a259", 00:17:09.675 "assigned_rate_limits": { 00:17:09.675 "rw_ios_per_sec": 0, 00:17:09.675 "rw_mbytes_per_sec": 0, 00:17:09.675 "r_mbytes_per_sec": 0, 00:17:09.675 "w_mbytes_per_sec": 0 00:17:09.675 }, 00:17:09.675 "claimed": true, 00:17:09.675 "claim_type": "exclusive_write", 00:17:09.675 "zoned": false, 00:17:09.675 "supported_io_types": { 00:17:09.675 "read": true, 00:17:09.675 "write": true, 00:17:09.675 "unmap": true, 00:17:09.675 "flush": true, 00:17:09.675 "reset": true, 00:17:09.675 "nvme_admin": false, 00:17:09.675 "nvme_io": false, 00:17:09.675 "nvme_io_md": false, 00:17:09.675 "write_zeroes": true, 00:17:09.675 "zcopy": true, 00:17:09.675 "get_zone_info": false, 00:17:09.675 "zone_management": false, 00:17:09.675 "zone_append": false, 00:17:09.675 "compare": false, 00:17:09.675 "compare_and_write": false, 00:17:09.675 "abort": true, 00:17:09.675 "seek_hole": false, 00:17:09.675 "seek_data": false, 00:17:09.675 "copy": true, 00:17:09.675 "nvme_iov_md": false 00:17:09.675 }, 00:17:09.675 "memory_domains": [ 00:17:09.675 { 00:17:09.675 "dma_device_id": 
"system", 00:17:09.675 "dma_device_type": 1 00:17:09.675 }, 00:17:09.675 { 00:17:09.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.675 "dma_device_type": 2 00:17:09.675 } 00:17:09.675 ], 00:17:09.675 "driver_specific": {} 00:17:09.675 }' 00:17:09.675 10:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:09.675 10:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:09.675 10:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:09.675 10:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:09.932 10:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:09.932 10:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:09.932 10:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:09.932 10:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:09.932 10:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:09.932 10:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:09.932 10:59:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:09.932 10:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:09.932 10:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:09.932 10:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:09.932 10:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:10.190 10:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
base_bdev_info='{ 00:17:10.190 "name": "BaseBdev2", 00:17:10.190 "aliases": [ 00:17:10.190 "1a9a1df1-7fd4-48ae-b1dc-fa57296d3f14" 00:17:10.190 ], 00:17:10.190 "product_name": "Malloc disk", 00:17:10.190 "block_size": 512, 00:17:10.190 "num_blocks": 65536, 00:17:10.190 "uuid": "1a9a1df1-7fd4-48ae-b1dc-fa57296d3f14", 00:17:10.190 "assigned_rate_limits": { 00:17:10.190 "rw_ios_per_sec": 0, 00:17:10.190 "rw_mbytes_per_sec": 0, 00:17:10.190 "r_mbytes_per_sec": 0, 00:17:10.190 "w_mbytes_per_sec": 0 00:17:10.190 }, 00:17:10.190 "claimed": true, 00:17:10.190 "claim_type": "exclusive_write", 00:17:10.190 "zoned": false, 00:17:10.190 "supported_io_types": { 00:17:10.190 "read": true, 00:17:10.190 "write": true, 00:17:10.190 "unmap": true, 00:17:10.190 "flush": true, 00:17:10.190 "reset": true, 00:17:10.190 "nvme_admin": false, 00:17:10.190 "nvme_io": false, 00:17:10.190 "nvme_io_md": false, 00:17:10.190 "write_zeroes": true, 00:17:10.190 "zcopy": true, 00:17:10.190 "get_zone_info": false, 00:17:10.190 "zone_management": false, 00:17:10.190 "zone_append": false, 00:17:10.190 "compare": false, 00:17:10.190 "compare_and_write": false, 00:17:10.190 "abort": true, 00:17:10.190 "seek_hole": false, 00:17:10.190 "seek_data": false, 00:17:10.190 "copy": true, 00:17:10.190 "nvme_iov_md": false 00:17:10.190 }, 00:17:10.190 "memory_domains": [ 00:17:10.190 { 00:17:10.190 "dma_device_id": "system", 00:17:10.190 "dma_device_type": 1 00:17:10.190 }, 00:17:10.190 { 00:17:10.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.190 "dma_device_type": 2 00:17:10.190 } 00:17:10.190 ], 00:17:10.190 "driver_specific": {} 00:17:10.190 }' 00:17:10.190 10:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:10.190 10:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:10.447 10:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:10.447 10:59:17 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:10.447 10:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:10.447 10:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:10.447 10:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:10.447 10:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:10.447 10:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:10.447 10:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:10.447 10:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:10.704 10:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:10.704 10:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:10.704 10:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:10.704 10:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:10.704 10:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:10.704 "name": "BaseBdev3", 00:17:10.704 "aliases": [ 00:17:10.704 "384862f2-c8c5-49ed-846b-262900110b73" 00:17:10.704 ], 00:17:10.704 "product_name": "Malloc disk", 00:17:10.704 "block_size": 512, 00:17:10.704 "num_blocks": 65536, 00:17:10.704 "uuid": "384862f2-c8c5-49ed-846b-262900110b73", 00:17:10.704 "assigned_rate_limits": { 00:17:10.704 "rw_ios_per_sec": 0, 00:17:10.704 "rw_mbytes_per_sec": 0, 00:17:10.704 "r_mbytes_per_sec": 0, 00:17:10.704 "w_mbytes_per_sec": 0 00:17:10.704 }, 00:17:10.704 "claimed": true, 00:17:10.704 "claim_type": "exclusive_write", 00:17:10.704 "zoned": false, 
00:17:10.704 "supported_io_types": { 00:17:10.704 "read": true, 00:17:10.704 "write": true, 00:17:10.704 "unmap": true, 00:17:10.704 "flush": true, 00:17:10.704 "reset": true, 00:17:10.704 "nvme_admin": false, 00:17:10.704 "nvme_io": false, 00:17:10.704 "nvme_io_md": false, 00:17:10.704 "write_zeroes": true, 00:17:10.704 "zcopy": true, 00:17:10.704 "get_zone_info": false, 00:17:10.704 "zone_management": false, 00:17:10.704 "zone_append": false, 00:17:10.704 "compare": false, 00:17:10.704 "compare_and_write": false, 00:17:10.704 "abort": true, 00:17:10.704 "seek_hole": false, 00:17:10.704 "seek_data": false, 00:17:10.704 "copy": true, 00:17:10.704 "nvme_iov_md": false 00:17:10.704 }, 00:17:10.704 "memory_domains": [ 00:17:10.704 { 00:17:10.704 "dma_device_id": "system", 00:17:10.704 "dma_device_type": 1 00:17:10.704 }, 00:17:10.704 { 00:17:10.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.704 "dma_device_type": 2 00:17:10.704 } 00:17:10.704 ], 00:17:10.704 "driver_specific": {} 00:17:10.704 }' 00:17:10.704 10:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:10.962 10:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:10.962 10:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:10.962 10:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:10.962 10:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:10.962 10:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:10.962 10:59:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:10.962 10:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:11.219 10:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:11.219 10:59:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:11.219 10:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:11.219 10:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:11.219 10:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:11.477 [2024-07-25 10:59:18.362555] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:11.477 [2024-07-25 10:59:18.362590] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:11.477 [2024-07-25 10:59:18.362655] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:11.477 10:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:11.477 10:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:17:11.477 10:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:11.477 10:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:11.477 10:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:17:11.477 10:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:17:11.477 10:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:11.477 10:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:17:11.477 10:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:11.477 10:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:11.477 10:59:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:11.477 10:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:11.477 10:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:11.477 10:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:11.477 10:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:11.477 10:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.477 10:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:11.735 10:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:11.735 "name": "Existed_Raid", 00:17:11.735 "uuid": "e32aa019-117a-4eb3-b88c-f1d70924e048", 00:17:11.735 "strip_size_kb": 64, 00:17:11.735 "state": "offline", 00:17:11.735 "raid_level": "raid0", 00:17:11.735 "superblock": false, 00:17:11.735 "num_base_bdevs": 3, 00:17:11.735 "num_base_bdevs_discovered": 2, 00:17:11.735 "num_base_bdevs_operational": 2, 00:17:11.735 "base_bdevs_list": [ 00:17:11.735 { 00:17:11.735 "name": null, 00:17:11.735 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.735 "is_configured": false, 00:17:11.735 "data_offset": 0, 00:17:11.735 "data_size": 65536 00:17:11.735 }, 00:17:11.735 { 00:17:11.735 "name": "BaseBdev2", 00:17:11.735 "uuid": "1a9a1df1-7fd4-48ae-b1dc-fa57296d3f14", 00:17:11.735 "is_configured": true, 00:17:11.735 "data_offset": 0, 00:17:11.735 "data_size": 65536 00:17:11.735 }, 00:17:11.735 { 00:17:11.735 "name": "BaseBdev3", 00:17:11.736 "uuid": "384862f2-c8c5-49ed-846b-262900110b73", 00:17:11.736 "is_configured": true, 00:17:11.736 "data_offset": 0, 00:17:11.736 
"data_size": 65536 00:17:11.736 } 00:17:11.736 ] 00:17:11.736 }' 00:17:11.736 10:59:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:11.736 10:59:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:12.302 10:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:12.302 10:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:12.302 10:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:12.302 10:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:12.560 10:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:12.560 10:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:12.560 10:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:12.560 [2024-07-25 10:59:19.658044] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:12.817 10:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:12.817 10:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:12.817 10:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:12.817 10:59:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:13.075 10:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:13.075 10:59:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:13.075 10:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:13.332 [2024-07-25 10:59:20.245255] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:13.333 [2024-07-25 10:59:20.245321] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:13.333 10:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:13.333 10:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:13.333 10:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.333 10:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:13.590 10:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:13.590 10:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:13.590 10:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:17:13.590 10:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:17:13.590 10:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:13.590 10:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:13.847 BaseBdev2 00:17:13.847 10:59:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:17:13.848 10:59:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:13.848 10:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:13.848 10:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:13.848 10:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:13.848 10:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:13.848 10:59:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:14.105 10:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:14.362 [ 00:17:14.362 { 00:17:14.362 "name": "BaseBdev2", 00:17:14.362 "aliases": [ 00:17:14.362 "0c100a63-2591-4b20-941b-9c3c646defc7" 00:17:14.362 ], 00:17:14.362 "product_name": "Malloc disk", 00:17:14.362 "block_size": 512, 00:17:14.362 "num_blocks": 65536, 00:17:14.362 "uuid": "0c100a63-2591-4b20-941b-9c3c646defc7", 00:17:14.362 "assigned_rate_limits": { 00:17:14.362 "rw_ios_per_sec": 0, 00:17:14.362 "rw_mbytes_per_sec": 0, 00:17:14.362 "r_mbytes_per_sec": 0, 00:17:14.362 "w_mbytes_per_sec": 0 00:17:14.362 }, 00:17:14.362 "claimed": false, 00:17:14.362 "zoned": false, 00:17:14.362 "supported_io_types": { 00:17:14.362 "read": true, 00:17:14.362 "write": true, 00:17:14.362 "unmap": true, 00:17:14.362 "flush": true, 00:17:14.362 "reset": true, 00:17:14.362 "nvme_admin": false, 00:17:14.362 "nvme_io": false, 00:17:14.362 "nvme_io_md": false, 00:17:14.362 "write_zeroes": true, 00:17:14.362 "zcopy": true, 00:17:14.362 "get_zone_info": false, 00:17:14.362 "zone_management": false, 00:17:14.362 "zone_append": false, 
00:17:14.362 "compare": false, 00:17:14.362 "compare_and_write": false, 00:17:14.362 "abort": true, 00:17:14.362 "seek_hole": false, 00:17:14.362 "seek_data": false, 00:17:14.362 "copy": true, 00:17:14.363 "nvme_iov_md": false 00:17:14.363 }, 00:17:14.363 "memory_domains": [ 00:17:14.363 { 00:17:14.363 "dma_device_id": "system", 00:17:14.363 "dma_device_type": 1 00:17:14.363 }, 00:17:14.363 { 00:17:14.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.363 "dma_device_type": 2 00:17:14.363 } 00:17:14.363 ], 00:17:14.363 "driver_specific": {} 00:17:14.363 } 00:17:14.363 ] 00:17:14.363 10:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:14.363 10:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:14.363 10:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:14.363 10:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:14.621 BaseBdev3 00:17:14.621 10:59:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:17:14.621 10:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:14.621 10:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:14.621 10:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:14.621 10:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:14.621 10:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:14.621 10:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_wait_for_examine 00:17:14.910 10:59:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:14.910 [ 00:17:14.910 { 00:17:14.910 "name": "BaseBdev3", 00:17:14.910 "aliases": [ 00:17:14.910 "823d27bc-a237-40b6-981d-268d8bf4c350" 00:17:14.910 ], 00:17:14.910 "product_name": "Malloc disk", 00:17:14.910 "block_size": 512, 00:17:14.910 "num_blocks": 65536, 00:17:14.910 "uuid": "823d27bc-a237-40b6-981d-268d8bf4c350", 00:17:14.910 "assigned_rate_limits": { 00:17:14.910 "rw_ios_per_sec": 0, 00:17:14.910 "rw_mbytes_per_sec": 0, 00:17:14.910 "r_mbytes_per_sec": 0, 00:17:14.910 "w_mbytes_per_sec": 0 00:17:14.910 }, 00:17:14.910 "claimed": false, 00:17:14.910 "zoned": false, 00:17:14.910 "supported_io_types": { 00:17:14.910 "read": true, 00:17:14.910 "write": true, 00:17:14.910 "unmap": true, 00:17:14.910 "flush": true, 00:17:14.910 "reset": true, 00:17:14.910 "nvme_admin": false, 00:17:14.910 "nvme_io": false, 00:17:14.910 "nvme_io_md": false, 00:17:14.910 "write_zeroes": true, 00:17:14.910 "zcopy": true, 00:17:14.910 "get_zone_info": false, 00:17:14.910 "zone_management": false, 00:17:14.910 "zone_append": false, 00:17:14.910 "compare": false, 00:17:14.910 "compare_and_write": false, 00:17:14.910 "abort": true, 00:17:14.910 "seek_hole": false, 00:17:14.910 "seek_data": false, 00:17:14.910 "copy": true, 00:17:14.910 "nvme_iov_md": false 00:17:14.910 }, 00:17:14.910 "memory_domains": [ 00:17:14.910 { 00:17:14.911 "dma_device_id": "system", 00:17:14.911 "dma_device_type": 1 00:17:14.911 }, 00:17:14.911 { 00:17:14.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.911 "dma_device_type": 2 00:17:14.911 } 00:17:14.911 ], 00:17:14.911 "driver_specific": {} 00:17:14.911 } 00:17:14.911 ] 00:17:14.911 10:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:14.911 10:59:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:14.911 10:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:14.911 10:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:15.169 [2024-07-25 10:59:22.220332] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:15.169 [2024-07-25 10:59:22.220381] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:15.169 [2024-07-25 10:59:22.220414] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:15.169 [2024-07-25 10:59:22.222715] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:15.169 10:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:15.169 10:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:15.169 10:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:15.169 10:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:15.169 10:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:15.169 10:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:15.169 10:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:15.169 10:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:15.169 10:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:17:15.169 10:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:15.169 10:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.169 10:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:15.427 10:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:15.427 "name": "Existed_Raid", 00:17:15.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.427 "strip_size_kb": 64, 00:17:15.427 "state": "configuring", 00:17:15.427 "raid_level": "raid0", 00:17:15.427 "superblock": false, 00:17:15.427 "num_base_bdevs": 3, 00:17:15.427 "num_base_bdevs_discovered": 2, 00:17:15.427 "num_base_bdevs_operational": 3, 00:17:15.427 "base_bdevs_list": [ 00:17:15.427 { 00:17:15.427 "name": "BaseBdev1", 00:17:15.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.427 "is_configured": false, 00:17:15.427 "data_offset": 0, 00:17:15.427 "data_size": 0 00:17:15.427 }, 00:17:15.427 { 00:17:15.427 "name": "BaseBdev2", 00:17:15.427 "uuid": "0c100a63-2591-4b20-941b-9c3c646defc7", 00:17:15.427 "is_configured": true, 00:17:15.427 "data_offset": 0, 00:17:15.427 "data_size": 65536 00:17:15.427 }, 00:17:15.427 { 00:17:15.427 "name": "BaseBdev3", 00:17:15.427 "uuid": "823d27bc-a237-40b6-981d-268d8bf4c350", 00:17:15.427 "is_configured": true, 00:17:15.427 "data_offset": 0, 00:17:15.427 "data_size": 65536 00:17:15.427 } 00:17:15.427 ] 00:17:15.427 }' 00:17:15.427 10:59:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:15.427 10:59:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.992 10:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:16.250 [2024-07-25 10:59:23.279169] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:16.250 10:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:16.250 10:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:16.250 10:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:16.250 10:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:16.250 10:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:16.250 10:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:16.250 10:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:16.250 10:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:16.250 10:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:16.250 10:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:16.251 10:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.251 10:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:16.507 10:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:16.507 "name": "Existed_Raid", 00:17:16.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.507 "strip_size_kb": 64, 00:17:16.507 "state": "configuring", 
00:17:16.507 "raid_level": "raid0", 00:17:16.507 "superblock": false, 00:17:16.507 "num_base_bdevs": 3, 00:17:16.507 "num_base_bdevs_discovered": 1, 00:17:16.507 "num_base_bdevs_operational": 3, 00:17:16.507 "base_bdevs_list": [ 00:17:16.507 { 00:17:16.507 "name": "BaseBdev1", 00:17:16.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.507 "is_configured": false, 00:17:16.507 "data_offset": 0, 00:17:16.507 "data_size": 0 00:17:16.507 }, 00:17:16.507 { 00:17:16.507 "name": null, 00:17:16.507 "uuid": "0c100a63-2591-4b20-941b-9c3c646defc7", 00:17:16.507 "is_configured": false, 00:17:16.507 "data_offset": 0, 00:17:16.507 "data_size": 65536 00:17:16.507 }, 00:17:16.507 { 00:17:16.507 "name": "BaseBdev3", 00:17:16.507 "uuid": "823d27bc-a237-40b6-981d-268d8bf4c350", 00:17:16.507 "is_configured": true, 00:17:16.507 "data_offset": 0, 00:17:16.507 "data_size": 65536 00:17:16.507 } 00:17:16.507 ] 00:17:16.507 }' 00:17:16.507 10:59:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:16.507 10:59:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.071 10:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.071 10:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:17.329 10:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:17:17.329 10:59:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:17.587 [2024-07-25 10:59:24.560420] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:17.587 BaseBdev1 00:17:17.587 10:59:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:17:17.587 10:59:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:17.587 10:59:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:17.587 10:59:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:17.587 10:59:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:17.587 10:59:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:17.587 10:59:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:17.844 10:59:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:18.102 [ 00:17:18.102 { 00:17:18.102 "name": "BaseBdev1", 00:17:18.102 "aliases": [ 00:17:18.102 "b49db9da-6269-4203-8545-8e1154bb335a" 00:17:18.102 ], 00:17:18.102 "product_name": "Malloc disk", 00:17:18.102 "block_size": 512, 00:17:18.102 "num_blocks": 65536, 00:17:18.102 "uuid": "b49db9da-6269-4203-8545-8e1154bb335a", 00:17:18.102 "assigned_rate_limits": { 00:17:18.102 "rw_ios_per_sec": 0, 00:17:18.102 "rw_mbytes_per_sec": 0, 00:17:18.102 "r_mbytes_per_sec": 0, 00:17:18.102 "w_mbytes_per_sec": 0 00:17:18.102 }, 00:17:18.102 "claimed": true, 00:17:18.102 "claim_type": "exclusive_write", 00:17:18.102 "zoned": false, 00:17:18.102 "supported_io_types": { 00:17:18.102 "read": true, 00:17:18.102 "write": true, 00:17:18.102 "unmap": true, 00:17:18.102 "flush": true, 00:17:18.102 "reset": true, 00:17:18.102 "nvme_admin": false, 00:17:18.102 "nvme_io": false, 00:17:18.102 "nvme_io_md": false, 00:17:18.102 "write_zeroes": true, 00:17:18.102 "zcopy": 
true, 00:17:18.102 "get_zone_info": false, 00:17:18.102 "zone_management": false, 00:17:18.102 "zone_append": false, 00:17:18.102 "compare": false, 00:17:18.102 "compare_and_write": false, 00:17:18.102 "abort": true, 00:17:18.102 "seek_hole": false, 00:17:18.102 "seek_data": false, 00:17:18.102 "copy": true, 00:17:18.102 "nvme_iov_md": false 00:17:18.102 }, 00:17:18.102 "memory_domains": [ 00:17:18.102 { 00:17:18.102 "dma_device_id": "system", 00:17:18.102 "dma_device_type": 1 00:17:18.102 }, 00:17:18.102 { 00:17:18.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.102 "dma_device_type": 2 00:17:18.102 } 00:17:18.102 ], 00:17:18.102 "driver_specific": {} 00:17:18.102 } 00:17:18.102 ] 00:17:18.102 10:59:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:18.102 10:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:18.102 10:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:18.102 10:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:18.102 10:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:18.102 10:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:18.102 10:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:18.102 10:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:18.102 10:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:18.102 10:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:18.102 10:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:18.102 10:59:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.103 10:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:18.360 10:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:18.360 "name": "Existed_Raid", 00:17:18.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.360 "strip_size_kb": 64, 00:17:18.360 "state": "configuring", 00:17:18.360 "raid_level": "raid0", 00:17:18.360 "superblock": false, 00:17:18.360 "num_base_bdevs": 3, 00:17:18.360 "num_base_bdevs_discovered": 2, 00:17:18.360 "num_base_bdevs_operational": 3, 00:17:18.360 "base_bdevs_list": [ 00:17:18.360 { 00:17:18.360 "name": "BaseBdev1", 00:17:18.360 "uuid": "b49db9da-6269-4203-8545-8e1154bb335a", 00:17:18.360 "is_configured": true, 00:17:18.360 "data_offset": 0, 00:17:18.360 "data_size": 65536 00:17:18.360 }, 00:17:18.360 { 00:17:18.360 "name": null, 00:17:18.360 "uuid": "0c100a63-2591-4b20-941b-9c3c646defc7", 00:17:18.360 "is_configured": false, 00:17:18.360 "data_offset": 0, 00:17:18.360 "data_size": 65536 00:17:18.360 }, 00:17:18.360 { 00:17:18.360 "name": "BaseBdev3", 00:17:18.360 "uuid": "823d27bc-a237-40b6-981d-268d8bf4c350", 00:17:18.360 "is_configured": true, 00:17:18.360 "data_offset": 0, 00:17:18.361 "data_size": 65536 00:17:18.361 } 00:17:18.361 ] 00:17:18.361 }' 00:17:18.361 10:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:18.361 10:59:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.925 10:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:18.925 10:59:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.182 10:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:17:19.182 10:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:17:19.182 [2024-07-25 10:59:26.289210] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:19.441 10:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:19.441 10:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:19.441 10:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:19.441 10:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:19.441 10:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:19.441 10:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:19.441 10:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:19.441 10:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:19.441 10:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:19.441 10:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:19.441 10:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.441 10:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:19.441 10:59:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:19.441 "name": "Existed_Raid", 00:17:19.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.441 "strip_size_kb": 64, 00:17:19.441 "state": "configuring", 00:17:19.441 "raid_level": "raid0", 00:17:19.441 "superblock": false, 00:17:19.441 "num_base_bdevs": 3, 00:17:19.441 "num_base_bdevs_discovered": 1, 00:17:19.441 "num_base_bdevs_operational": 3, 00:17:19.441 "base_bdevs_list": [ 00:17:19.441 { 00:17:19.441 "name": "BaseBdev1", 00:17:19.441 "uuid": "b49db9da-6269-4203-8545-8e1154bb335a", 00:17:19.441 "is_configured": true, 00:17:19.441 "data_offset": 0, 00:17:19.441 "data_size": 65536 00:17:19.441 }, 00:17:19.441 { 00:17:19.441 "name": null, 00:17:19.441 "uuid": "0c100a63-2591-4b20-941b-9c3c646defc7", 00:17:19.441 "is_configured": false, 00:17:19.441 "data_offset": 0, 00:17:19.441 "data_size": 65536 00:17:19.441 }, 00:17:19.441 { 00:17:19.441 "name": null, 00:17:19.441 "uuid": "823d27bc-a237-40b6-981d-268d8bf4c350", 00:17:19.441 "is_configured": false, 00:17:19.441 "data_offset": 0, 00:17:19.441 "data_size": 65536 00:17:19.441 } 00:17:19.441 ] 00:17:19.441 }' 00:17:19.441 10:59:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:19.441 10:59:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.377 10:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.377 10:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:20.377 10:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:17:20.377 10:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:20.636 [2024-07-25 10:59:27.564680] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:20.636 10:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:20.636 10:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:20.636 10:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:20.636 10:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:20.636 10:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:20.636 10:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:20.636 10:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:20.636 10:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:20.636 10:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:20.636 10:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:20.636 10:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.636 10:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:20.895 10:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:20.895 "name": "Existed_Raid", 00:17:20.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.895 "strip_size_kb": 64, 00:17:20.895 "state": "configuring", 00:17:20.895 "raid_level": "raid0", 00:17:20.895 
"superblock": false, 00:17:20.895 "num_base_bdevs": 3, 00:17:20.895 "num_base_bdevs_discovered": 2, 00:17:20.895 "num_base_bdevs_operational": 3, 00:17:20.895 "base_bdevs_list": [ 00:17:20.895 { 00:17:20.895 "name": "BaseBdev1", 00:17:20.895 "uuid": "b49db9da-6269-4203-8545-8e1154bb335a", 00:17:20.895 "is_configured": true, 00:17:20.895 "data_offset": 0, 00:17:20.895 "data_size": 65536 00:17:20.895 }, 00:17:20.895 { 00:17:20.895 "name": null, 00:17:20.895 "uuid": "0c100a63-2591-4b20-941b-9c3c646defc7", 00:17:20.895 "is_configured": false, 00:17:20.895 "data_offset": 0, 00:17:20.895 "data_size": 65536 00:17:20.895 }, 00:17:20.895 { 00:17:20.895 "name": "BaseBdev3", 00:17:20.895 "uuid": "823d27bc-a237-40b6-981d-268d8bf4c350", 00:17:20.895 "is_configured": true, 00:17:20.895 "data_offset": 0, 00:17:20.895 "data_size": 65536 00:17:20.895 } 00:17:20.895 ] 00:17:20.895 }' 00:17:20.895 10:59:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:20.895 10:59:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.464 10:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:21.464 10:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:21.723 10:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:17:21.723 10:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:21.983 [2024-07-25 10:59:28.848210] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:21.983 10:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:21.983 
10:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:21.983 10:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:21.983 10:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:21.983 10:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:21.983 10:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:21.983 10:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:21.983 10:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:21.983 10:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:21.983 10:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:21.983 10:59:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:21.983 10:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:22.550 10:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:22.550 "name": "Existed_Raid", 00:17:22.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.550 "strip_size_kb": 64, 00:17:22.550 "state": "configuring", 00:17:22.550 "raid_level": "raid0", 00:17:22.550 "superblock": false, 00:17:22.550 "num_base_bdevs": 3, 00:17:22.550 "num_base_bdevs_discovered": 1, 00:17:22.550 "num_base_bdevs_operational": 3, 00:17:22.550 "base_bdevs_list": [ 00:17:22.550 { 00:17:22.550 "name": null, 00:17:22.550 "uuid": "b49db9da-6269-4203-8545-8e1154bb335a", 00:17:22.550 "is_configured": false, 00:17:22.550 
"data_offset": 0, 00:17:22.550 "data_size": 65536 00:17:22.550 }, 00:17:22.550 { 00:17:22.550 "name": null, 00:17:22.550 "uuid": "0c100a63-2591-4b20-941b-9c3c646defc7", 00:17:22.550 "is_configured": false, 00:17:22.550 "data_offset": 0, 00:17:22.550 "data_size": 65536 00:17:22.550 }, 00:17:22.550 { 00:17:22.550 "name": "BaseBdev3", 00:17:22.550 "uuid": "823d27bc-a237-40b6-981d-268d8bf4c350", 00:17:22.550 "is_configured": true, 00:17:22.550 "data_offset": 0, 00:17:22.550 "data_size": 65536 00:17:22.550 } 00:17:22.550 ] 00:17:22.550 }' 00:17:22.550 10:59:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:22.550 10:59:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.118 10:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.118 10:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:23.377 10:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:17:23.377 10:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:23.635 [2024-07-25 10:59:30.535844] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:23.635 10:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:23.635 10:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:23.635 10:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:23.635 10:59:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:23.635 10:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:23.635 10:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:23.636 10:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:23.636 10:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:23.636 10:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:23.636 10:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:23.636 10:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.636 10:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:23.893 10:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:23.893 "name": "Existed_Raid", 00:17:23.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.893 "strip_size_kb": 64, 00:17:23.893 "state": "configuring", 00:17:23.893 "raid_level": "raid0", 00:17:23.893 "superblock": false, 00:17:23.893 "num_base_bdevs": 3, 00:17:23.893 "num_base_bdevs_discovered": 2, 00:17:23.893 "num_base_bdevs_operational": 3, 00:17:23.893 "base_bdevs_list": [ 00:17:23.893 { 00:17:23.893 "name": null, 00:17:23.893 "uuid": "b49db9da-6269-4203-8545-8e1154bb335a", 00:17:23.893 "is_configured": false, 00:17:23.893 "data_offset": 0, 00:17:23.893 "data_size": 65536 00:17:23.893 }, 00:17:23.893 { 00:17:23.893 "name": "BaseBdev2", 00:17:23.893 "uuid": "0c100a63-2591-4b20-941b-9c3c646defc7", 00:17:23.893 "is_configured": true, 00:17:23.893 "data_offset": 0, 00:17:23.893 "data_size": 65536 00:17:23.893 }, 
00:17:23.893 { 00:17:23.893 "name": "BaseBdev3", 00:17:23.893 "uuid": "823d27bc-a237-40b6-981d-268d8bf4c350", 00:17:23.893 "is_configured": true, 00:17:23.893 "data_offset": 0, 00:17:23.893 "data_size": 65536 00:17:23.893 } 00:17:23.893 ] 00:17:23.893 }' 00:17:23.893 10:59:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:23.893 10:59:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.460 10:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:24.460 10:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:24.717 10:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:17:24.717 10:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:24.717 10:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:24.717 10:59:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u b49db9da-6269-4203-8545-8e1154bb335a 00:17:24.975 [2024-07-25 10:59:32.073740] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:24.975 [2024-07-25 10:59:32.073784] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:24.975 [2024-07-25 10:59:32.073799] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:24.975 [2024-07-25 10:59:32.074108] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:17:24.975 
[2024-07-25 10:59:32.074347] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:24.975 [2024-07-25 10:59:32.074363] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:24.975 [2024-07-25 10:59:32.074684] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.975 NewBaseBdev 00:17:25.233 10:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:17:25.233 10:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:17:25.233 10:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:25.233 10:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:17:25.233 10:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:25.233 10:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:25.233 10:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:25.233 10:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:25.492 [ 00:17:25.492 { 00:17:25.492 "name": "NewBaseBdev", 00:17:25.492 "aliases": [ 00:17:25.492 "b49db9da-6269-4203-8545-8e1154bb335a" 00:17:25.492 ], 00:17:25.492 "product_name": "Malloc disk", 00:17:25.492 "block_size": 512, 00:17:25.492 "num_blocks": 65536, 00:17:25.492 "uuid": "b49db9da-6269-4203-8545-8e1154bb335a", 00:17:25.492 "assigned_rate_limits": { 00:17:25.492 "rw_ios_per_sec": 0, 00:17:25.492 "rw_mbytes_per_sec": 0, 00:17:25.492 "r_mbytes_per_sec": 0, 
00:17:25.492 "w_mbytes_per_sec": 0 00:17:25.492 }, 00:17:25.492 "claimed": true, 00:17:25.492 "claim_type": "exclusive_write", 00:17:25.492 "zoned": false, 00:17:25.492 "supported_io_types": { 00:17:25.492 "read": true, 00:17:25.492 "write": true, 00:17:25.492 "unmap": true, 00:17:25.492 "flush": true, 00:17:25.492 "reset": true, 00:17:25.492 "nvme_admin": false, 00:17:25.492 "nvme_io": false, 00:17:25.492 "nvme_io_md": false, 00:17:25.492 "write_zeroes": true, 00:17:25.492 "zcopy": true, 00:17:25.492 "get_zone_info": false, 00:17:25.492 "zone_management": false, 00:17:25.492 "zone_append": false, 00:17:25.492 "compare": false, 00:17:25.492 "compare_and_write": false, 00:17:25.492 "abort": true, 00:17:25.492 "seek_hole": false, 00:17:25.492 "seek_data": false, 00:17:25.492 "copy": true, 00:17:25.492 "nvme_iov_md": false 00:17:25.492 }, 00:17:25.492 "memory_domains": [ 00:17:25.492 { 00:17:25.492 "dma_device_id": "system", 00:17:25.492 "dma_device_type": 1 00:17:25.492 }, 00:17:25.492 { 00:17:25.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.492 "dma_device_type": 2 00:17:25.492 } 00:17:25.492 ], 00:17:25.492 "driver_specific": {} 00:17:25.492 } 00:17:25.492 ] 00:17:25.492 10:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:17:25.492 10:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:17:25.492 10:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:25.492 10:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:25.492 10:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:25.492 10:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:25.492 10:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:17:25.492 10:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:25.492 10:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:25.492 10:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:25.492 10:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:25.493 10:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.493 10:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.751 10:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:25.751 "name": "Existed_Raid", 00:17:25.751 "uuid": "dc96edfc-d5f7-48c7-bdb8-13a0b4eabbc3", 00:17:25.751 "strip_size_kb": 64, 00:17:25.751 "state": "online", 00:17:25.751 "raid_level": "raid0", 00:17:25.751 "superblock": false, 00:17:25.751 "num_base_bdevs": 3, 00:17:25.751 "num_base_bdevs_discovered": 3, 00:17:25.751 "num_base_bdevs_operational": 3, 00:17:25.752 "base_bdevs_list": [ 00:17:25.752 { 00:17:25.752 "name": "NewBaseBdev", 00:17:25.752 "uuid": "b49db9da-6269-4203-8545-8e1154bb335a", 00:17:25.752 "is_configured": true, 00:17:25.752 "data_offset": 0, 00:17:25.752 "data_size": 65536 00:17:25.752 }, 00:17:25.752 { 00:17:25.752 "name": "BaseBdev2", 00:17:25.752 "uuid": "0c100a63-2591-4b20-941b-9c3c646defc7", 00:17:25.752 "is_configured": true, 00:17:25.752 "data_offset": 0, 00:17:25.752 "data_size": 65536 00:17:25.752 }, 00:17:25.752 { 00:17:25.752 "name": "BaseBdev3", 00:17:25.752 "uuid": "823d27bc-a237-40b6-981d-268d8bf4c350", 00:17:25.752 "is_configured": true, 00:17:25.752 "data_offset": 0, 00:17:25.752 "data_size": 65536 00:17:25.752 } 00:17:25.752 ] 00:17:25.752 }' 
00:17:25.752 10:59:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:25.752 10:59:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.320 10:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:17:26.320 10:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:26.320 10:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:26.320 10:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:26.320 10:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:26.320 10:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:26.320 10:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:26.320 10:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:26.579 [2024-07-25 10:59:33.566249] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:26.579 10:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:26.579 "name": "Existed_Raid", 00:17:26.579 "aliases": [ 00:17:26.579 "dc96edfc-d5f7-48c7-bdb8-13a0b4eabbc3" 00:17:26.579 ], 00:17:26.579 "product_name": "Raid Volume", 00:17:26.579 "block_size": 512, 00:17:26.579 "num_blocks": 196608, 00:17:26.579 "uuid": "dc96edfc-d5f7-48c7-bdb8-13a0b4eabbc3", 00:17:26.579 "assigned_rate_limits": { 00:17:26.579 "rw_ios_per_sec": 0, 00:17:26.579 "rw_mbytes_per_sec": 0, 00:17:26.579 "r_mbytes_per_sec": 0, 00:17:26.579 "w_mbytes_per_sec": 0 00:17:26.579 }, 00:17:26.579 "claimed": false, 00:17:26.579 "zoned": false, 00:17:26.579 "supported_io_types": 
{ 00:17:26.579 "read": true, 00:17:26.579 "write": true, 00:17:26.579 "unmap": true, 00:17:26.579 "flush": true, 00:17:26.579 "reset": true, 00:17:26.579 "nvme_admin": false, 00:17:26.579 "nvme_io": false, 00:17:26.579 "nvme_io_md": false, 00:17:26.579 "write_zeroes": true, 00:17:26.579 "zcopy": false, 00:17:26.579 "get_zone_info": false, 00:17:26.579 "zone_management": false, 00:17:26.579 "zone_append": false, 00:17:26.579 "compare": false, 00:17:26.579 "compare_and_write": false, 00:17:26.579 "abort": false, 00:17:26.579 "seek_hole": false, 00:17:26.579 "seek_data": false, 00:17:26.579 "copy": false, 00:17:26.579 "nvme_iov_md": false 00:17:26.579 }, 00:17:26.579 "memory_domains": [ 00:17:26.579 { 00:17:26.579 "dma_device_id": "system", 00:17:26.579 "dma_device_type": 1 00:17:26.579 }, 00:17:26.579 { 00:17:26.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.579 "dma_device_type": 2 00:17:26.579 }, 00:17:26.579 { 00:17:26.579 "dma_device_id": "system", 00:17:26.579 "dma_device_type": 1 00:17:26.579 }, 00:17:26.579 { 00:17:26.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.579 "dma_device_type": 2 00:17:26.579 }, 00:17:26.579 { 00:17:26.579 "dma_device_id": "system", 00:17:26.579 "dma_device_type": 1 00:17:26.579 }, 00:17:26.579 { 00:17:26.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.579 "dma_device_type": 2 00:17:26.579 } 00:17:26.579 ], 00:17:26.579 "driver_specific": { 00:17:26.579 "raid": { 00:17:26.579 "uuid": "dc96edfc-d5f7-48c7-bdb8-13a0b4eabbc3", 00:17:26.579 "strip_size_kb": 64, 00:17:26.579 "state": "online", 00:17:26.579 "raid_level": "raid0", 00:17:26.579 "superblock": false, 00:17:26.579 "num_base_bdevs": 3, 00:17:26.579 "num_base_bdevs_discovered": 3, 00:17:26.579 "num_base_bdevs_operational": 3, 00:17:26.579 "base_bdevs_list": [ 00:17:26.579 { 00:17:26.579 "name": "NewBaseBdev", 00:17:26.579 "uuid": "b49db9da-6269-4203-8545-8e1154bb335a", 00:17:26.579 "is_configured": true, 00:17:26.579 "data_offset": 0, 00:17:26.579 
"data_size": 65536 00:17:26.579 }, 00:17:26.579 { 00:17:26.579 "name": "BaseBdev2", 00:17:26.579 "uuid": "0c100a63-2591-4b20-941b-9c3c646defc7", 00:17:26.579 "is_configured": true, 00:17:26.579 "data_offset": 0, 00:17:26.579 "data_size": 65536 00:17:26.579 }, 00:17:26.579 { 00:17:26.579 "name": "BaseBdev3", 00:17:26.579 "uuid": "823d27bc-a237-40b6-981d-268d8bf4c350", 00:17:26.579 "is_configured": true, 00:17:26.579 "data_offset": 0, 00:17:26.579 "data_size": 65536 00:17:26.579 } 00:17:26.579 ] 00:17:26.579 } 00:17:26.579 } 00:17:26.579 }' 00:17:26.579 10:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:26.579 10:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:17:26.579 BaseBdev2 00:17:26.579 BaseBdev3' 00:17:26.579 10:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:26.579 10:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:17:26.579 10:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:26.839 10:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:26.839 "name": "NewBaseBdev", 00:17:26.839 "aliases": [ 00:17:26.839 "b49db9da-6269-4203-8545-8e1154bb335a" 00:17:26.839 ], 00:17:26.839 "product_name": "Malloc disk", 00:17:26.839 "block_size": 512, 00:17:26.839 "num_blocks": 65536, 00:17:26.839 "uuid": "b49db9da-6269-4203-8545-8e1154bb335a", 00:17:26.839 "assigned_rate_limits": { 00:17:26.839 "rw_ios_per_sec": 0, 00:17:26.839 "rw_mbytes_per_sec": 0, 00:17:26.839 "r_mbytes_per_sec": 0, 00:17:26.839 "w_mbytes_per_sec": 0 00:17:26.839 }, 00:17:26.839 "claimed": true, 00:17:26.839 "claim_type": "exclusive_write", 00:17:26.839 
"zoned": false, 00:17:26.839 "supported_io_types": { 00:17:26.839 "read": true, 00:17:26.839 "write": true, 00:17:26.839 "unmap": true, 00:17:26.839 "flush": true, 00:17:26.839 "reset": true, 00:17:26.839 "nvme_admin": false, 00:17:26.839 "nvme_io": false, 00:17:26.839 "nvme_io_md": false, 00:17:26.839 "write_zeroes": true, 00:17:26.839 "zcopy": true, 00:17:26.839 "get_zone_info": false, 00:17:26.839 "zone_management": false, 00:17:26.839 "zone_append": false, 00:17:26.839 "compare": false, 00:17:26.839 "compare_and_write": false, 00:17:26.839 "abort": true, 00:17:26.839 "seek_hole": false, 00:17:26.839 "seek_data": false, 00:17:26.839 "copy": true, 00:17:26.839 "nvme_iov_md": false 00:17:26.839 }, 00:17:26.839 "memory_domains": [ 00:17:26.839 { 00:17:26.839 "dma_device_id": "system", 00:17:26.839 "dma_device_type": 1 00:17:26.839 }, 00:17:26.839 { 00:17:26.839 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.839 "dma_device_type": 2 00:17:26.839 } 00:17:26.839 ], 00:17:26.839 "driver_specific": {} 00:17:26.839 }' 00:17:26.839 10:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:26.839 10:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:26.839 10:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:26.839 10:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:27.098 10:59:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:27.098 10:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:27.098 10:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:27.098 10:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:27.098 10:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:27.098 10:59:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:27.098 10:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:27.098 10:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:27.098 10:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:27.098 10:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:27.098 10:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:27.358 10:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:27.358 "name": "BaseBdev2", 00:17:27.358 "aliases": [ 00:17:27.358 "0c100a63-2591-4b20-941b-9c3c646defc7" 00:17:27.358 ], 00:17:27.358 "product_name": "Malloc disk", 00:17:27.358 "block_size": 512, 00:17:27.358 "num_blocks": 65536, 00:17:27.358 "uuid": "0c100a63-2591-4b20-941b-9c3c646defc7", 00:17:27.358 "assigned_rate_limits": { 00:17:27.358 "rw_ios_per_sec": 0, 00:17:27.358 "rw_mbytes_per_sec": 0, 00:17:27.358 "r_mbytes_per_sec": 0, 00:17:27.358 "w_mbytes_per_sec": 0 00:17:27.358 }, 00:17:27.358 "claimed": true, 00:17:27.358 "claim_type": "exclusive_write", 00:17:27.358 "zoned": false, 00:17:27.358 "supported_io_types": { 00:17:27.358 "read": true, 00:17:27.358 "write": true, 00:17:27.358 "unmap": true, 00:17:27.358 "flush": true, 00:17:27.358 "reset": true, 00:17:27.358 "nvme_admin": false, 00:17:27.358 "nvme_io": false, 00:17:27.358 "nvme_io_md": false, 00:17:27.358 "write_zeroes": true, 00:17:27.358 "zcopy": true, 00:17:27.358 "get_zone_info": false, 00:17:27.358 "zone_management": false, 00:17:27.358 "zone_append": false, 00:17:27.358 "compare": false, 00:17:27.358 "compare_and_write": false, 00:17:27.358 "abort": true, 00:17:27.358 "seek_hole": false, 
00:17:27.358 "seek_data": false, 00:17:27.358 "copy": true, 00:17:27.358 "nvme_iov_md": false 00:17:27.358 }, 00:17:27.358 "memory_domains": [ 00:17:27.358 { 00:17:27.358 "dma_device_id": "system", 00:17:27.358 "dma_device_type": 1 00:17:27.358 }, 00:17:27.358 { 00:17:27.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.358 "dma_device_type": 2 00:17:27.358 } 00:17:27.358 ], 00:17:27.358 "driver_specific": {} 00:17:27.358 }' 00:17:27.358 10:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:27.617 10:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:27.617 10:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:27.617 10:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:27.617 10:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:27.617 10:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:27.617 10:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:27.617 10:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:27.617 10:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:27.617 10:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:27.876 10:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:27.876 10:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:27.876 10:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:27.876 10:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 
00:17:27.876 10:59:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:28.135 10:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:28.135 "name": "BaseBdev3", 00:17:28.135 "aliases": [ 00:17:28.135 "823d27bc-a237-40b6-981d-268d8bf4c350" 00:17:28.135 ], 00:17:28.135 "product_name": "Malloc disk", 00:17:28.135 "block_size": 512, 00:17:28.135 "num_blocks": 65536, 00:17:28.135 "uuid": "823d27bc-a237-40b6-981d-268d8bf4c350", 00:17:28.135 "assigned_rate_limits": { 00:17:28.135 "rw_ios_per_sec": 0, 00:17:28.135 "rw_mbytes_per_sec": 0, 00:17:28.135 "r_mbytes_per_sec": 0, 00:17:28.135 "w_mbytes_per_sec": 0 00:17:28.135 }, 00:17:28.135 "claimed": true, 00:17:28.135 "claim_type": "exclusive_write", 00:17:28.135 "zoned": false, 00:17:28.135 "supported_io_types": { 00:17:28.135 "read": true, 00:17:28.135 "write": true, 00:17:28.135 "unmap": true, 00:17:28.135 "flush": true, 00:17:28.135 "reset": true, 00:17:28.135 "nvme_admin": false, 00:17:28.135 "nvme_io": false, 00:17:28.135 "nvme_io_md": false, 00:17:28.135 "write_zeroes": true, 00:17:28.135 "zcopy": true, 00:17:28.135 "get_zone_info": false, 00:17:28.135 "zone_management": false, 00:17:28.135 "zone_append": false, 00:17:28.135 "compare": false, 00:17:28.135 "compare_and_write": false, 00:17:28.135 "abort": true, 00:17:28.135 "seek_hole": false, 00:17:28.135 "seek_data": false, 00:17:28.135 "copy": true, 00:17:28.135 "nvme_iov_md": false 00:17:28.135 }, 00:17:28.135 "memory_domains": [ 00:17:28.135 { 00:17:28.135 "dma_device_id": "system", 00:17:28.135 "dma_device_type": 1 00:17:28.135 }, 00:17:28.135 { 00:17:28.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.135 "dma_device_type": 2 00:17:28.135 } 00:17:28.135 ], 00:17:28.135 "driver_specific": {} 00:17:28.135 }' 00:17:28.135 10:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:28.135 10:59:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:28.135 10:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:28.135 10:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:28.135 10:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:28.136 10:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:28.136 10:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:28.136 10:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:28.395 10:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:28.395 10:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:28.395 10:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:28.395 10:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:28.395 10:59:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:28.655 [2024-07-25 10:59:35.539199] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:28.655 [2024-07-25 10:59:35.539236] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:28.655 [2024-07-25 10:59:35.539328] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:28.655 [2024-07-25 10:59:35.539394] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:28.655 [2024-07-25 10:59:35.539417] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:28.655 10:59:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 3580186 00:17:28.655 10:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 3580186 ']' 00:17:28.655 10:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 3580186 00:17:28.655 10:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:17:28.655 10:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:28.655 10:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3580186 00:17:28.655 10:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:28.655 10:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:28.655 10:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3580186' 00:17:28.655 killing process with pid 3580186 00:17:28.655 10:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 3580186 00:17:28.655 [2024-07-25 10:59:35.613968] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:28.655 10:59:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 3580186 00:17:28.914 [2024-07-25 10:59:35.936534] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:30.882 10:59:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:17:30.882 00:17:30.882 real 0m29.666s 00:17:30.882 user 0m51.745s 00:17:30.882 sys 0m5.114s 00:17:30.882 10:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:30.882 10:59:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.882 ************************************ 00:17:30.882 END TEST 
raid_state_function_test 00:17:30.883 ************************************ 00:17:30.883 10:59:37 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:17:30.883 10:59:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:30.883 10:59:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:30.883 10:59:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:30.883 ************************************ 00:17:30.883 START TEST raid_state_function_test_sb 00:17:30.883 ************************************ 00:17:30.883 10:59:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:17:30.883 10:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:17:30.883 10:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:17:30.883 10:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:17:30.883 10:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:30.883 10:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:30.883 10:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:30.883 10:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:17:30.883 10:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:30.883 10:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:30.883 10:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:17:30.883 10:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:30.883 10:59:37 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:30.883 10:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:17:30.883 10:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:30.883 10:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:30.883 10:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:30.883 10:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:30.883 10:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:30.883 10:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:30.883 10:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:30.883 10:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:30.883 10:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:17:30.883 10:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:17:30.883 10:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:17:30.883 10:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:17:30.883 10:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:17:30.883 10:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=3585803 00:17:30.883 10:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3585803' 00:17:30.883 Process raid pid: 3585803 00:17:30.883 10:59:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:30.883 10:59:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 3585803 /var/tmp/spdk-raid.sock 00:17:30.883 10:59:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 3585803 ']' 00:17:30.883 10:59:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:30.883 10:59:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:30.883 10:59:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:30.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:30.883 10:59:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:30.883 10:59:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:30.883 [2024-07-25 10:59:37.865676] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:17:30.883 [2024-07-25 10:59:37.865788] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:30.883 EAL: Requested device 0000:3d:01.0 cannot be used 00:17:30.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:30.883 EAL: Requested device 0000:3d:01.1 cannot be used 00:17:30.883 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:30.883 EAL: Requested device 0000:3d:01.2 cannot be used 00:17:31.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.143 EAL: Requested device 0000:3d:01.3 cannot be used 00:17:31.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.143 EAL: Requested device 0000:3d:01.4 cannot be used 00:17:31.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.143 EAL: Requested device 0000:3d:01.5 cannot be used 00:17:31.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.143 EAL: Requested device 0000:3d:01.6 cannot be used 00:17:31.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.143 EAL: Requested device 0000:3d:01.7 cannot be used 00:17:31.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.143 EAL: Requested device 0000:3d:02.0 cannot be used 00:17:31.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.143 EAL: Requested device 0000:3d:02.1 cannot be used 00:17:31.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.143 EAL: Requested device 0000:3d:02.2 cannot be used 00:17:31.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.143 EAL: Requested device 0000:3d:02.3 cannot be used 00:17:31.143 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.143 EAL: Requested device 0000:3d:02.4 cannot be used 00:17:31.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.143 EAL: Requested device 0000:3d:02.5 cannot be used 00:17:31.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.143 EAL: Requested device 0000:3d:02.6 cannot be used 00:17:31.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.143 EAL: Requested device 0000:3d:02.7 cannot be used 00:17:31.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.143 EAL: Requested device 0000:3f:01.0 cannot be used 00:17:31.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.143 EAL: Requested device 0000:3f:01.1 cannot be used 00:17:31.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.143 EAL: Requested device 0000:3f:01.2 cannot be used 00:17:31.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.143 EAL: Requested device 0000:3f:01.3 cannot be used 00:17:31.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.143 EAL: Requested device 0000:3f:01.4 cannot be used 00:17:31.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.143 EAL: Requested device 0000:3f:01.5 cannot be used 00:17:31.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.143 EAL: Requested device 0000:3f:01.6 cannot be used 00:17:31.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.143 EAL: Requested device 0000:3f:01.7 cannot be used 00:17:31.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.143 EAL: Requested device 0000:3f:02.0 cannot be used 00:17:31.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.143 EAL: Requested device 0000:3f:02.1 cannot be used 00:17:31.143 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.143 EAL: Requested device 0000:3f:02.2 cannot be used 00:17:31.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.143 EAL: Requested device 0000:3f:02.3 cannot be used 00:17:31.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.143 EAL: Requested device 0000:3f:02.4 cannot be used 00:17:31.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.143 EAL: Requested device 0000:3f:02.5 cannot be used 00:17:31.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.143 EAL: Requested device 0000:3f:02.6 cannot be used 00:17:31.143 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:17:31.143 EAL: Requested device 0000:3f:02.7 cannot be used 00:17:31.143 [2024-07-25 10:59:38.089971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.402 [2024-07-25 10:59:38.376369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.661 [2024-07-25 10:59:38.724516] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:31.661 [2024-07-25 10:59:38.724553] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:31.920 10:59:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:31.920 10:59:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:17:31.920 10:59:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:32.178 [2024-07-25 10:59:39.120556] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:32.178 [2024-07-25 10:59:39.120611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:17:32.178 [2024-07-25 10:59:39.120626] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:32.178 [2024-07-25 10:59:39.120642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:32.178 [2024-07-25 10:59:39.120653] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:32.178 [2024-07-25 10:59:39.120669] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:32.178 10:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:32.178 10:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:32.178 10:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:32.178 10:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:32.178 10:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:32.178 10:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:32.178 10:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:32.178 10:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:32.178 10:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:32.178 10:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:32.178 10:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.178 10:59:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.436 10:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:32.436 "name": "Existed_Raid", 00:17:32.436 "uuid": "6b70868a-f8fb-4096-806d-21f1c521aecd", 00:17:32.436 "strip_size_kb": 64, 00:17:32.436 "state": "configuring", 00:17:32.436 "raid_level": "raid0", 00:17:32.436 "superblock": true, 00:17:32.436 "num_base_bdevs": 3, 00:17:32.436 "num_base_bdevs_discovered": 0, 00:17:32.436 "num_base_bdevs_operational": 3, 00:17:32.436 "base_bdevs_list": [ 00:17:32.436 { 00:17:32.436 "name": "BaseBdev1", 00:17:32.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.436 "is_configured": false, 00:17:32.436 "data_offset": 0, 00:17:32.436 "data_size": 0 00:17:32.436 }, 00:17:32.436 { 00:17:32.436 "name": "BaseBdev2", 00:17:32.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.436 "is_configured": false, 00:17:32.436 "data_offset": 0, 00:17:32.436 "data_size": 0 00:17:32.436 }, 00:17:32.436 { 00:17:32.436 "name": "BaseBdev3", 00:17:32.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.436 "is_configured": false, 00:17:32.436 "data_offset": 0, 00:17:32.436 "data_size": 0 00:17:32.436 } 00:17:32.436 ] 00:17:32.436 }' 00:17:32.436 10:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:32.437 10:59:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:33.004 10:59:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:33.263 [2024-07-25 10:59:40.155188] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:33.263 [2024-07-25 10:59:40.155231] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:17:33.263 10:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:33.263 [2024-07-25 10:59:40.379865] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:33.263 [2024-07-25 10:59:40.379914] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:33.263 [2024-07-25 10:59:40.379928] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:33.263 [2024-07-25 10:59:40.379948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:33.263 [2024-07-25 10:59:40.379959] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:33.263 [2024-07-25 10:59:40.379974] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:33.521 10:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:33.780 [2024-07-25 10:59:40.663539] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:33.780 BaseBdev1 00:17:33.780 10:59:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:33.780 10:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:33.780 10:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:33.780 10:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:33.780 10:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' 
]] 00:17:33.780 10:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:33.780 10:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:34.039 10:59:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:34.039 [ 00:17:34.039 { 00:17:34.039 "name": "BaseBdev1", 00:17:34.039 "aliases": [ 00:17:34.039 "ec477ae8-7361-49ad-a444-676d2ed8de89" 00:17:34.039 ], 00:17:34.039 "product_name": "Malloc disk", 00:17:34.039 "block_size": 512, 00:17:34.039 "num_blocks": 65536, 00:17:34.039 "uuid": "ec477ae8-7361-49ad-a444-676d2ed8de89", 00:17:34.039 "assigned_rate_limits": { 00:17:34.039 "rw_ios_per_sec": 0, 00:17:34.039 "rw_mbytes_per_sec": 0, 00:17:34.039 "r_mbytes_per_sec": 0, 00:17:34.039 "w_mbytes_per_sec": 0 00:17:34.039 }, 00:17:34.039 "claimed": true, 00:17:34.039 "claim_type": "exclusive_write", 00:17:34.039 "zoned": false, 00:17:34.039 "supported_io_types": { 00:17:34.039 "read": true, 00:17:34.039 "write": true, 00:17:34.039 "unmap": true, 00:17:34.039 "flush": true, 00:17:34.039 "reset": true, 00:17:34.039 "nvme_admin": false, 00:17:34.039 "nvme_io": false, 00:17:34.039 "nvme_io_md": false, 00:17:34.039 "write_zeroes": true, 00:17:34.039 "zcopy": true, 00:17:34.039 "get_zone_info": false, 00:17:34.039 "zone_management": false, 00:17:34.039 "zone_append": false, 00:17:34.039 "compare": false, 00:17:34.039 "compare_and_write": false, 00:17:34.039 "abort": true, 00:17:34.039 "seek_hole": false, 00:17:34.039 "seek_data": false, 00:17:34.039 "copy": true, 00:17:34.039 "nvme_iov_md": false 00:17:34.039 }, 00:17:34.039 "memory_domains": [ 00:17:34.039 { 00:17:34.039 "dma_device_id": "system", 00:17:34.039 "dma_device_type": 1 
00:17:34.039 }, 00:17:34.039 { 00:17:34.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.039 "dma_device_type": 2 00:17:34.039 } 00:17:34.039 ], 00:17:34.039 "driver_specific": {} 00:17:34.039 } 00:17:34.039 ] 00:17:34.039 10:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:34.039 10:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:34.039 10:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:34.039 10:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:34.039 10:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:34.039 10:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:34.039 10:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:34.039 10:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:34.039 10:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:34.039 10:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:34.039 10:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:34.039 10:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.039 10:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:34.298 10:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:34.298 "name": "Existed_Raid", 
00:17:34.298 "uuid": "447429cd-24fd-49a3-876e-fc8d34ad4e45", 00:17:34.298 "strip_size_kb": 64, 00:17:34.298 "state": "configuring", 00:17:34.298 "raid_level": "raid0", 00:17:34.298 "superblock": true, 00:17:34.298 "num_base_bdevs": 3, 00:17:34.298 "num_base_bdevs_discovered": 1, 00:17:34.298 "num_base_bdevs_operational": 3, 00:17:34.298 "base_bdevs_list": [ 00:17:34.298 { 00:17:34.298 "name": "BaseBdev1", 00:17:34.298 "uuid": "ec477ae8-7361-49ad-a444-676d2ed8de89", 00:17:34.298 "is_configured": true, 00:17:34.298 "data_offset": 2048, 00:17:34.298 "data_size": 63488 00:17:34.298 }, 00:17:34.298 { 00:17:34.298 "name": "BaseBdev2", 00:17:34.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.298 "is_configured": false, 00:17:34.298 "data_offset": 0, 00:17:34.298 "data_size": 0 00:17:34.298 }, 00:17:34.298 { 00:17:34.298 "name": "BaseBdev3", 00:17:34.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.298 "is_configured": false, 00:17:34.298 "data_offset": 0, 00:17:34.298 "data_size": 0 00:17:34.298 } 00:17:34.298 ] 00:17:34.298 }' 00:17:34.298 10:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:34.298 10:59:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:34.865 10:59:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:35.123 [2024-07-25 10:59:42.119533] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:35.123 [2024-07-25 10:59:42.119587] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:35.123 10:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 
BaseBdev3' -n Existed_Raid 00:17:35.381 [2024-07-25 10:59:42.348247] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:35.381 [2024-07-25 10:59:42.350545] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:35.381 [2024-07-25 10:59:42.350590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:35.381 [2024-07-25 10:59:42.350604] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:35.381 [2024-07-25 10:59:42.350621] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:35.381 10:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:35.381 10:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:35.381 10:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:35.381 10:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:35.381 10:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:35.381 10:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:35.381 10:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:35.381 10:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:35.381 10:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:35.381 10:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:35.381 10:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:35.381 
10:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:35.381 10:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.381 10:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.638 10:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:35.638 "name": "Existed_Raid", 00:17:35.638 "uuid": "c30ba957-eab6-4e58-b9d8-cba74c3338b1", 00:17:35.638 "strip_size_kb": 64, 00:17:35.638 "state": "configuring", 00:17:35.638 "raid_level": "raid0", 00:17:35.638 "superblock": true, 00:17:35.638 "num_base_bdevs": 3, 00:17:35.638 "num_base_bdevs_discovered": 1, 00:17:35.638 "num_base_bdevs_operational": 3, 00:17:35.638 "base_bdevs_list": [ 00:17:35.638 { 00:17:35.638 "name": "BaseBdev1", 00:17:35.638 "uuid": "ec477ae8-7361-49ad-a444-676d2ed8de89", 00:17:35.638 "is_configured": true, 00:17:35.638 "data_offset": 2048, 00:17:35.638 "data_size": 63488 00:17:35.638 }, 00:17:35.638 { 00:17:35.638 "name": "BaseBdev2", 00:17:35.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.638 "is_configured": false, 00:17:35.638 "data_offset": 0, 00:17:35.638 "data_size": 0 00:17:35.638 }, 00:17:35.638 { 00:17:35.638 "name": "BaseBdev3", 00:17:35.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.638 "is_configured": false, 00:17:35.638 "data_offset": 0, 00:17:35.638 "data_size": 0 00:17:35.638 } 00:17:35.638 ] 00:17:35.638 }' 00:17:35.638 10:59:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:35.638 10:59:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:36.204 10:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:36.463 [2024-07-25 10:59:43.423216] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:36.463 BaseBdev2 00:17:36.463 10:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:36.463 10:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:36.463 10:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:36.463 10:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:36.463 10:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:36.463 10:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:36.463 10:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:36.722 10:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:36.981 [ 00:17:36.981 { 00:17:36.981 "name": "BaseBdev2", 00:17:36.981 "aliases": [ 00:17:36.981 "7e9ff7ea-0b2d-46c9-80d9-c12574522f3c" 00:17:36.981 ], 00:17:36.981 "product_name": "Malloc disk", 00:17:36.981 "block_size": 512, 00:17:36.981 "num_blocks": 65536, 00:17:36.981 "uuid": "7e9ff7ea-0b2d-46c9-80d9-c12574522f3c", 00:17:36.981 "assigned_rate_limits": { 00:17:36.981 "rw_ios_per_sec": 0, 00:17:36.981 "rw_mbytes_per_sec": 0, 00:17:36.981 "r_mbytes_per_sec": 0, 00:17:36.981 "w_mbytes_per_sec": 0 00:17:36.981 }, 00:17:36.981 "claimed": true, 00:17:36.981 "claim_type": "exclusive_write", 00:17:36.981 "zoned": false, 00:17:36.981 "supported_io_types": { 
00:17:36.981 "read": true, 00:17:36.981 "write": true, 00:17:36.981 "unmap": true, 00:17:36.981 "flush": true, 00:17:36.981 "reset": true, 00:17:36.981 "nvme_admin": false, 00:17:36.981 "nvme_io": false, 00:17:36.981 "nvme_io_md": false, 00:17:36.981 "write_zeroes": true, 00:17:36.981 "zcopy": true, 00:17:36.981 "get_zone_info": false, 00:17:36.981 "zone_management": false, 00:17:36.981 "zone_append": false, 00:17:36.981 "compare": false, 00:17:36.981 "compare_and_write": false, 00:17:36.981 "abort": true, 00:17:36.981 "seek_hole": false, 00:17:36.981 "seek_data": false, 00:17:36.981 "copy": true, 00:17:36.981 "nvme_iov_md": false 00:17:36.981 }, 00:17:36.981 "memory_domains": [ 00:17:36.981 { 00:17:36.981 "dma_device_id": "system", 00:17:36.981 "dma_device_type": 1 00:17:36.981 }, 00:17:36.981 { 00:17:36.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.981 "dma_device_type": 2 00:17:36.981 } 00:17:36.981 ], 00:17:36.981 "driver_specific": {} 00:17:36.981 } 00:17:36.981 ] 00:17:36.981 10:59:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:36.981 10:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:36.981 10:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:36.981 10:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:36.981 10:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:36.981 10:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:36.981 10:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:36.981 10:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:36.981 10:59:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:36.981 10:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:36.981 10:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:36.981 10:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:36.981 10:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:36.981 10:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.981 10:59:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.241 10:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:37.241 "name": "Existed_Raid", 00:17:37.241 "uuid": "c30ba957-eab6-4e58-b9d8-cba74c3338b1", 00:17:37.241 "strip_size_kb": 64, 00:17:37.241 "state": "configuring", 00:17:37.241 "raid_level": "raid0", 00:17:37.241 "superblock": true, 00:17:37.241 "num_base_bdevs": 3, 00:17:37.241 "num_base_bdevs_discovered": 2, 00:17:37.241 "num_base_bdevs_operational": 3, 00:17:37.241 "base_bdevs_list": [ 00:17:37.241 { 00:17:37.241 "name": "BaseBdev1", 00:17:37.241 "uuid": "ec477ae8-7361-49ad-a444-676d2ed8de89", 00:17:37.241 "is_configured": true, 00:17:37.241 "data_offset": 2048, 00:17:37.241 "data_size": 63488 00:17:37.241 }, 00:17:37.241 { 00:17:37.241 "name": "BaseBdev2", 00:17:37.241 "uuid": "7e9ff7ea-0b2d-46c9-80d9-c12574522f3c", 00:17:37.241 "is_configured": true, 00:17:37.241 "data_offset": 2048, 00:17:37.241 "data_size": 63488 00:17:37.241 }, 00:17:37.241 { 00:17:37.241 "name": "BaseBdev3", 00:17:37.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.241 "is_configured": false, 00:17:37.241 "data_offset": 0, 00:17:37.241 
"data_size": 0 00:17:37.241 } 00:17:37.241 ] 00:17:37.241 }' 00:17:37.241 10:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:37.241 10:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:37.808 10:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:38.066 [2024-07-25 10:59:44.959474] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:38.066 [2024-07-25 10:59:44.959735] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:38.066 [2024-07-25 10:59:44.959758] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:38.066 [2024-07-25 10:59:44.960083] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:17:38.066 [2024-07-25 10:59:44.960318] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:38.066 [2024-07-25 10:59:44.960334] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:38.066 BaseBdev3 00:17:38.067 [2024-07-25 10:59:44.960531] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:38.067 10:59:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:17:38.067 10:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:38.067 10:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:38.067 10:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:38.067 10:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:38.067 
10:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:38.067 10:59:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:38.325 10:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:38.325 [ 00:17:38.325 { 00:17:38.325 "name": "BaseBdev3", 00:17:38.325 "aliases": [ 00:17:38.325 "7ef229cb-46a3-44b5-932f-4b93405fe4a0" 00:17:38.325 ], 00:17:38.326 "product_name": "Malloc disk", 00:17:38.326 "block_size": 512, 00:17:38.326 "num_blocks": 65536, 00:17:38.326 "uuid": "7ef229cb-46a3-44b5-932f-4b93405fe4a0", 00:17:38.326 "assigned_rate_limits": { 00:17:38.326 "rw_ios_per_sec": 0, 00:17:38.326 "rw_mbytes_per_sec": 0, 00:17:38.326 "r_mbytes_per_sec": 0, 00:17:38.326 "w_mbytes_per_sec": 0 00:17:38.326 }, 00:17:38.326 "claimed": true, 00:17:38.326 "claim_type": "exclusive_write", 00:17:38.326 "zoned": false, 00:17:38.326 "supported_io_types": { 00:17:38.326 "read": true, 00:17:38.326 "write": true, 00:17:38.326 "unmap": true, 00:17:38.326 "flush": true, 00:17:38.326 "reset": true, 00:17:38.326 "nvme_admin": false, 00:17:38.326 "nvme_io": false, 00:17:38.326 "nvme_io_md": false, 00:17:38.326 "write_zeroes": true, 00:17:38.326 "zcopy": true, 00:17:38.326 "get_zone_info": false, 00:17:38.326 "zone_management": false, 00:17:38.326 "zone_append": false, 00:17:38.326 "compare": false, 00:17:38.326 "compare_and_write": false, 00:17:38.326 "abort": true, 00:17:38.326 "seek_hole": false, 00:17:38.326 "seek_data": false, 00:17:38.326 "copy": true, 00:17:38.326 "nvme_iov_md": false 00:17:38.326 }, 00:17:38.326 "memory_domains": [ 00:17:38.326 { 00:17:38.326 "dma_device_id": "system", 00:17:38.326 "dma_device_type": 1 00:17:38.326 }, 
00:17:38.326 { 00:17:38.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.326 "dma_device_type": 2 00:17:38.326 } 00:17:38.326 ], 00:17:38.326 "driver_specific": {} 00:17:38.326 } 00:17:38.326 ] 00:17:38.326 10:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:38.326 10:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:38.326 10:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:38.326 10:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:17:38.326 10:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:38.326 10:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:38.326 10:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:38.326 10:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:38.326 10:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:38.326 10:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:38.326 10:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:38.326 10:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:38.326 10:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:38.326 10:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.326 10:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:17:38.584 10:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:38.584 "name": "Existed_Raid", 00:17:38.584 "uuid": "c30ba957-eab6-4e58-b9d8-cba74c3338b1", 00:17:38.584 "strip_size_kb": 64, 00:17:38.584 "state": "online", 00:17:38.584 "raid_level": "raid0", 00:17:38.584 "superblock": true, 00:17:38.584 "num_base_bdevs": 3, 00:17:38.584 "num_base_bdevs_discovered": 3, 00:17:38.584 "num_base_bdevs_operational": 3, 00:17:38.584 "base_bdevs_list": [ 00:17:38.584 { 00:17:38.584 "name": "BaseBdev1", 00:17:38.584 "uuid": "ec477ae8-7361-49ad-a444-676d2ed8de89", 00:17:38.584 "is_configured": true, 00:17:38.584 "data_offset": 2048, 00:17:38.584 "data_size": 63488 00:17:38.584 }, 00:17:38.584 { 00:17:38.584 "name": "BaseBdev2", 00:17:38.584 "uuid": "7e9ff7ea-0b2d-46c9-80d9-c12574522f3c", 00:17:38.584 "is_configured": true, 00:17:38.584 "data_offset": 2048, 00:17:38.584 "data_size": 63488 00:17:38.584 }, 00:17:38.584 { 00:17:38.584 "name": "BaseBdev3", 00:17:38.584 "uuid": "7ef229cb-46a3-44b5-932f-4b93405fe4a0", 00:17:38.584 "is_configured": true, 00:17:38.584 "data_offset": 2048, 00:17:38.584 "data_size": 63488 00:17:38.584 } 00:17:38.584 ] 00:17:38.584 }' 00:17:38.584 10:59:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:38.584 10:59:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:39.150 10:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:39.150 10:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:39.150 10:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:39.150 10:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:39.150 10:59:46 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:39.150 10:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:17:39.150 10:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:39.150 10:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:39.409 [2024-07-25 10:59:46.427845] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:39.409 10:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:39.409 "name": "Existed_Raid", 00:17:39.409 "aliases": [ 00:17:39.409 "c30ba957-eab6-4e58-b9d8-cba74c3338b1" 00:17:39.409 ], 00:17:39.409 "product_name": "Raid Volume", 00:17:39.409 "block_size": 512, 00:17:39.409 "num_blocks": 190464, 00:17:39.409 "uuid": "c30ba957-eab6-4e58-b9d8-cba74c3338b1", 00:17:39.409 "assigned_rate_limits": { 00:17:39.409 "rw_ios_per_sec": 0, 00:17:39.409 "rw_mbytes_per_sec": 0, 00:17:39.409 "r_mbytes_per_sec": 0, 00:17:39.409 "w_mbytes_per_sec": 0 00:17:39.409 }, 00:17:39.409 "claimed": false, 00:17:39.409 "zoned": false, 00:17:39.409 "supported_io_types": { 00:17:39.409 "read": true, 00:17:39.409 "write": true, 00:17:39.409 "unmap": true, 00:17:39.409 "flush": true, 00:17:39.409 "reset": true, 00:17:39.409 "nvme_admin": false, 00:17:39.409 "nvme_io": false, 00:17:39.409 "nvme_io_md": false, 00:17:39.409 "write_zeroes": true, 00:17:39.409 "zcopy": false, 00:17:39.409 "get_zone_info": false, 00:17:39.409 "zone_management": false, 00:17:39.409 "zone_append": false, 00:17:39.409 "compare": false, 00:17:39.409 "compare_and_write": false, 00:17:39.409 "abort": false, 00:17:39.409 "seek_hole": false, 00:17:39.409 "seek_data": false, 00:17:39.409 "copy": false, 00:17:39.409 "nvme_iov_md": false 00:17:39.409 }, 00:17:39.409 "memory_domains": [ 00:17:39.409 { 
00:17:39.409 "dma_device_id": "system", 00:17:39.409 "dma_device_type": 1 00:17:39.409 }, 00:17:39.409 { 00:17:39.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.409 "dma_device_type": 2 00:17:39.409 }, 00:17:39.409 { 00:17:39.409 "dma_device_id": "system", 00:17:39.409 "dma_device_type": 1 00:17:39.409 }, 00:17:39.409 { 00:17:39.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.409 "dma_device_type": 2 00:17:39.409 }, 00:17:39.409 { 00:17:39.409 "dma_device_id": "system", 00:17:39.409 "dma_device_type": 1 00:17:39.409 }, 00:17:39.409 { 00:17:39.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.409 "dma_device_type": 2 00:17:39.409 } 00:17:39.409 ], 00:17:39.409 "driver_specific": { 00:17:39.409 "raid": { 00:17:39.409 "uuid": "c30ba957-eab6-4e58-b9d8-cba74c3338b1", 00:17:39.409 "strip_size_kb": 64, 00:17:39.409 "state": "online", 00:17:39.409 "raid_level": "raid0", 00:17:39.409 "superblock": true, 00:17:39.409 "num_base_bdevs": 3, 00:17:39.409 "num_base_bdevs_discovered": 3, 00:17:39.409 "num_base_bdevs_operational": 3, 00:17:39.409 "base_bdevs_list": [ 00:17:39.409 { 00:17:39.409 "name": "BaseBdev1", 00:17:39.409 "uuid": "ec477ae8-7361-49ad-a444-676d2ed8de89", 00:17:39.409 "is_configured": true, 00:17:39.409 "data_offset": 2048, 00:17:39.409 "data_size": 63488 00:17:39.409 }, 00:17:39.409 { 00:17:39.409 "name": "BaseBdev2", 00:17:39.409 "uuid": "7e9ff7ea-0b2d-46c9-80d9-c12574522f3c", 00:17:39.409 "is_configured": true, 00:17:39.409 "data_offset": 2048, 00:17:39.409 "data_size": 63488 00:17:39.409 }, 00:17:39.409 { 00:17:39.409 "name": "BaseBdev3", 00:17:39.409 "uuid": "7ef229cb-46a3-44b5-932f-4b93405fe4a0", 00:17:39.409 "is_configured": true, 00:17:39.409 "data_offset": 2048, 00:17:39.409 "data_size": 63488 00:17:39.409 } 00:17:39.409 ] 00:17:39.409 } 00:17:39.409 } 00:17:39.409 }' 00:17:39.409 10:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == 
true).name' 00:17:39.409 10:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:39.409 BaseBdev2 00:17:39.409 BaseBdev3' 00:17:39.409 10:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:39.409 10:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:39.409 10:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:39.667 10:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:39.667 "name": "BaseBdev1", 00:17:39.667 "aliases": [ 00:17:39.667 "ec477ae8-7361-49ad-a444-676d2ed8de89" 00:17:39.667 ], 00:17:39.667 "product_name": "Malloc disk", 00:17:39.667 "block_size": 512, 00:17:39.668 "num_blocks": 65536, 00:17:39.668 "uuid": "ec477ae8-7361-49ad-a444-676d2ed8de89", 00:17:39.668 "assigned_rate_limits": { 00:17:39.668 "rw_ios_per_sec": 0, 00:17:39.668 "rw_mbytes_per_sec": 0, 00:17:39.668 "r_mbytes_per_sec": 0, 00:17:39.668 "w_mbytes_per_sec": 0 00:17:39.668 }, 00:17:39.668 "claimed": true, 00:17:39.668 "claim_type": "exclusive_write", 00:17:39.668 "zoned": false, 00:17:39.668 "supported_io_types": { 00:17:39.668 "read": true, 00:17:39.668 "write": true, 00:17:39.668 "unmap": true, 00:17:39.668 "flush": true, 00:17:39.668 "reset": true, 00:17:39.668 "nvme_admin": false, 00:17:39.668 "nvme_io": false, 00:17:39.668 "nvme_io_md": false, 00:17:39.668 "write_zeroes": true, 00:17:39.668 "zcopy": true, 00:17:39.668 "get_zone_info": false, 00:17:39.668 "zone_management": false, 00:17:39.668 "zone_append": false, 00:17:39.668 "compare": false, 00:17:39.668 "compare_and_write": false, 00:17:39.668 "abort": true, 00:17:39.668 "seek_hole": false, 00:17:39.668 "seek_data": false, 00:17:39.668 "copy": true, 00:17:39.668 "nvme_iov_md": false 00:17:39.668 
}, 00:17:39.668 "memory_domains": [ 00:17:39.668 { 00:17:39.668 "dma_device_id": "system", 00:17:39.668 "dma_device_type": 1 00:17:39.668 }, 00:17:39.668 { 00:17:39.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.668 "dma_device_type": 2 00:17:39.668 } 00:17:39.668 ], 00:17:39.668 "driver_specific": {} 00:17:39.668 }' 00:17:39.668 10:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:39.668 10:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:39.926 10:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:39.926 10:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:39.926 10:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:39.926 10:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:39.926 10:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:39.926 10:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:39.926 10:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:39.926 10:59:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:39.926 10:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:40.184 10:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:40.184 10:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:40.184 10:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:40.184 10:59:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:40.184 10:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:40.184 "name": "BaseBdev2", 00:17:40.184 "aliases": [ 00:17:40.184 "7e9ff7ea-0b2d-46c9-80d9-c12574522f3c" 00:17:40.184 ], 00:17:40.184 "product_name": "Malloc disk", 00:17:40.184 "block_size": 512, 00:17:40.184 "num_blocks": 65536, 00:17:40.184 "uuid": "7e9ff7ea-0b2d-46c9-80d9-c12574522f3c", 00:17:40.184 "assigned_rate_limits": { 00:17:40.184 "rw_ios_per_sec": 0, 00:17:40.184 "rw_mbytes_per_sec": 0, 00:17:40.184 "r_mbytes_per_sec": 0, 00:17:40.184 "w_mbytes_per_sec": 0 00:17:40.184 }, 00:17:40.184 "claimed": true, 00:17:40.184 "claim_type": "exclusive_write", 00:17:40.184 "zoned": false, 00:17:40.184 "supported_io_types": { 00:17:40.184 "read": true, 00:17:40.184 "write": true, 00:17:40.184 "unmap": true, 00:17:40.184 "flush": true, 00:17:40.184 "reset": true, 00:17:40.184 "nvme_admin": false, 00:17:40.184 "nvme_io": false, 00:17:40.184 "nvme_io_md": false, 00:17:40.184 "write_zeroes": true, 00:17:40.184 "zcopy": true, 00:17:40.184 "get_zone_info": false, 00:17:40.184 "zone_management": false, 00:17:40.184 "zone_append": false, 00:17:40.184 "compare": false, 00:17:40.184 "compare_and_write": false, 00:17:40.184 "abort": true, 00:17:40.184 "seek_hole": false, 00:17:40.184 "seek_data": false, 00:17:40.184 "copy": true, 00:17:40.184 "nvme_iov_md": false 00:17:40.184 }, 00:17:40.184 "memory_domains": [ 00:17:40.184 { 00:17:40.184 "dma_device_id": "system", 00:17:40.184 "dma_device_type": 1 00:17:40.184 }, 00:17:40.184 { 00:17:40.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.184 "dma_device_type": 2 00:17:40.184 } 00:17:40.184 ], 00:17:40.184 "driver_specific": {} 00:17:40.184 }' 00:17:40.184 10:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:40.442 10:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:40.442 10:59:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:40.442 10:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:40.442 10:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:40.442 10:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:40.442 10:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:40.442 10:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:40.442 10:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:40.442 10:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:40.700 10:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:40.700 10:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:40.700 10:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:40.700 10:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:40.700 10:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:40.958 10:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:40.958 "name": "BaseBdev3", 00:17:40.958 "aliases": [ 00:17:40.958 "7ef229cb-46a3-44b5-932f-4b93405fe4a0" 00:17:40.958 ], 00:17:40.958 "product_name": "Malloc disk", 00:17:40.958 "block_size": 512, 00:17:40.958 "num_blocks": 65536, 00:17:40.958 "uuid": "7ef229cb-46a3-44b5-932f-4b93405fe4a0", 00:17:40.958 "assigned_rate_limits": { 00:17:40.958 "rw_ios_per_sec": 0, 00:17:40.958 "rw_mbytes_per_sec": 0, 00:17:40.958 
"r_mbytes_per_sec": 0, 00:17:40.958 "w_mbytes_per_sec": 0 00:17:40.958 }, 00:17:40.958 "claimed": true, 00:17:40.958 "claim_type": "exclusive_write", 00:17:40.958 "zoned": false, 00:17:40.958 "supported_io_types": { 00:17:40.958 "read": true, 00:17:40.958 "write": true, 00:17:40.958 "unmap": true, 00:17:40.958 "flush": true, 00:17:40.958 "reset": true, 00:17:40.958 "nvme_admin": false, 00:17:40.958 "nvme_io": false, 00:17:40.958 "nvme_io_md": false, 00:17:40.958 "write_zeroes": true, 00:17:40.958 "zcopy": true, 00:17:40.958 "get_zone_info": false, 00:17:40.958 "zone_management": false, 00:17:40.958 "zone_append": false, 00:17:40.958 "compare": false, 00:17:40.958 "compare_and_write": false, 00:17:40.958 "abort": true, 00:17:40.958 "seek_hole": false, 00:17:40.958 "seek_data": false, 00:17:40.958 "copy": true, 00:17:40.958 "nvme_iov_md": false 00:17:40.958 }, 00:17:40.958 "memory_domains": [ 00:17:40.958 { 00:17:40.958 "dma_device_id": "system", 00:17:40.958 "dma_device_type": 1 00:17:40.958 }, 00:17:40.958 { 00:17:40.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.958 "dma_device_type": 2 00:17:40.958 } 00:17:40.958 ], 00:17:40.958 "driver_specific": {} 00:17:40.958 }' 00:17:40.958 10:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:40.958 10:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:40.958 10:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:40.958 10:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:40.958 10:59:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:40.958 10:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:40.958 10:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:40.958 10:59:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:41.216 10:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:41.216 10:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:41.216 10:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:41.216 10:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:41.216 10:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:41.475 [2024-07-25 10:59:48.404996] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:41.475 [2024-07-25 10:59:48.405030] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:41.475 [2024-07-25 10:59:48.405093] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:41.475 10:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:41.475 10:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:17:41.475 10:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:41.475 10:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:17:41.475 10:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:17:41.475 10:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:17:41.475 10:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:41.475 10:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:17:41.475 10:59:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:41.475 10:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:41.475 10:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:41.475 10:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:41.475 10:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:41.475 10:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:41.475 10:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:41.475 10:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.475 10:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:41.733 10:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:41.733 "name": "Existed_Raid", 00:17:41.733 "uuid": "c30ba957-eab6-4e58-b9d8-cba74c3338b1", 00:17:41.733 "strip_size_kb": 64, 00:17:41.733 "state": "offline", 00:17:41.733 "raid_level": "raid0", 00:17:41.733 "superblock": true, 00:17:41.733 "num_base_bdevs": 3, 00:17:41.733 "num_base_bdevs_discovered": 2, 00:17:41.733 "num_base_bdevs_operational": 2, 00:17:41.733 "base_bdevs_list": [ 00:17:41.733 { 00:17:41.733 "name": null, 00:17:41.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.733 "is_configured": false, 00:17:41.733 "data_offset": 2048, 00:17:41.733 "data_size": 63488 00:17:41.733 }, 00:17:41.733 { 00:17:41.733 "name": "BaseBdev2", 00:17:41.733 "uuid": "7e9ff7ea-0b2d-46c9-80d9-c12574522f3c", 00:17:41.733 "is_configured": true, 00:17:41.733 
"data_offset": 2048, 00:17:41.733 "data_size": 63488 00:17:41.733 }, 00:17:41.733 { 00:17:41.733 "name": "BaseBdev3", 00:17:41.733 "uuid": "7ef229cb-46a3-44b5-932f-4b93405fe4a0", 00:17:41.733 "is_configured": true, 00:17:41.733 "data_offset": 2048, 00:17:41.733 "data_size": 63488 00:17:41.733 } 00:17:41.733 ] 00:17:41.733 }' 00:17:41.733 10:59:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:41.733 10:59:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.299 10:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:42.299 10:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:42.299 10:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:42.299 10:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.557 10:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:42.557 10:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:42.557 10:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:42.816 [2024-07-25 10:59:49.699547] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:42.816 10:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:42.816 10:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:42.816 10:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.816 10:59:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:43.074 10:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:43.074 10:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:43.074 10:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:43.331 [2024-07-25 10:59:50.274468] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:43.331 [2024-07-25 10:59:50.274525] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:43.331 10:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:43.331 10:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:43.331 10:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.331 10:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:43.587 10:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:43.587 10:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:43.587 10:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:17:43.587 10:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:17:43.587 10:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:43.587 10:59:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:43.897 BaseBdev2 00:17:43.897 10:59:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:17:43.897 10:59:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:43.897 10:59:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:43.897 10:59:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:43.897 10:59:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:43.897 10:59:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:43.897 10:59:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:44.172 10:59:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:44.430 [ 00:17:44.430 { 00:17:44.430 "name": "BaseBdev2", 00:17:44.430 "aliases": [ 00:17:44.430 "6d240c55-9624-4dda-835b-4b35fd640bc9" 00:17:44.430 ], 00:17:44.430 "product_name": "Malloc disk", 00:17:44.430 "block_size": 512, 00:17:44.430 "num_blocks": 65536, 00:17:44.430 "uuid": "6d240c55-9624-4dda-835b-4b35fd640bc9", 00:17:44.430 "assigned_rate_limits": { 00:17:44.430 "rw_ios_per_sec": 0, 00:17:44.430 "rw_mbytes_per_sec": 0, 00:17:44.430 "r_mbytes_per_sec": 0, 00:17:44.430 "w_mbytes_per_sec": 0 00:17:44.430 }, 00:17:44.430 "claimed": false, 00:17:44.430 "zoned": false, 00:17:44.430 "supported_io_types": { 00:17:44.430 "read": true, 
00:17:44.430 "write": true, 00:17:44.430 "unmap": true, 00:17:44.430 "flush": true, 00:17:44.430 "reset": true, 00:17:44.430 "nvme_admin": false, 00:17:44.430 "nvme_io": false, 00:17:44.430 "nvme_io_md": false, 00:17:44.430 "write_zeroes": true, 00:17:44.430 "zcopy": true, 00:17:44.430 "get_zone_info": false, 00:17:44.430 "zone_management": false, 00:17:44.430 "zone_append": false, 00:17:44.430 "compare": false, 00:17:44.430 "compare_and_write": false, 00:17:44.430 "abort": true, 00:17:44.431 "seek_hole": false, 00:17:44.431 "seek_data": false, 00:17:44.431 "copy": true, 00:17:44.431 "nvme_iov_md": false 00:17:44.431 }, 00:17:44.431 "memory_domains": [ 00:17:44.431 { 00:17:44.431 "dma_device_id": "system", 00:17:44.431 "dma_device_type": 1 00:17:44.431 }, 00:17:44.431 { 00:17:44.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.431 "dma_device_type": 2 00:17:44.431 } 00:17:44.431 ], 00:17:44.431 "driver_specific": {} 00:17:44.431 } 00:17:44.431 ] 00:17:44.431 10:59:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:44.431 10:59:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:44.431 10:59:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:44.431 10:59:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:44.688 BaseBdev3 00:17:44.688 10:59:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:17:44.688 10:59:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:44.688 10:59:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:44.688 10:59:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 
00:17:44.688 10:59:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:44.688 10:59:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:44.688 10:59:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:44.946 10:59:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:45.204 [ 00:17:45.204 { 00:17:45.204 "name": "BaseBdev3", 00:17:45.205 "aliases": [ 00:17:45.205 "4a504cbc-ab3c-407e-b23e-462dab261676" 00:17:45.205 ], 00:17:45.205 "product_name": "Malloc disk", 00:17:45.205 "block_size": 512, 00:17:45.205 "num_blocks": 65536, 00:17:45.205 "uuid": "4a504cbc-ab3c-407e-b23e-462dab261676", 00:17:45.205 "assigned_rate_limits": { 00:17:45.205 "rw_ios_per_sec": 0, 00:17:45.205 "rw_mbytes_per_sec": 0, 00:17:45.205 "r_mbytes_per_sec": 0, 00:17:45.205 "w_mbytes_per_sec": 0 00:17:45.205 }, 00:17:45.205 "claimed": false, 00:17:45.205 "zoned": false, 00:17:45.205 "supported_io_types": { 00:17:45.205 "read": true, 00:17:45.205 "write": true, 00:17:45.205 "unmap": true, 00:17:45.205 "flush": true, 00:17:45.205 "reset": true, 00:17:45.205 "nvme_admin": false, 00:17:45.205 "nvme_io": false, 00:17:45.205 "nvme_io_md": false, 00:17:45.205 "write_zeroes": true, 00:17:45.205 "zcopy": true, 00:17:45.205 "get_zone_info": false, 00:17:45.205 "zone_management": false, 00:17:45.205 "zone_append": false, 00:17:45.205 "compare": false, 00:17:45.205 "compare_and_write": false, 00:17:45.205 "abort": true, 00:17:45.205 "seek_hole": false, 00:17:45.205 "seek_data": false, 00:17:45.205 "copy": true, 00:17:45.205 "nvme_iov_md": false 00:17:45.205 }, 00:17:45.205 "memory_domains": [ 00:17:45.205 { 00:17:45.205 
"dma_device_id": "system", 00:17:45.205 "dma_device_type": 1 00:17:45.205 }, 00:17:45.205 { 00:17:45.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.205 "dma_device_type": 2 00:17:45.205 } 00:17:45.205 ], 00:17:45.205 "driver_specific": {} 00:17:45.205 } 00:17:45.205 ] 00:17:45.205 10:59:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:45.205 10:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:45.205 10:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:45.205 10:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:45.205 [2024-07-25 10:59:52.295160] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:45.205 [2024-07-25 10:59:52.295205] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:45.205 [2024-07-25 10:59:52.295236] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:45.205 [2024-07-25 10:59:52.297532] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:45.205 10:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:45.205 10:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:45.205 10:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:45.205 10:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:45.205 10:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
00:17:45.205 10:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:45.205 10:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:45.205 10:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:45.205 10:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:45.205 10:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:45.205 10:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.205 10:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.464 10:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:45.464 "name": "Existed_Raid", 00:17:45.464 "uuid": "0d36637d-f5a1-4499-b768-d68e1286a2e6", 00:17:45.464 "strip_size_kb": 64, 00:17:45.464 "state": "configuring", 00:17:45.464 "raid_level": "raid0", 00:17:45.464 "superblock": true, 00:17:45.464 "num_base_bdevs": 3, 00:17:45.464 "num_base_bdevs_discovered": 2, 00:17:45.464 "num_base_bdevs_operational": 3, 00:17:45.464 "base_bdevs_list": [ 00:17:45.464 { 00:17:45.464 "name": "BaseBdev1", 00:17:45.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.464 "is_configured": false, 00:17:45.464 "data_offset": 0, 00:17:45.464 "data_size": 0 00:17:45.464 }, 00:17:45.464 { 00:17:45.464 "name": "BaseBdev2", 00:17:45.464 "uuid": "6d240c55-9624-4dda-835b-4b35fd640bc9", 00:17:45.464 "is_configured": true, 00:17:45.464 "data_offset": 2048, 00:17:45.464 "data_size": 63488 00:17:45.464 }, 00:17:45.464 { 00:17:45.464 "name": "BaseBdev3", 00:17:45.464 "uuid": "4a504cbc-ab3c-407e-b23e-462dab261676", 00:17:45.464 
"is_configured": true, 00:17:45.464 "data_offset": 2048, 00:17:45.464 "data_size": 63488 00:17:45.464 } 00:17:45.464 ] 00:17:45.464 }' 00:17:45.464 10:59:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:45.464 10:59:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.030 10:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:46.287 [2024-07-25 10:59:53.317884] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:46.287 10:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:46.287 10:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:46.287 10:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:46.287 10:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:46.287 10:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:46.287 10:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:46.287 10:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:46.287 10:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:46.287 10:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:46.287 10:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:46.287 10:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.287 10:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.545 10:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:46.545 "name": "Existed_Raid", 00:17:46.545 "uuid": "0d36637d-f5a1-4499-b768-d68e1286a2e6", 00:17:46.545 "strip_size_kb": 64, 00:17:46.545 "state": "configuring", 00:17:46.545 "raid_level": "raid0", 00:17:46.545 "superblock": true, 00:17:46.545 "num_base_bdevs": 3, 00:17:46.545 "num_base_bdevs_discovered": 1, 00:17:46.545 "num_base_bdevs_operational": 3, 00:17:46.545 "base_bdevs_list": [ 00:17:46.545 { 00:17:46.545 "name": "BaseBdev1", 00:17:46.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.545 "is_configured": false, 00:17:46.545 "data_offset": 0, 00:17:46.545 "data_size": 0 00:17:46.545 }, 00:17:46.545 { 00:17:46.545 "name": null, 00:17:46.545 "uuid": "6d240c55-9624-4dda-835b-4b35fd640bc9", 00:17:46.545 "is_configured": false, 00:17:46.545 "data_offset": 2048, 00:17:46.545 "data_size": 63488 00:17:46.545 }, 00:17:46.545 { 00:17:46.545 "name": "BaseBdev3", 00:17:46.545 "uuid": "4a504cbc-ab3c-407e-b23e-462dab261676", 00:17:46.545 "is_configured": true, 00:17:46.545 "data_offset": 2048, 00:17:46.545 "data_size": 63488 00:17:46.545 } 00:17:46.545 ] 00:17:46.545 }' 00:17:46.545 10:59:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:46.545 10:59:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.111 10:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.111 10:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:17:47.368 10:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:17:47.368 10:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:47.626 [2024-07-25 10:59:54.621622] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:47.626 BaseBdev1 00:17:47.626 10:59:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:17:47.626 10:59:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:47.626 10:59:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:47.626 10:59:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:47.626 10:59:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:47.626 10:59:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:47.626 10:59:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:47.884 10:59:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:48.143 [ 00:17:48.143 { 00:17:48.143 "name": "BaseBdev1", 00:17:48.143 "aliases": [ 00:17:48.143 "7bcc7c65-460e-4ea5-bfde-35ba8c837772" 00:17:48.143 ], 00:17:48.143 "product_name": "Malloc disk", 00:17:48.143 "block_size": 512, 00:17:48.143 "num_blocks": 65536, 00:17:48.143 "uuid": "7bcc7c65-460e-4ea5-bfde-35ba8c837772", 00:17:48.143 
"assigned_rate_limits": { 00:17:48.143 "rw_ios_per_sec": 0, 00:17:48.143 "rw_mbytes_per_sec": 0, 00:17:48.143 "r_mbytes_per_sec": 0, 00:17:48.143 "w_mbytes_per_sec": 0 00:17:48.143 }, 00:17:48.143 "claimed": true, 00:17:48.143 "claim_type": "exclusive_write", 00:17:48.143 "zoned": false, 00:17:48.143 "supported_io_types": { 00:17:48.143 "read": true, 00:17:48.143 "write": true, 00:17:48.143 "unmap": true, 00:17:48.143 "flush": true, 00:17:48.143 "reset": true, 00:17:48.143 "nvme_admin": false, 00:17:48.143 "nvme_io": false, 00:17:48.143 "nvme_io_md": false, 00:17:48.143 "write_zeroes": true, 00:17:48.143 "zcopy": true, 00:17:48.143 "get_zone_info": false, 00:17:48.143 "zone_management": false, 00:17:48.143 "zone_append": false, 00:17:48.143 "compare": false, 00:17:48.143 "compare_and_write": false, 00:17:48.143 "abort": true, 00:17:48.143 "seek_hole": false, 00:17:48.143 "seek_data": false, 00:17:48.143 "copy": true, 00:17:48.143 "nvme_iov_md": false 00:17:48.143 }, 00:17:48.143 "memory_domains": [ 00:17:48.143 { 00:17:48.143 "dma_device_id": "system", 00:17:48.143 "dma_device_type": 1 00:17:48.143 }, 00:17:48.143 { 00:17:48.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.143 "dma_device_type": 2 00:17:48.143 } 00:17:48.143 ], 00:17:48.143 "driver_specific": {} 00:17:48.143 } 00:17:48.143 ] 00:17:48.143 10:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:48.143 10:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:48.143 10:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:48.143 10:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:48.143 10:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:48.143 10:59:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:48.143 10:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:48.143 10:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:48.143 10:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:48.143 10:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:48.143 10:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:48.143 10:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.143 10:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.401 10:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:48.401 "name": "Existed_Raid", 00:17:48.401 "uuid": "0d36637d-f5a1-4499-b768-d68e1286a2e6", 00:17:48.401 "strip_size_kb": 64, 00:17:48.401 "state": "configuring", 00:17:48.401 "raid_level": "raid0", 00:17:48.401 "superblock": true, 00:17:48.401 "num_base_bdevs": 3, 00:17:48.401 "num_base_bdevs_discovered": 2, 00:17:48.401 "num_base_bdevs_operational": 3, 00:17:48.401 "base_bdevs_list": [ 00:17:48.401 { 00:17:48.401 "name": "BaseBdev1", 00:17:48.401 "uuid": "7bcc7c65-460e-4ea5-bfde-35ba8c837772", 00:17:48.401 "is_configured": true, 00:17:48.401 "data_offset": 2048, 00:17:48.401 "data_size": 63488 00:17:48.401 }, 00:17:48.401 { 00:17:48.401 "name": null, 00:17:48.401 "uuid": "6d240c55-9624-4dda-835b-4b35fd640bc9", 00:17:48.401 "is_configured": false, 00:17:48.401 "data_offset": 2048, 00:17:48.401 "data_size": 63488 00:17:48.401 }, 00:17:48.401 { 00:17:48.401 "name": "BaseBdev3", 00:17:48.401 "uuid": 
"4a504cbc-ab3c-407e-b23e-462dab261676", 00:17:48.401 "is_configured": true, 00:17:48.401 "data_offset": 2048, 00:17:48.401 "data_size": 63488 00:17:48.401 } 00:17:48.401 ] 00:17:48.401 }' 00:17:48.401 10:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:48.401 10:59:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.967 10:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:48.967 10:59:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.225 10:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:17:49.225 10:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:17:49.226 [2024-07-25 10:59:56.318301] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:49.226 10:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:49.226 10:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:49.226 10:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:49.226 10:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:49.226 10:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:49.226 10:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:49.226 10:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 
-- # local raid_bdev_info 00:17:49.226 10:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:49.226 10:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:49.226 10:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:49.226 10:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.226 10:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.484 10:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:49.484 "name": "Existed_Raid", 00:17:49.484 "uuid": "0d36637d-f5a1-4499-b768-d68e1286a2e6", 00:17:49.484 "strip_size_kb": 64, 00:17:49.484 "state": "configuring", 00:17:49.484 "raid_level": "raid0", 00:17:49.484 "superblock": true, 00:17:49.484 "num_base_bdevs": 3, 00:17:49.484 "num_base_bdevs_discovered": 1, 00:17:49.484 "num_base_bdevs_operational": 3, 00:17:49.484 "base_bdevs_list": [ 00:17:49.484 { 00:17:49.484 "name": "BaseBdev1", 00:17:49.484 "uuid": "7bcc7c65-460e-4ea5-bfde-35ba8c837772", 00:17:49.484 "is_configured": true, 00:17:49.484 "data_offset": 2048, 00:17:49.484 "data_size": 63488 00:17:49.484 }, 00:17:49.484 { 00:17:49.484 "name": null, 00:17:49.484 "uuid": "6d240c55-9624-4dda-835b-4b35fd640bc9", 00:17:49.484 "is_configured": false, 00:17:49.484 "data_offset": 2048, 00:17:49.484 "data_size": 63488 00:17:49.484 }, 00:17:49.484 { 00:17:49.484 "name": null, 00:17:49.484 "uuid": "4a504cbc-ab3c-407e-b23e-462dab261676", 00:17:49.484 "is_configured": false, 00:17:49.484 "data_offset": 2048, 00:17:49.484 "data_size": 63488 00:17:49.484 } 00:17:49.484 ] 00:17:49.484 }' 00:17:49.484 10:59:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:17:49.484 10:59:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.051 10:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.051 10:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:50.309 10:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:17:50.309 10:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:50.567 [2024-07-25 10:59:57.545628] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:50.567 10:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:50.567 10:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:50.567 10:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:50.567 10:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:50.567 10:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:50.567 10:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:50.567 10:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:50.567 10:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:50.567 10:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:17:50.567 10:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:50.567 10:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.567 10:59:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:51.133 10:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:51.133 "name": "Existed_Raid", 00:17:51.133 "uuid": "0d36637d-f5a1-4499-b768-d68e1286a2e6", 00:17:51.133 "strip_size_kb": 64, 00:17:51.133 "state": "configuring", 00:17:51.133 "raid_level": "raid0", 00:17:51.133 "superblock": true, 00:17:51.133 "num_base_bdevs": 3, 00:17:51.133 "num_base_bdevs_discovered": 2, 00:17:51.133 "num_base_bdevs_operational": 3, 00:17:51.133 "base_bdevs_list": [ 00:17:51.133 { 00:17:51.133 "name": "BaseBdev1", 00:17:51.133 "uuid": "7bcc7c65-460e-4ea5-bfde-35ba8c837772", 00:17:51.133 "is_configured": true, 00:17:51.133 "data_offset": 2048, 00:17:51.133 "data_size": 63488 00:17:51.133 }, 00:17:51.133 { 00:17:51.133 "name": null, 00:17:51.133 "uuid": "6d240c55-9624-4dda-835b-4b35fd640bc9", 00:17:51.133 "is_configured": false, 00:17:51.133 "data_offset": 2048, 00:17:51.133 "data_size": 63488 00:17:51.133 }, 00:17:51.133 { 00:17:51.133 "name": "BaseBdev3", 00:17:51.133 "uuid": "4a504cbc-ab3c-407e-b23e-462dab261676", 00:17:51.133 "is_configured": true, 00:17:51.133 "data_offset": 2048, 00:17:51.133 "data_size": 63488 00:17:51.133 } 00:17:51.133 ] 00:17:51.133 }' 00:17:51.133 10:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:51.133 10:59:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.700 10:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:17:51.700 10:59:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.266 10:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:17:52.266 10:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:52.525 [2024-07-25 10:59:59.406855] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:52.525 10:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:52.525 10:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:52.525 10:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:52.525 10:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:52.525 10:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:52.525 10:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:52.525 10:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:52.525 10:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:52.525 10:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:52.525 10:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:52.525 10:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.525 10:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.784 10:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:52.784 "name": "Existed_Raid", 00:17:52.784 "uuid": "0d36637d-f5a1-4499-b768-d68e1286a2e6", 00:17:52.784 "strip_size_kb": 64, 00:17:52.784 "state": "configuring", 00:17:52.784 "raid_level": "raid0", 00:17:52.784 "superblock": true, 00:17:52.784 "num_base_bdevs": 3, 00:17:52.784 "num_base_bdevs_discovered": 1, 00:17:52.784 "num_base_bdevs_operational": 3, 00:17:52.784 "base_bdevs_list": [ 00:17:52.784 { 00:17:52.784 "name": null, 00:17:52.784 "uuid": "7bcc7c65-460e-4ea5-bfde-35ba8c837772", 00:17:52.784 "is_configured": false, 00:17:52.784 "data_offset": 2048, 00:17:52.784 "data_size": 63488 00:17:52.784 }, 00:17:52.784 { 00:17:52.784 "name": null, 00:17:52.784 "uuid": "6d240c55-9624-4dda-835b-4b35fd640bc9", 00:17:52.784 "is_configured": false, 00:17:52.784 "data_offset": 2048, 00:17:52.784 "data_size": 63488 00:17:52.784 }, 00:17:52.784 { 00:17:52.784 "name": "BaseBdev3", 00:17:52.784 "uuid": "4a504cbc-ab3c-407e-b23e-462dab261676", 00:17:52.784 "is_configured": true, 00:17:52.784 "data_offset": 2048, 00:17:52.784 "data_size": 63488 00:17:52.784 } 00:17:52.784 ] 00:17:52.784 }' 00:17:52.784 10:59:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:52.784 10:59:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.350 11:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:53.350 11:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:17:53.608 11:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:17:53.608 11:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:53.867 [2024-07-25 11:00:00.742713] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:53.867 11:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:53.867 11:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:53.867 11:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:53.867 11:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:53.867 11:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:53.867 11:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:53.867 11:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:53.867 11:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:53.867 11:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:53.867 11:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:53.867 11:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.867 11:00:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:17:54.126 11:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:54.126 "name": "Existed_Raid", 00:17:54.126 "uuid": "0d36637d-f5a1-4499-b768-d68e1286a2e6", 00:17:54.126 "strip_size_kb": 64, 00:17:54.126 "state": "configuring", 00:17:54.126 "raid_level": "raid0", 00:17:54.126 "superblock": true, 00:17:54.126 "num_base_bdevs": 3, 00:17:54.126 "num_base_bdevs_discovered": 2, 00:17:54.126 "num_base_bdevs_operational": 3, 00:17:54.126 "base_bdevs_list": [ 00:17:54.126 { 00:17:54.126 "name": null, 00:17:54.126 "uuid": "7bcc7c65-460e-4ea5-bfde-35ba8c837772", 00:17:54.126 "is_configured": false, 00:17:54.126 "data_offset": 2048, 00:17:54.126 "data_size": 63488 00:17:54.126 }, 00:17:54.126 { 00:17:54.126 "name": "BaseBdev2", 00:17:54.126 "uuid": "6d240c55-9624-4dda-835b-4b35fd640bc9", 00:17:54.126 "is_configured": true, 00:17:54.126 "data_offset": 2048, 00:17:54.126 "data_size": 63488 00:17:54.126 }, 00:17:54.126 { 00:17:54.126 "name": "BaseBdev3", 00:17:54.126 "uuid": "4a504cbc-ab3c-407e-b23e-462dab261676", 00:17:54.126 "is_configured": true, 00:17:54.126 "data_offset": 2048, 00:17:54.126 "data_size": 63488 00:17:54.126 } 00:17:54.126 ] 00:17:54.126 }' 00:17:54.126 11:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:54.126 11:00:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.689 11:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.689 11:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:54.944 11:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:17:54.944 11:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.944 11:00:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:54.944 11:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 7bcc7c65-460e-4ea5-bfde-35ba8c837772 00:17:55.200 [2024-07-25 11:00:02.312733] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:55.200 [2024-07-25 11:00:02.312980] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:55.200 [2024-07-25 11:00:02.313003] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:55.200 [2024-07-25 11:00:02.313311] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:17:55.200 [2024-07-25 11:00:02.313534] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:55.200 [2024-07-25 11:00:02.313549] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:55.201 NewBaseBdev 00:17:55.201 [2024-07-25 11:00:02.313736] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:55.460 11:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:17:55.460 11:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:17:55.460 11:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:55.461 11:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:55.461 11:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ 
-z '' ]] 00:17:55.461 11:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:55.461 11:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:55.461 11:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:55.719 [ 00:17:55.719 { 00:17:55.719 "name": "NewBaseBdev", 00:17:55.719 "aliases": [ 00:17:55.719 "7bcc7c65-460e-4ea5-bfde-35ba8c837772" 00:17:55.719 ], 00:17:55.719 "product_name": "Malloc disk", 00:17:55.719 "block_size": 512, 00:17:55.719 "num_blocks": 65536, 00:17:55.719 "uuid": "7bcc7c65-460e-4ea5-bfde-35ba8c837772", 00:17:55.719 "assigned_rate_limits": { 00:17:55.719 "rw_ios_per_sec": 0, 00:17:55.719 "rw_mbytes_per_sec": 0, 00:17:55.719 "r_mbytes_per_sec": 0, 00:17:55.719 "w_mbytes_per_sec": 0 00:17:55.719 }, 00:17:55.719 "claimed": true, 00:17:55.719 "claim_type": "exclusive_write", 00:17:55.719 "zoned": false, 00:17:55.719 "supported_io_types": { 00:17:55.719 "read": true, 00:17:55.719 "write": true, 00:17:55.719 "unmap": true, 00:17:55.719 "flush": true, 00:17:55.719 "reset": true, 00:17:55.719 "nvme_admin": false, 00:17:55.719 "nvme_io": false, 00:17:55.719 "nvme_io_md": false, 00:17:55.719 "write_zeroes": true, 00:17:55.719 "zcopy": true, 00:17:55.719 "get_zone_info": false, 00:17:55.719 "zone_management": false, 00:17:55.719 "zone_append": false, 00:17:55.719 "compare": false, 00:17:55.719 "compare_and_write": false, 00:17:55.719 "abort": true, 00:17:55.719 "seek_hole": false, 00:17:55.719 "seek_data": false, 00:17:55.719 "copy": true, 00:17:55.719 "nvme_iov_md": false 00:17:55.719 }, 00:17:55.719 "memory_domains": [ 00:17:55.719 { 00:17:55.719 "dma_device_id": "system", 00:17:55.719 
"dma_device_type": 1 00:17:55.719 }, 00:17:55.719 { 00:17:55.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.719 "dma_device_type": 2 00:17:55.719 } 00:17:55.719 ], 00:17:55.719 "driver_specific": {} 00:17:55.719 } 00:17:55.719 ] 00:17:55.719 11:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:55.719 11:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:17:55.719 11:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:55.719 11:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:55.719 11:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:55.719 11:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:55.719 11:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:55.719 11:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:55.719 11:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:55.719 11:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:55.719 11:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:55.719 11:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.719 11:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.978 11:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:55.978 "name": 
"Existed_Raid", 00:17:55.978 "uuid": "0d36637d-f5a1-4499-b768-d68e1286a2e6", 00:17:55.978 "strip_size_kb": 64, 00:17:55.978 "state": "online", 00:17:55.978 "raid_level": "raid0", 00:17:55.978 "superblock": true, 00:17:55.978 "num_base_bdevs": 3, 00:17:55.978 "num_base_bdevs_discovered": 3, 00:17:55.978 "num_base_bdevs_operational": 3, 00:17:55.978 "base_bdevs_list": [ 00:17:55.978 { 00:17:55.978 "name": "NewBaseBdev", 00:17:55.978 "uuid": "7bcc7c65-460e-4ea5-bfde-35ba8c837772", 00:17:55.978 "is_configured": true, 00:17:55.978 "data_offset": 2048, 00:17:55.978 "data_size": 63488 00:17:55.978 }, 00:17:55.978 { 00:17:55.978 "name": "BaseBdev2", 00:17:55.978 "uuid": "6d240c55-9624-4dda-835b-4b35fd640bc9", 00:17:55.978 "is_configured": true, 00:17:55.978 "data_offset": 2048, 00:17:55.978 "data_size": 63488 00:17:55.978 }, 00:17:55.978 { 00:17:55.978 "name": "BaseBdev3", 00:17:55.978 "uuid": "4a504cbc-ab3c-407e-b23e-462dab261676", 00:17:55.978 "is_configured": true, 00:17:55.978 "data_offset": 2048, 00:17:55.978 "data_size": 63488 00:17:55.978 } 00:17:55.978 ] 00:17:55.978 }' 00:17:55.978 11:00:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:55.978 11:00:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.543 11:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:17:56.543 11:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:56.543 11:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:56.543 11:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:56.543 11:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:56.543 11:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:17:56.543 
11:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:56.543 11:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:56.801 [2024-07-25 11:00:03.761095] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:56.801 11:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:56.801 "name": "Existed_Raid", 00:17:56.801 "aliases": [ 00:17:56.801 "0d36637d-f5a1-4499-b768-d68e1286a2e6" 00:17:56.801 ], 00:17:56.801 "product_name": "Raid Volume", 00:17:56.801 "block_size": 512, 00:17:56.801 "num_blocks": 190464, 00:17:56.801 "uuid": "0d36637d-f5a1-4499-b768-d68e1286a2e6", 00:17:56.801 "assigned_rate_limits": { 00:17:56.801 "rw_ios_per_sec": 0, 00:17:56.801 "rw_mbytes_per_sec": 0, 00:17:56.801 "r_mbytes_per_sec": 0, 00:17:56.801 "w_mbytes_per_sec": 0 00:17:56.801 }, 00:17:56.801 "claimed": false, 00:17:56.801 "zoned": false, 00:17:56.801 "supported_io_types": { 00:17:56.801 "read": true, 00:17:56.801 "write": true, 00:17:56.801 "unmap": true, 00:17:56.801 "flush": true, 00:17:56.801 "reset": true, 00:17:56.801 "nvme_admin": false, 00:17:56.801 "nvme_io": false, 00:17:56.801 "nvme_io_md": false, 00:17:56.801 "write_zeroes": true, 00:17:56.801 "zcopy": false, 00:17:56.801 "get_zone_info": false, 00:17:56.801 "zone_management": false, 00:17:56.801 "zone_append": false, 00:17:56.801 "compare": false, 00:17:56.801 "compare_and_write": false, 00:17:56.801 "abort": false, 00:17:56.801 "seek_hole": false, 00:17:56.801 "seek_data": false, 00:17:56.801 "copy": false, 00:17:56.801 "nvme_iov_md": false 00:17:56.801 }, 00:17:56.801 "memory_domains": [ 00:17:56.801 { 00:17:56.801 "dma_device_id": "system", 00:17:56.801 "dma_device_type": 1 00:17:56.801 }, 00:17:56.801 { 00:17:56.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:17:56.802 "dma_device_type": 2 00:17:56.802 }, 00:17:56.802 { 00:17:56.802 "dma_device_id": "system", 00:17:56.802 "dma_device_type": 1 00:17:56.802 }, 00:17:56.802 { 00:17:56.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.802 "dma_device_type": 2 00:17:56.802 }, 00:17:56.802 { 00:17:56.802 "dma_device_id": "system", 00:17:56.802 "dma_device_type": 1 00:17:56.802 }, 00:17:56.802 { 00:17:56.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.802 "dma_device_type": 2 00:17:56.802 } 00:17:56.802 ], 00:17:56.802 "driver_specific": { 00:17:56.802 "raid": { 00:17:56.802 "uuid": "0d36637d-f5a1-4499-b768-d68e1286a2e6", 00:17:56.802 "strip_size_kb": 64, 00:17:56.802 "state": "online", 00:17:56.802 "raid_level": "raid0", 00:17:56.802 "superblock": true, 00:17:56.802 "num_base_bdevs": 3, 00:17:56.802 "num_base_bdevs_discovered": 3, 00:17:56.802 "num_base_bdevs_operational": 3, 00:17:56.802 "base_bdevs_list": [ 00:17:56.802 { 00:17:56.802 "name": "NewBaseBdev", 00:17:56.802 "uuid": "7bcc7c65-460e-4ea5-bfde-35ba8c837772", 00:17:56.802 "is_configured": true, 00:17:56.802 "data_offset": 2048, 00:17:56.802 "data_size": 63488 00:17:56.802 }, 00:17:56.802 { 00:17:56.802 "name": "BaseBdev2", 00:17:56.802 "uuid": "6d240c55-9624-4dda-835b-4b35fd640bc9", 00:17:56.802 "is_configured": true, 00:17:56.802 "data_offset": 2048, 00:17:56.802 "data_size": 63488 00:17:56.802 }, 00:17:56.802 { 00:17:56.802 "name": "BaseBdev3", 00:17:56.802 "uuid": "4a504cbc-ab3c-407e-b23e-462dab261676", 00:17:56.802 "is_configured": true, 00:17:56.802 "data_offset": 2048, 00:17:56.802 "data_size": 63488 00:17:56.802 } 00:17:56.802 ] 00:17:56.802 } 00:17:56.802 } 00:17:56.802 }' 00:17:56.802 11:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:56.802 11:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:17:56.802 BaseBdev2 
00:17:56.802 BaseBdev3' 00:17:56.802 11:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:56.802 11:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:17:56.802 11:00:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:57.060 11:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:57.060 "name": "NewBaseBdev", 00:17:57.060 "aliases": [ 00:17:57.060 "7bcc7c65-460e-4ea5-bfde-35ba8c837772" 00:17:57.060 ], 00:17:57.060 "product_name": "Malloc disk", 00:17:57.060 "block_size": 512, 00:17:57.060 "num_blocks": 65536, 00:17:57.060 "uuid": "7bcc7c65-460e-4ea5-bfde-35ba8c837772", 00:17:57.060 "assigned_rate_limits": { 00:17:57.060 "rw_ios_per_sec": 0, 00:17:57.060 "rw_mbytes_per_sec": 0, 00:17:57.060 "r_mbytes_per_sec": 0, 00:17:57.060 "w_mbytes_per_sec": 0 00:17:57.060 }, 00:17:57.060 "claimed": true, 00:17:57.060 "claim_type": "exclusive_write", 00:17:57.060 "zoned": false, 00:17:57.060 "supported_io_types": { 00:17:57.060 "read": true, 00:17:57.060 "write": true, 00:17:57.060 "unmap": true, 00:17:57.060 "flush": true, 00:17:57.060 "reset": true, 00:17:57.060 "nvme_admin": false, 00:17:57.060 "nvme_io": false, 00:17:57.060 "nvme_io_md": false, 00:17:57.060 "write_zeroes": true, 00:17:57.060 "zcopy": true, 00:17:57.060 "get_zone_info": false, 00:17:57.060 "zone_management": false, 00:17:57.060 "zone_append": false, 00:17:57.060 "compare": false, 00:17:57.060 "compare_and_write": false, 00:17:57.060 "abort": true, 00:17:57.060 "seek_hole": false, 00:17:57.060 "seek_data": false, 00:17:57.060 "copy": true, 00:17:57.060 "nvme_iov_md": false 00:17:57.060 }, 00:17:57.060 "memory_domains": [ 00:17:57.060 { 00:17:57.060 "dma_device_id": "system", 00:17:57.060 "dma_device_type": 1 00:17:57.060 }, 
00:17:57.060 { 00:17:57.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.060 "dma_device_type": 2 00:17:57.060 } 00:17:57.060 ], 00:17:57.060 "driver_specific": {} 00:17:57.060 }' 00:17:57.060 11:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:57.060 11:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:57.060 11:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:57.060 11:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:57.317 11:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:57.317 11:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:57.317 11:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:57.317 11:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:57.317 11:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:57.317 11:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:57.317 11:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:57.317 11:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:57.317 11:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:57.317 11:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:57.317 11:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:57.576 11:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 
00:17:57.576 "name": "BaseBdev2", 00:17:57.576 "aliases": [ 00:17:57.576 "6d240c55-9624-4dda-835b-4b35fd640bc9" 00:17:57.576 ], 00:17:57.576 "product_name": "Malloc disk", 00:17:57.576 "block_size": 512, 00:17:57.576 "num_blocks": 65536, 00:17:57.576 "uuid": "6d240c55-9624-4dda-835b-4b35fd640bc9", 00:17:57.576 "assigned_rate_limits": { 00:17:57.576 "rw_ios_per_sec": 0, 00:17:57.576 "rw_mbytes_per_sec": 0, 00:17:57.576 "r_mbytes_per_sec": 0, 00:17:57.576 "w_mbytes_per_sec": 0 00:17:57.576 }, 00:17:57.576 "claimed": true, 00:17:57.576 "claim_type": "exclusive_write", 00:17:57.576 "zoned": false, 00:17:57.576 "supported_io_types": { 00:17:57.576 "read": true, 00:17:57.576 "write": true, 00:17:57.576 "unmap": true, 00:17:57.576 "flush": true, 00:17:57.576 "reset": true, 00:17:57.576 "nvme_admin": false, 00:17:57.576 "nvme_io": false, 00:17:57.576 "nvme_io_md": false, 00:17:57.576 "write_zeroes": true, 00:17:57.576 "zcopy": true, 00:17:57.576 "get_zone_info": false, 00:17:57.576 "zone_management": false, 00:17:57.576 "zone_append": false, 00:17:57.576 "compare": false, 00:17:57.576 "compare_and_write": false, 00:17:57.576 "abort": true, 00:17:57.576 "seek_hole": false, 00:17:57.576 "seek_data": false, 00:17:57.576 "copy": true, 00:17:57.576 "nvme_iov_md": false 00:17:57.576 }, 00:17:57.576 "memory_domains": [ 00:17:57.576 { 00:17:57.576 "dma_device_id": "system", 00:17:57.576 "dma_device_type": 1 00:17:57.576 }, 00:17:57.576 { 00:17:57.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.576 "dma_device_type": 2 00:17:57.576 } 00:17:57.576 ], 00:17:57.576 "driver_specific": {} 00:17:57.576 }' 00:17:57.576 11:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:57.576 11:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:57.834 11:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:57.834 11:00:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:57.834 11:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:57.834 11:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:57.834 11:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:57.834 11:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:57.834 11:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:57.834 11:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:57.834 11:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:58.104 11:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:58.104 11:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:58.104 11:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:58.104 11:00:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:58.416 11:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:58.416 "name": "BaseBdev3", 00:17:58.416 "aliases": [ 00:17:58.416 "4a504cbc-ab3c-407e-b23e-462dab261676" 00:17:58.416 ], 00:17:58.416 "product_name": "Malloc disk", 00:17:58.416 "block_size": 512, 00:17:58.416 "num_blocks": 65536, 00:17:58.416 "uuid": "4a504cbc-ab3c-407e-b23e-462dab261676", 00:17:58.416 "assigned_rate_limits": { 00:17:58.416 "rw_ios_per_sec": 0, 00:17:58.416 "rw_mbytes_per_sec": 0, 00:17:58.416 "r_mbytes_per_sec": 0, 00:17:58.416 "w_mbytes_per_sec": 0 00:17:58.416 }, 00:17:58.416 "claimed": true, 00:17:58.416 "claim_type": "exclusive_write", 
00:17:58.416 "zoned": false, 00:17:58.416 "supported_io_types": { 00:17:58.417 "read": true, 00:17:58.417 "write": true, 00:17:58.417 "unmap": true, 00:17:58.417 "flush": true, 00:17:58.417 "reset": true, 00:17:58.417 "nvme_admin": false, 00:17:58.417 "nvme_io": false, 00:17:58.417 "nvme_io_md": false, 00:17:58.417 "write_zeroes": true, 00:17:58.417 "zcopy": true, 00:17:58.417 "get_zone_info": false, 00:17:58.417 "zone_management": false, 00:17:58.417 "zone_append": false, 00:17:58.417 "compare": false, 00:17:58.417 "compare_and_write": false, 00:17:58.417 "abort": true, 00:17:58.417 "seek_hole": false, 00:17:58.417 "seek_data": false, 00:17:58.417 "copy": true, 00:17:58.417 "nvme_iov_md": false 00:17:58.417 }, 00:17:58.417 "memory_domains": [ 00:17:58.417 { 00:17:58.417 "dma_device_id": "system", 00:17:58.417 "dma_device_type": 1 00:17:58.417 }, 00:17:58.417 { 00:17:58.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.417 "dma_device_type": 2 00:17:58.417 } 00:17:58.417 ], 00:17:58.417 "driver_specific": {} 00:17:58.417 }' 00:17:58.417 11:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:58.417 11:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:58.417 11:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:58.417 11:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:58.417 11:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:58.417 11:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:58.417 11:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:58.417 11:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:58.417 11:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:17:58.417 11:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:58.417 11:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:58.674 11:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:58.674 11:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:58.674 [2024-07-25 11:00:05.770152] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:58.674 [2024-07-25 11:00:05.770186] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:58.674 [2024-07-25 11:00:05.770275] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:58.674 [2024-07-25 11:00:05.770343] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:58.674 [2024-07-25 11:00:05.770365] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:58.674 11:00:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 3585803 00:17:58.674 11:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 3585803 ']' 00:17:58.674 11:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 3585803 00:17:58.674 11:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:17:58.932 11:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:58.932 11:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3585803 00:17:58.932 11:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:17:58.932 11:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:58.932 11:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3585803' 00:17:58.932 killing process with pid 3585803 00:17:58.932 11:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 3585803 00:17:58.932 [2024-07-25 11:00:05.843481] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:58.932 11:00:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 3585803 00:17:59.191 [2024-07-25 11:00:06.160537] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:01.096 11:00:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:18:01.096 00:18:01.096 real 0m30.107s 00:18:01.096 user 0m52.676s 00:18:01.096 sys 0m5.177s 00:18:01.096 11:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:01.096 11:00:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.096 ************************************ 00:18:01.096 END TEST raid_state_function_test_sb 00:18:01.096 ************************************ 00:18:01.096 11:00:07 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:18:01.096 11:00:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:01.096 11:00:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:01.096 11:00:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:01.096 ************************************ 00:18:01.096 START TEST raid_superblock_test 00:18:01.096 ************************************ 00:18:01.096 11:00:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:18:01.096 11:00:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid0 00:18:01.096 11:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=3 00:18:01.096 11:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:18:01.096 11:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:18:01.096 11:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:18:01.096 11:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:18:01.096 11:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:18:01.096 11:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:18:01.096 11:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:18:01.096 11:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:18:01.096 11:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:18:01.096 11:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:18:01.096 11:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:18:01.096 11:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid0 '!=' raid1 ']' 00:18:01.096 11:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:18:01.096 11:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:18:01.096 11:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=3591966 00:18:01.096 11:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 3591966 /var/tmp/spdk-raid.sock 00:18:01.096 11:00:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 3591966 
']' 00:18:01.096 11:00:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:01.096 11:00:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:01.096 11:00:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:01.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:01.096 11:00:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:01.096 11:00:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:01.096 11:00:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:01.096 [2024-07-25 11:00:08.054114] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:18:01.096 [2024-07-25 11:00:08.054248] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3591966 ] 00:18:01.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.096 EAL: Requested device 0000:3d:01.0 cannot be used 00:18:01.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.096 EAL: Requested device 0000:3d:01.1 cannot be used 00:18:01.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.096 EAL: Requested device 0000:3d:01.2 cannot be used 00:18:01.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.096 EAL: Requested device 0000:3d:01.3 cannot be used 00:18:01.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.096 EAL: Requested device 0000:3d:01.4 cannot be used 00:18:01.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.096 EAL: Requested device 0000:3d:01.5 cannot be used 00:18:01.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.096 EAL: Requested device 0000:3d:01.6 cannot be used 00:18:01.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.096 EAL: Requested device 0000:3d:01.7 cannot be used 00:18:01.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.096 EAL: Requested device 0000:3d:02.0 cannot be used 00:18:01.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.096 EAL: Requested device 0000:3d:02.1 cannot be used 00:18:01.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.096 EAL: Requested device 0000:3d:02.2 cannot be used 00:18:01.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.096 EAL: Requested device 0000:3d:02.3 cannot be used 
00:18:01.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.096 EAL: Requested device 0000:3d:02.4 cannot be used 00:18:01.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.096 EAL: Requested device 0000:3d:02.5 cannot be used 00:18:01.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.096 EAL: Requested device 0000:3d:02.6 cannot be used 00:18:01.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.096 EAL: Requested device 0000:3d:02.7 cannot be used 00:18:01.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.096 EAL: Requested device 0000:3f:01.0 cannot be used 00:18:01.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.096 EAL: Requested device 0000:3f:01.1 cannot be used 00:18:01.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.096 EAL: Requested device 0000:3f:01.2 cannot be used 00:18:01.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.096 EAL: Requested device 0000:3f:01.3 cannot be used 00:18:01.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.096 EAL: Requested device 0000:3f:01.4 cannot be used 00:18:01.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.096 EAL: Requested device 0000:3f:01.5 cannot be used 00:18:01.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.096 EAL: Requested device 0000:3f:01.6 cannot be used 00:18:01.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.096 EAL: Requested device 0000:3f:01.7 cannot be used 00:18:01.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.096 EAL: Requested device 0000:3f:02.0 cannot be used 00:18:01.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.096 EAL: Requested device 0000:3f:02.1 cannot be used 00:18:01.096 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.096 EAL: Requested device 0000:3f:02.2 cannot be used 00:18:01.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.096 EAL: Requested device 0000:3f:02.3 cannot be used 00:18:01.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.096 EAL: Requested device 0000:3f:02.4 cannot be used 00:18:01.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.096 EAL: Requested device 0000:3f:02.5 cannot be used 00:18:01.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.097 EAL: Requested device 0000:3f:02.6 cannot be used 00:18:01.097 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:01.097 EAL: Requested device 0000:3f:02.7 cannot be used 00:18:01.366 [2024-07-25 11:00:08.281213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.626 [2024-07-25 11:00:08.561283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.885 [2024-07-25 11:00:08.882284] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:01.885 [2024-07-25 11:00:08.882317] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:02.143 11:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:02.143 11:00:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:18:02.143 11:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:18:02.143 11:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:18:02.143 11:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:18:02.143 11:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:18:02.143 11:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:02.143 11:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:02.143 11:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:18:02.143 11:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:02.143 11:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:02.710 malloc1 00:18:02.710 11:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:02.710 [2024-07-25 11:00:09.820024] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:02.710 [2024-07-25 11:00:09.820086] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.710 [2024-07-25 11:00:09.820117] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:18:02.710 [2024-07-25 11:00:09.820133] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.710 [2024-07-25 11:00:09.822892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.710 [2024-07-25 11:00:09.822927] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:02.710 pt1 00:18:02.969 11:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:18:02.969 11:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:18:02.969 11:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:18:02.969 11:00:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:18:02.969 11:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:02.969 11:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:02.969 11:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:18:02.969 11:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:02.969 11:00:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:03.227 malloc2 00:18:03.227 11:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:03.227 [2024-07-25 11:00:10.328025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:03.227 [2024-07-25 11:00:10.328087] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.227 [2024-07-25 11:00:10.328115] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:18:03.227 [2024-07-25 11:00:10.328131] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.227 [2024-07-25 11:00:10.330937] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.227 [2024-07-25 11:00:10.330977] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:03.227 pt2 00:18:03.485 11:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:18:03.485 11:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:18:03.485 11:00:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:18:03.485 11:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:18:03.485 11:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:03.485 11:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:03.485 11:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:18:03.485 11:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:03.485 11:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:04.052 malloc3 00:18:04.052 11:00:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:04.310 [2024-07-25 11:00:11.384392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:04.310 [2024-07-25 11:00:11.384455] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:04.310 [2024-07-25 11:00:11.384486] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:18:04.310 [2024-07-25 11:00:11.384502] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:04.310 [2024-07-25 11:00:11.387231] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:04.310 [2024-07-25 11:00:11.387265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:04.310 pt3 00:18:04.310 11:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 
00:18:04.310 11:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:18:04.310 11:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:18:04.569 [2024-07-25 11:00:11.621086] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:04.569 [2024-07-25 11:00:11.623409] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:04.569 [2024-07-25 11:00:11.623497] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:04.569 [2024-07-25 11:00:11.623692] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:04.569 [2024-07-25 11:00:11.623714] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:04.569 [2024-07-25 11:00:11.624067] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:18:04.569 [2024-07-25 11:00:11.624318] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:04.569 [2024-07-25 11:00:11.624334] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:04.569 [2024-07-25 11:00:11.624548] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.569 11:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:18:04.569 11:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:04.569 11:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:04.569 11:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:04.569 11:00:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:04.569 11:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:04.569 11:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:04.569 11:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:04.569 11:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:04.569 11:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:04.569 11:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.569 11:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.828 11:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:04.828 "name": "raid_bdev1", 00:18:04.828 "uuid": "762c8700-8650-49b2-9484-1a626dd89e20", 00:18:04.828 "strip_size_kb": 64, 00:18:04.828 "state": "online", 00:18:04.828 "raid_level": "raid0", 00:18:04.828 "superblock": true, 00:18:04.828 "num_base_bdevs": 3, 00:18:04.828 "num_base_bdevs_discovered": 3, 00:18:04.828 "num_base_bdevs_operational": 3, 00:18:04.828 "base_bdevs_list": [ 00:18:04.828 { 00:18:04.828 "name": "pt1", 00:18:04.828 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:04.828 "is_configured": true, 00:18:04.828 "data_offset": 2048, 00:18:04.828 "data_size": 63488 00:18:04.828 }, 00:18:04.828 { 00:18:04.828 "name": "pt2", 00:18:04.828 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:04.828 "is_configured": true, 00:18:04.828 "data_offset": 2048, 00:18:04.828 "data_size": 63488 00:18:04.828 }, 00:18:04.828 { 00:18:04.828 "name": "pt3", 00:18:04.828 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:04.828 
"is_configured": true, 00:18:04.828 "data_offset": 2048, 00:18:04.828 "data_size": 63488 00:18:04.828 } 00:18:04.828 ] 00:18:04.828 }' 00:18:04.828 11:00:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:04.828 11:00:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:05.394 11:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:18:05.394 11:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:05.394 11:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:05.394 11:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:05.394 11:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:05.394 11:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:05.394 11:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:05.394 11:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:05.651 [2024-07-25 11:00:12.624165] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:05.651 11:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:05.651 "name": "raid_bdev1", 00:18:05.651 "aliases": [ 00:18:05.651 "762c8700-8650-49b2-9484-1a626dd89e20" 00:18:05.651 ], 00:18:05.651 "product_name": "Raid Volume", 00:18:05.651 "block_size": 512, 00:18:05.651 "num_blocks": 190464, 00:18:05.651 "uuid": "762c8700-8650-49b2-9484-1a626dd89e20", 00:18:05.651 "assigned_rate_limits": { 00:18:05.651 "rw_ios_per_sec": 0, 00:18:05.651 "rw_mbytes_per_sec": 0, 00:18:05.651 "r_mbytes_per_sec": 0, 00:18:05.651 "w_mbytes_per_sec": 0 00:18:05.651 }, 
00:18:05.651 "claimed": false, 00:18:05.651 "zoned": false, 00:18:05.651 "supported_io_types": { 00:18:05.651 "read": true, 00:18:05.651 "write": true, 00:18:05.651 "unmap": true, 00:18:05.651 "flush": true, 00:18:05.651 "reset": true, 00:18:05.651 "nvme_admin": false, 00:18:05.651 "nvme_io": false, 00:18:05.651 "nvme_io_md": false, 00:18:05.651 "write_zeroes": true, 00:18:05.651 "zcopy": false, 00:18:05.651 "get_zone_info": false, 00:18:05.651 "zone_management": false, 00:18:05.651 "zone_append": false, 00:18:05.651 "compare": false, 00:18:05.651 "compare_and_write": false, 00:18:05.651 "abort": false, 00:18:05.651 "seek_hole": false, 00:18:05.651 "seek_data": false, 00:18:05.651 "copy": false, 00:18:05.651 "nvme_iov_md": false 00:18:05.651 }, 00:18:05.651 "memory_domains": [ 00:18:05.651 { 00:18:05.651 "dma_device_id": "system", 00:18:05.651 "dma_device_type": 1 00:18:05.651 }, 00:18:05.651 { 00:18:05.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.651 "dma_device_type": 2 00:18:05.651 }, 00:18:05.651 { 00:18:05.651 "dma_device_id": "system", 00:18:05.651 "dma_device_type": 1 00:18:05.651 }, 00:18:05.651 { 00:18:05.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.651 "dma_device_type": 2 00:18:05.651 }, 00:18:05.651 { 00:18:05.651 "dma_device_id": "system", 00:18:05.651 "dma_device_type": 1 00:18:05.651 }, 00:18:05.651 { 00:18:05.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.651 "dma_device_type": 2 00:18:05.651 } 00:18:05.651 ], 00:18:05.651 "driver_specific": { 00:18:05.651 "raid": { 00:18:05.651 "uuid": "762c8700-8650-49b2-9484-1a626dd89e20", 00:18:05.651 "strip_size_kb": 64, 00:18:05.651 "state": "online", 00:18:05.651 "raid_level": "raid0", 00:18:05.651 "superblock": true, 00:18:05.651 "num_base_bdevs": 3, 00:18:05.651 "num_base_bdevs_discovered": 3, 00:18:05.651 "num_base_bdevs_operational": 3, 00:18:05.651 "base_bdevs_list": [ 00:18:05.651 { 00:18:05.651 "name": "pt1", 00:18:05.651 "uuid": "00000000-0000-0000-0000-000000000001", 
00:18:05.651 "is_configured": true, 00:18:05.651 "data_offset": 2048, 00:18:05.651 "data_size": 63488 00:18:05.651 }, 00:18:05.651 { 00:18:05.651 "name": "pt2", 00:18:05.651 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:05.651 "is_configured": true, 00:18:05.651 "data_offset": 2048, 00:18:05.651 "data_size": 63488 00:18:05.651 }, 00:18:05.651 { 00:18:05.651 "name": "pt3", 00:18:05.651 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:05.651 "is_configured": true, 00:18:05.651 "data_offset": 2048, 00:18:05.651 "data_size": 63488 00:18:05.651 } 00:18:05.651 ] 00:18:05.651 } 00:18:05.651 } 00:18:05.651 }' 00:18:05.651 11:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:05.651 11:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:05.651 pt2 00:18:05.651 pt3' 00:18:05.651 11:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:05.651 11:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:05.651 11:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:05.909 11:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:05.909 "name": "pt1", 00:18:05.909 "aliases": [ 00:18:05.909 "00000000-0000-0000-0000-000000000001" 00:18:05.909 ], 00:18:05.909 "product_name": "passthru", 00:18:05.909 "block_size": 512, 00:18:05.909 "num_blocks": 65536, 00:18:05.909 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:05.909 "assigned_rate_limits": { 00:18:05.909 "rw_ios_per_sec": 0, 00:18:05.909 "rw_mbytes_per_sec": 0, 00:18:05.909 "r_mbytes_per_sec": 0, 00:18:05.909 "w_mbytes_per_sec": 0 00:18:05.909 }, 00:18:05.909 "claimed": true, 00:18:05.909 "claim_type": "exclusive_write", 
00:18:05.909 "zoned": false, 00:18:05.909 "supported_io_types": { 00:18:05.909 "read": true, 00:18:05.909 "write": true, 00:18:05.909 "unmap": true, 00:18:05.909 "flush": true, 00:18:05.909 "reset": true, 00:18:05.909 "nvme_admin": false, 00:18:05.909 "nvme_io": false, 00:18:05.909 "nvme_io_md": false, 00:18:05.909 "write_zeroes": true, 00:18:05.909 "zcopy": true, 00:18:05.909 "get_zone_info": false, 00:18:05.909 "zone_management": false, 00:18:05.909 "zone_append": false, 00:18:05.909 "compare": false, 00:18:05.909 "compare_and_write": false, 00:18:05.909 "abort": true, 00:18:05.909 "seek_hole": false, 00:18:05.909 "seek_data": false, 00:18:05.909 "copy": true, 00:18:05.909 "nvme_iov_md": false 00:18:05.909 }, 00:18:05.909 "memory_domains": [ 00:18:05.909 { 00:18:05.909 "dma_device_id": "system", 00:18:05.909 "dma_device_type": 1 00:18:05.909 }, 00:18:05.909 { 00:18:05.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.909 "dma_device_type": 2 00:18:05.909 } 00:18:05.909 ], 00:18:05.909 "driver_specific": { 00:18:05.909 "passthru": { 00:18:05.909 "name": "pt1", 00:18:05.909 "base_bdev_name": "malloc1" 00:18:05.909 } 00:18:05.909 } 00:18:05.909 }' 00:18:05.909 11:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:05.909 11:00:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:05.909 11:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:05.909 11:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:06.167 11:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:06.167 11:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:06.167 11:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:06.167 11:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:06.167 11:00:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:06.167 11:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:06.167 11:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:06.167 11:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:06.167 11:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:06.167 11:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:06.167 11:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:06.423 11:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:06.423 "name": "pt2", 00:18:06.423 "aliases": [ 00:18:06.423 "00000000-0000-0000-0000-000000000002" 00:18:06.423 ], 00:18:06.423 "product_name": "passthru", 00:18:06.423 "block_size": 512, 00:18:06.423 "num_blocks": 65536, 00:18:06.423 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:06.423 "assigned_rate_limits": { 00:18:06.423 "rw_ios_per_sec": 0, 00:18:06.423 "rw_mbytes_per_sec": 0, 00:18:06.423 "r_mbytes_per_sec": 0, 00:18:06.423 "w_mbytes_per_sec": 0 00:18:06.423 }, 00:18:06.423 "claimed": true, 00:18:06.423 "claim_type": "exclusive_write", 00:18:06.423 "zoned": false, 00:18:06.423 "supported_io_types": { 00:18:06.423 "read": true, 00:18:06.423 "write": true, 00:18:06.423 "unmap": true, 00:18:06.423 "flush": true, 00:18:06.423 "reset": true, 00:18:06.424 "nvme_admin": false, 00:18:06.424 "nvme_io": false, 00:18:06.424 "nvme_io_md": false, 00:18:06.424 "write_zeroes": true, 00:18:06.424 "zcopy": true, 00:18:06.424 "get_zone_info": false, 00:18:06.424 "zone_management": false, 00:18:06.424 "zone_append": false, 00:18:06.424 "compare": false, 00:18:06.424 "compare_and_write": false, 00:18:06.424 
"abort": true, 00:18:06.424 "seek_hole": false, 00:18:06.424 "seek_data": false, 00:18:06.424 "copy": true, 00:18:06.424 "nvme_iov_md": false 00:18:06.424 }, 00:18:06.424 "memory_domains": [ 00:18:06.424 { 00:18:06.424 "dma_device_id": "system", 00:18:06.424 "dma_device_type": 1 00:18:06.424 }, 00:18:06.424 { 00:18:06.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.424 "dma_device_type": 2 00:18:06.424 } 00:18:06.424 ], 00:18:06.424 "driver_specific": { 00:18:06.424 "passthru": { 00:18:06.424 "name": "pt2", 00:18:06.424 "base_bdev_name": "malloc2" 00:18:06.424 } 00:18:06.424 } 00:18:06.424 }' 00:18:06.424 11:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:06.424 11:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:06.681 11:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:06.681 11:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:06.681 11:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:06.681 11:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:06.681 11:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:06.681 11:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:06.681 11:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:06.681 11:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:06.681 11:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:06.938 11:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:06.938 11:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:06.938 11:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 
00:18:06.938 11:00:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:18:06.938 11:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:06.938 "name": "pt3", 00:18:06.938 "aliases": [ 00:18:06.938 "00000000-0000-0000-0000-000000000003" 00:18:06.938 ], 00:18:06.938 "product_name": "passthru", 00:18:06.938 "block_size": 512, 00:18:06.938 "num_blocks": 65536, 00:18:06.938 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:06.938 "assigned_rate_limits": { 00:18:06.938 "rw_ios_per_sec": 0, 00:18:06.938 "rw_mbytes_per_sec": 0, 00:18:06.938 "r_mbytes_per_sec": 0, 00:18:06.938 "w_mbytes_per_sec": 0 00:18:06.938 }, 00:18:06.938 "claimed": true, 00:18:06.938 "claim_type": "exclusive_write", 00:18:06.938 "zoned": false, 00:18:06.938 "supported_io_types": { 00:18:06.938 "read": true, 00:18:06.938 "write": true, 00:18:06.938 "unmap": true, 00:18:06.938 "flush": true, 00:18:06.938 "reset": true, 00:18:06.938 "nvme_admin": false, 00:18:06.938 "nvme_io": false, 00:18:06.938 "nvme_io_md": false, 00:18:06.938 "write_zeroes": true, 00:18:06.938 "zcopy": true, 00:18:06.938 "get_zone_info": false, 00:18:06.938 "zone_management": false, 00:18:06.938 "zone_append": false, 00:18:06.938 "compare": false, 00:18:06.938 "compare_and_write": false, 00:18:06.938 "abort": true, 00:18:06.938 "seek_hole": false, 00:18:06.938 "seek_data": false, 00:18:06.938 "copy": true, 00:18:06.938 "nvme_iov_md": false 00:18:06.938 }, 00:18:06.938 "memory_domains": [ 00:18:06.938 { 00:18:06.938 "dma_device_id": "system", 00:18:06.938 "dma_device_type": 1 00:18:06.938 }, 00:18:06.938 { 00:18:06.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.938 "dma_device_type": 2 00:18:06.938 } 00:18:06.938 ], 00:18:06.938 "driver_specific": { 00:18:06.938 "passthru": { 00:18:06.938 "name": "pt3", 00:18:06.938 "base_bdev_name": "malloc3" 00:18:06.939 } 
00:18:06.939 } 00:18:06.939 }' 00:18:06.939 11:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:07.196 11:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:07.197 11:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:07.197 11:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:07.197 11:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:07.197 11:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:07.197 11:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:07.197 11:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:07.197 11:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:07.197 11:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:07.454 11:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:07.454 11:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:07.454 11:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:07.454 11:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:18:07.711 [2024-07-25 11:00:14.581465] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:07.711 11:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=762c8700-8650-49b2-9484-1a626dd89e20 00:18:07.711 11:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 762c8700-8650-49b2-9484-1a626dd89e20 ']' 00:18:07.711 11:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:07.711 [2024-07-25 11:00:14.809684] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:07.711 [2024-07-25 11:00:14.809718] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:07.711 [2024-07-25 11:00:14.809800] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:07.711 [2024-07-25 11:00:14.809871] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:07.711 [2024-07-25 11:00:14.809888] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:07.968 11:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.968 11:00:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:18:07.968 11:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:18:07.968 11:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:18:07.968 11:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:18:07.968 11:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:08.226 11:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:18:08.226 11:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:08.483 11:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in 
"${base_bdevs_pt[@]}" 00:18:08.484 11:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:08.741 11:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:08.741 11:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:08.999 11:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:18:08.999 11:00:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:08.999 11:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:18:08.999 11:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:08.999 11:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:08.999 11:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:08.999 11:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:08.999 11:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:08.999 11:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:08.999 11:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:08.999 11:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:18:08.999 11:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:18:08.999 11:00:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:18:09.255 [2024-07-25 11:00:16.165271] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:09.255 [2024-07-25 11:00:16.167601] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:09.256 [2024-07-25 11:00:16.167664] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:09.256 [2024-07-25 11:00:16.167721] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:09.256 [2024-07-25 11:00:16.167779] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:09.256 [2024-07-25 11:00:16.167808] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:09.256 [2024-07-25 11:00:16.167832] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:09.256 [2024-07-25 11:00:16.167846] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:09.256 request: 00:18:09.256 { 00:18:09.256 "name": "raid_bdev1", 00:18:09.256 
"raid_level": "raid0", 00:18:09.256 "base_bdevs": [ 00:18:09.256 "malloc1", 00:18:09.256 "malloc2", 00:18:09.256 "malloc3" 00:18:09.256 ], 00:18:09.256 "strip_size_kb": 64, 00:18:09.256 "superblock": false, 00:18:09.256 "method": "bdev_raid_create", 00:18:09.256 "req_id": 1 00:18:09.256 } 00:18:09.256 Got JSON-RPC error response 00:18:09.256 response: 00:18:09.256 { 00:18:09.256 "code": -17, 00:18:09.256 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:09.256 } 00:18:09.256 11:00:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:18:09.256 11:00:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:09.256 11:00:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:09.256 11:00:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:09.256 11:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.256 11:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:18:09.513 11:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:18:09.513 11:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:18:09.513 11:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:09.513 [2024-07-25 11:00:16.598365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:09.513 [2024-07-25 11:00:16.598421] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.513 [2024-07-25 11:00:16.598448] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000041a80 00:18:09.513 [2024-07-25 11:00:16.598463] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.513 [2024-07-25 11:00:16.601225] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.513 [2024-07-25 11:00:16.601259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:09.513 [2024-07-25 11:00:16.601350] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:09.513 [2024-07-25 11:00:16.601425] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:09.513 pt1 00:18:09.513 11:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:18:09.513 11:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:09.513 11:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:09.513 11:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:09.513 11:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:09.513 11:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:09.513 11:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:09.513 11:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:09.513 11:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:09.513 11:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:09.513 11:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.513 11:00:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.771 11:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:09.771 "name": "raid_bdev1", 00:18:09.771 "uuid": "762c8700-8650-49b2-9484-1a626dd89e20", 00:18:09.771 "strip_size_kb": 64, 00:18:09.771 "state": "configuring", 00:18:09.771 "raid_level": "raid0", 00:18:09.771 "superblock": true, 00:18:09.771 "num_base_bdevs": 3, 00:18:09.771 "num_base_bdevs_discovered": 1, 00:18:09.771 "num_base_bdevs_operational": 3, 00:18:09.771 "base_bdevs_list": [ 00:18:09.771 { 00:18:09.771 "name": "pt1", 00:18:09.771 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:09.771 "is_configured": true, 00:18:09.771 "data_offset": 2048, 00:18:09.771 "data_size": 63488 00:18:09.771 }, 00:18:09.771 { 00:18:09.771 "name": null, 00:18:09.771 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:09.771 "is_configured": false, 00:18:09.771 "data_offset": 2048, 00:18:09.771 "data_size": 63488 00:18:09.771 }, 00:18:09.771 { 00:18:09.771 "name": null, 00:18:09.771 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:09.771 "is_configured": false, 00:18:09.771 "data_offset": 2048, 00:18:09.771 "data_size": 63488 00:18:09.771 } 00:18:09.771 ] 00:18:09.771 }' 00:18:09.771 11:00:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:09.771 11:00:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:10.336 11:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 3 -gt 2 ']' 00:18:10.336 11:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:10.594 [2024-07-25 11:00:17.593027] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:10.594 [2024-07-25 11:00:17.593100] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.594 [2024-07-25 11:00:17.593128] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042080 00:18:10.594 [2024-07-25 11:00:17.593152] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.594 [2024-07-25 11:00:17.593711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.594 [2024-07-25 11:00:17.593735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:10.594 [2024-07-25 11:00:17.593823] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:10.594 [2024-07-25 11:00:17.593849] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:10.594 pt2 00:18:10.594 11:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:10.852 [2024-07-25 11:00:17.821685] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:10.852 11:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:18:10.852 11:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:10.852 11:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:10.852 11:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:10.852 11:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:10.852 11:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:10.852 11:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:10.852 11:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:18:10.852 11:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:10.852 11:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:10.852 11:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.852 11:00:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.109 11:00:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:11.109 "name": "raid_bdev1", 00:18:11.109 "uuid": "762c8700-8650-49b2-9484-1a626dd89e20", 00:18:11.109 "strip_size_kb": 64, 00:18:11.109 "state": "configuring", 00:18:11.109 "raid_level": "raid0", 00:18:11.109 "superblock": true, 00:18:11.109 "num_base_bdevs": 3, 00:18:11.109 "num_base_bdevs_discovered": 1, 00:18:11.109 "num_base_bdevs_operational": 3, 00:18:11.109 "base_bdevs_list": [ 00:18:11.109 { 00:18:11.109 "name": "pt1", 00:18:11.109 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:11.109 "is_configured": true, 00:18:11.109 "data_offset": 2048, 00:18:11.109 "data_size": 63488 00:18:11.109 }, 00:18:11.109 { 00:18:11.109 "name": null, 00:18:11.109 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:11.109 "is_configured": false, 00:18:11.109 "data_offset": 2048, 00:18:11.109 "data_size": 63488 00:18:11.109 }, 00:18:11.109 { 00:18:11.109 "name": null, 00:18:11.109 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:11.109 "is_configured": false, 00:18:11.109 "data_offset": 2048, 00:18:11.109 "data_size": 63488 00:18:11.109 } 00:18:11.109 ] 00:18:11.109 }' 00:18:11.109 11:00:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:11.109 11:00:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.673 11:00:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:18:11.673 11:00:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:18:11.673 11:00:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:11.930 [2024-07-25 11:00:18.848474] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:11.930 [2024-07-25 11:00:18.848539] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:11.930 [2024-07-25 11:00:18.848563] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042380 00:18:11.930 [2024-07-25 11:00:18.848581] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:11.930 [2024-07-25 11:00:18.849125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:11.930 [2024-07-25 11:00:18.849162] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:11.931 [2024-07-25 11:00:18.849249] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:11.931 [2024-07-25 11:00:18.849279] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:11.931 pt2 00:18:11.931 11:00:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:18:11.931 11:00:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:18:11.931 11:00:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:12.218 [2024-07-25 11:00:19.077082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:12.218 [2024-07-25 11:00:19.077151] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.218 [2024-07-25 11:00:19.077175] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042680 00:18:12.218 [2024-07-25 11:00:19.077192] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.218 [2024-07-25 11:00:19.077754] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.218 [2024-07-25 11:00:19.077781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:12.218 [2024-07-25 11:00:19.077872] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:12.218 [2024-07-25 11:00:19.077907] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:12.218 [2024-07-25 11:00:19.078078] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:12.218 [2024-07-25 11:00:19.078095] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:12.218 [2024-07-25 11:00:19.078405] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:18:12.218 [2024-07-25 11:00:19.078634] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:12.218 [2024-07-25 11:00:19.078648] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:12.218 [2024-07-25 11:00:19.078823] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:12.218 pt3 00:18:12.218 11:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:18:12.218 11:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:18:12.218 11:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:18:12.218 11:00:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:12.218 11:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:12.218 11:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:12.218 11:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:12.218 11:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:12.218 11:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:12.218 11:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:12.218 11:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:12.218 11:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:12.218 11:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.218 11:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.218 11:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:12.218 "name": "raid_bdev1", 00:18:12.218 "uuid": "762c8700-8650-49b2-9484-1a626dd89e20", 00:18:12.218 "strip_size_kb": 64, 00:18:12.218 "state": "online", 00:18:12.218 "raid_level": "raid0", 00:18:12.218 "superblock": true, 00:18:12.218 "num_base_bdevs": 3, 00:18:12.218 "num_base_bdevs_discovered": 3, 00:18:12.218 "num_base_bdevs_operational": 3, 00:18:12.218 "base_bdevs_list": [ 00:18:12.218 { 00:18:12.218 "name": "pt1", 00:18:12.218 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:12.218 "is_configured": true, 00:18:12.218 "data_offset": 2048, 00:18:12.218 "data_size": 63488 00:18:12.218 }, 00:18:12.218 { 00:18:12.218 "name": "pt2", 
00:18:12.218 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:12.218 "is_configured": true, 00:18:12.218 "data_offset": 2048, 00:18:12.218 "data_size": 63488 00:18:12.218 }, 00:18:12.218 { 00:18:12.218 "name": "pt3", 00:18:12.218 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:12.218 "is_configured": true, 00:18:12.218 "data_offset": 2048, 00:18:12.218 "data_size": 63488 00:18:12.218 } 00:18:12.218 ] 00:18:12.218 }' 00:18:12.218 11:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:12.218 11:00:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.798 11:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:18:12.798 11:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:12.798 11:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:12.798 11:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:12.799 11:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:12.799 11:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:12.799 11:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:12.799 11:00:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:13.056 [2024-07-25 11:00:20.056067] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:13.056 11:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:13.056 "name": "raid_bdev1", 00:18:13.056 "aliases": [ 00:18:13.056 "762c8700-8650-49b2-9484-1a626dd89e20" 00:18:13.056 ], 00:18:13.056 "product_name": "Raid Volume", 00:18:13.056 "block_size": 512, 
00:18:13.056 "num_blocks": 190464, 00:18:13.056 "uuid": "762c8700-8650-49b2-9484-1a626dd89e20", 00:18:13.056 "assigned_rate_limits": { 00:18:13.056 "rw_ios_per_sec": 0, 00:18:13.056 "rw_mbytes_per_sec": 0, 00:18:13.056 "r_mbytes_per_sec": 0, 00:18:13.056 "w_mbytes_per_sec": 0 00:18:13.056 }, 00:18:13.056 "claimed": false, 00:18:13.056 "zoned": false, 00:18:13.056 "supported_io_types": { 00:18:13.056 "read": true, 00:18:13.056 "write": true, 00:18:13.056 "unmap": true, 00:18:13.056 "flush": true, 00:18:13.056 "reset": true, 00:18:13.056 "nvme_admin": false, 00:18:13.056 "nvme_io": false, 00:18:13.056 "nvme_io_md": false, 00:18:13.056 "write_zeroes": true, 00:18:13.056 "zcopy": false, 00:18:13.056 "get_zone_info": false, 00:18:13.056 "zone_management": false, 00:18:13.057 "zone_append": false, 00:18:13.057 "compare": false, 00:18:13.057 "compare_and_write": false, 00:18:13.057 "abort": false, 00:18:13.057 "seek_hole": false, 00:18:13.057 "seek_data": false, 00:18:13.057 "copy": false, 00:18:13.057 "nvme_iov_md": false 00:18:13.057 }, 00:18:13.057 "memory_domains": [ 00:18:13.057 { 00:18:13.057 "dma_device_id": "system", 00:18:13.057 "dma_device_type": 1 00:18:13.057 }, 00:18:13.057 { 00:18:13.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.057 "dma_device_type": 2 00:18:13.057 }, 00:18:13.057 { 00:18:13.057 "dma_device_id": "system", 00:18:13.057 "dma_device_type": 1 00:18:13.057 }, 00:18:13.057 { 00:18:13.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.057 "dma_device_type": 2 00:18:13.057 }, 00:18:13.057 { 00:18:13.057 "dma_device_id": "system", 00:18:13.057 "dma_device_type": 1 00:18:13.057 }, 00:18:13.057 { 00:18:13.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.057 "dma_device_type": 2 00:18:13.057 } 00:18:13.057 ], 00:18:13.057 "driver_specific": { 00:18:13.057 "raid": { 00:18:13.057 "uuid": "762c8700-8650-49b2-9484-1a626dd89e20", 00:18:13.057 "strip_size_kb": 64, 00:18:13.057 "state": "online", 00:18:13.057 "raid_level": "raid0", 
00:18:13.057 "superblock": true, 00:18:13.057 "num_base_bdevs": 3, 00:18:13.057 "num_base_bdevs_discovered": 3, 00:18:13.057 "num_base_bdevs_operational": 3, 00:18:13.057 "base_bdevs_list": [ 00:18:13.057 { 00:18:13.057 "name": "pt1", 00:18:13.057 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:13.057 "is_configured": true, 00:18:13.057 "data_offset": 2048, 00:18:13.057 "data_size": 63488 00:18:13.057 }, 00:18:13.057 { 00:18:13.057 "name": "pt2", 00:18:13.057 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:13.057 "is_configured": true, 00:18:13.057 "data_offset": 2048, 00:18:13.057 "data_size": 63488 00:18:13.057 }, 00:18:13.057 { 00:18:13.057 "name": "pt3", 00:18:13.057 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:13.057 "is_configured": true, 00:18:13.057 "data_offset": 2048, 00:18:13.057 "data_size": 63488 00:18:13.057 } 00:18:13.057 ] 00:18:13.057 } 00:18:13.057 } 00:18:13.057 }' 00:18:13.057 11:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:13.057 11:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:13.057 pt2 00:18:13.057 pt3' 00:18:13.057 11:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:13.057 11:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:13.057 11:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:13.314 11:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:13.314 "name": "pt1", 00:18:13.314 "aliases": [ 00:18:13.314 "00000000-0000-0000-0000-000000000001" 00:18:13.314 ], 00:18:13.314 "product_name": "passthru", 00:18:13.314 "block_size": 512, 00:18:13.314 "num_blocks": 65536, 00:18:13.314 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:18:13.314 "assigned_rate_limits": { 00:18:13.314 "rw_ios_per_sec": 0, 00:18:13.315 "rw_mbytes_per_sec": 0, 00:18:13.315 "r_mbytes_per_sec": 0, 00:18:13.315 "w_mbytes_per_sec": 0 00:18:13.315 }, 00:18:13.315 "claimed": true, 00:18:13.315 "claim_type": "exclusive_write", 00:18:13.315 "zoned": false, 00:18:13.315 "supported_io_types": { 00:18:13.315 "read": true, 00:18:13.315 "write": true, 00:18:13.315 "unmap": true, 00:18:13.315 "flush": true, 00:18:13.315 "reset": true, 00:18:13.315 "nvme_admin": false, 00:18:13.315 "nvme_io": false, 00:18:13.315 "nvme_io_md": false, 00:18:13.315 "write_zeroes": true, 00:18:13.315 "zcopy": true, 00:18:13.315 "get_zone_info": false, 00:18:13.315 "zone_management": false, 00:18:13.315 "zone_append": false, 00:18:13.315 "compare": false, 00:18:13.315 "compare_and_write": false, 00:18:13.315 "abort": true, 00:18:13.315 "seek_hole": false, 00:18:13.315 "seek_data": false, 00:18:13.315 "copy": true, 00:18:13.315 "nvme_iov_md": false 00:18:13.315 }, 00:18:13.315 "memory_domains": [ 00:18:13.315 { 00:18:13.315 "dma_device_id": "system", 00:18:13.315 "dma_device_type": 1 00:18:13.315 }, 00:18:13.315 { 00:18:13.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.315 "dma_device_type": 2 00:18:13.315 } 00:18:13.315 ], 00:18:13.315 "driver_specific": { 00:18:13.315 "passthru": { 00:18:13.315 "name": "pt1", 00:18:13.315 "base_bdev_name": "malloc1" 00:18:13.315 } 00:18:13.315 } 00:18:13.315 }' 00:18:13.315 11:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:13.315 11:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:13.315 11:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:13.315 11:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:13.578 11:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:13.578 11:00:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:13.578 11:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:13.578 11:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:13.578 11:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:13.578 11:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:13.578 11:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:13.578 11:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:13.578 11:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:13.578 11:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:13.578 11:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:13.855 11:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:13.855 "name": "pt2", 00:18:13.855 "aliases": [ 00:18:13.855 "00000000-0000-0000-0000-000000000002" 00:18:13.855 ], 00:18:13.855 "product_name": "passthru", 00:18:13.855 "block_size": 512, 00:18:13.855 "num_blocks": 65536, 00:18:13.855 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:13.855 "assigned_rate_limits": { 00:18:13.855 "rw_ios_per_sec": 0, 00:18:13.855 "rw_mbytes_per_sec": 0, 00:18:13.855 "r_mbytes_per_sec": 0, 00:18:13.855 "w_mbytes_per_sec": 0 00:18:13.855 }, 00:18:13.855 "claimed": true, 00:18:13.855 "claim_type": "exclusive_write", 00:18:13.855 "zoned": false, 00:18:13.855 "supported_io_types": { 00:18:13.855 "read": true, 00:18:13.855 "write": true, 00:18:13.855 "unmap": true, 00:18:13.855 "flush": true, 00:18:13.855 "reset": true, 00:18:13.855 "nvme_admin": false, 00:18:13.855 
"nvme_io": false, 00:18:13.855 "nvme_io_md": false, 00:18:13.855 "write_zeroes": true, 00:18:13.855 "zcopy": true, 00:18:13.855 "get_zone_info": false, 00:18:13.855 "zone_management": false, 00:18:13.855 "zone_append": false, 00:18:13.855 "compare": false, 00:18:13.855 "compare_and_write": false, 00:18:13.855 "abort": true, 00:18:13.855 "seek_hole": false, 00:18:13.855 "seek_data": false, 00:18:13.855 "copy": true, 00:18:13.855 "nvme_iov_md": false 00:18:13.855 }, 00:18:13.855 "memory_domains": [ 00:18:13.855 { 00:18:13.855 "dma_device_id": "system", 00:18:13.855 "dma_device_type": 1 00:18:13.855 }, 00:18:13.855 { 00:18:13.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.855 "dma_device_type": 2 00:18:13.855 } 00:18:13.855 ], 00:18:13.855 "driver_specific": { 00:18:13.855 "passthru": { 00:18:13.855 "name": "pt2", 00:18:13.855 "base_bdev_name": "malloc2" 00:18:13.855 } 00:18:13.855 } 00:18:13.855 }' 00:18:13.855 11:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:13.855 11:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:14.113 11:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:14.113 11:00:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:14.113 11:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:14.113 11:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:14.113 11:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:14.113 11:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:14.113 11:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:14.113 11:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:14.113 11:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:18:14.375 11:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:14.375 11:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:14.375 11:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:18:14.375 11:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:14.375 11:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:14.375 "name": "pt3", 00:18:14.375 "aliases": [ 00:18:14.375 "00000000-0000-0000-0000-000000000003" 00:18:14.375 ], 00:18:14.375 "product_name": "passthru", 00:18:14.375 "block_size": 512, 00:18:14.375 "num_blocks": 65536, 00:18:14.375 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:14.375 "assigned_rate_limits": { 00:18:14.375 "rw_ios_per_sec": 0, 00:18:14.375 "rw_mbytes_per_sec": 0, 00:18:14.375 "r_mbytes_per_sec": 0, 00:18:14.375 "w_mbytes_per_sec": 0 00:18:14.375 }, 00:18:14.375 "claimed": true, 00:18:14.375 "claim_type": "exclusive_write", 00:18:14.375 "zoned": false, 00:18:14.375 "supported_io_types": { 00:18:14.375 "read": true, 00:18:14.375 "write": true, 00:18:14.375 "unmap": true, 00:18:14.375 "flush": true, 00:18:14.375 "reset": true, 00:18:14.375 "nvme_admin": false, 00:18:14.375 "nvme_io": false, 00:18:14.375 "nvme_io_md": false, 00:18:14.375 "write_zeroes": true, 00:18:14.375 "zcopy": true, 00:18:14.375 "get_zone_info": false, 00:18:14.375 "zone_management": false, 00:18:14.375 "zone_append": false, 00:18:14.375 "compare": false, 00:18:14.375 "compare_and_write": false, 00:18:14.375 "abort": true, 00:18:14.375 "seek_hole": false, 00:18:14.375 "seek_data": false, 00:18:14.375 "copy": true, 00:18:14.375 "nvme_iov_md": false 00:18:14.375 }, 00:18:14.375 "memory_domains": [ 00:18:14.375 { 00:18:14.375 "dma_device_id": "system", 00:18:14.375 
"dma_device_type": 1 00:18:14.375 }, 00:18:14.375 { 00:18:14.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.375 "dma_device_type": 2 00:18:14.375 } 00:18:14.375 ], 00:18:14.375 "driver_specific": { 00:18:14.375 "passthru": { 00:18:14.375 "name": "pt3", 00:18:14.375 "base_bdev_name": "malloc3" 00:18:14.375 } 00:18:14.375 } 00:18:14.375 }' 00:18:14.375 11:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:14.632 11:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:14.632 11:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:14.632 11:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:14.632 11:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:14.632 11:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:14.632 11:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:14.632 11:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:14.632 11:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:14.632 11:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:14.890 11:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:14.890 11:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:14.890 11:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:14.890 11:00:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:18:15.148 [2024-07-25 11:00:22.017410] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:15.148 11:00:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 762c8700-8650-49b2-9484-1a626dd89e20 '!=' 762c8700-8650-49b2-9484-1a626dd89e20 ']' 00:18:15.148 11:00:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid0 00:18:15.148 11:00:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:15.148 11:00:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:15.148 11:00:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 3591966 00:18:15.148 11:00:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 3591966 ']' 00:18:15.148 11:00:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 3591966 00:18:15.148 11:00:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:18:15.148 11:00:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:15.148 11:00:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3591966 00:18:15.148 11:00:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:15.148 11:00:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:15.148 11:00:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3591966' 00:18:15.148 killing process with pid 3591966 00:18:15.148 11:00:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 3591966 00:18:15.148 [2024-07-25 11:00:22.096392] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:15.148 [2024-07-25 11:00:22.096490] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:15.148 11:00:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 3591966 00:18:15.148 [2024-07-25 11:00:22.096559] bdev_raid.c: 
464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:15.148 [2024-07-25 11:00:22.096583] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:15.405 [2024-07-25 11:00:22.422757] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:17.312 11:00:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:18:17.312 00:18:17.312 real 0m16.184s 00:18:17.312 user 0m27.266s 00:18:17.312 sys 0m2.728s 00:18:17.312 11:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:17.312 11:00:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.312 ************************************ 00:18:17.312 END TEST raid_superblock_test 00:18:17.312 ************************************ 00:18:17.312 11:00:24 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:18:17.312 11:00:24 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:17.312 11:00:24 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:17.312 11:00:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:17.312 ************************************ 00:18:17.312 START TEST raid_read_error_test 00:18:17.312 ************************************ 00:18:17.312 11:00:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:18:17.312 11:00:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:18:17.312 11:00:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:18:17.312 11:00:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:18:17.312 11:00:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:18:17.312 11:00:24 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:17.312 11:00:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:18:17.312 11:00:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:18:17.312 11:00:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:17.312 11:00:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:18:17.312 11:00:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:18:17.312 11:00:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:17.312 11:00:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:18:17.312 11:00:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:18:17.312 11:00:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:17.312 11:00:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:17.312 11:00:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:18:17.312 11:00:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:18:17.312 11:00:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:18:17.312 11:00:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:18:17.312 11:00:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:18:17.312 11:00:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:18:17.312 11:00:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:18:17.312 11:00:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:18:17.312 11:00:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # 
create_arg+=' -z 64' 00:18:17.312 11:00:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:18:17.312 11:00:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.eCjAFNkXZb 00:18:17.312 11:00:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=3595010 00:18:17.312 11:00:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 3595010 /var/tmp/spdk-raid.sock 00:18:17.312 11:00:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:17.312 11:00:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 3595010 ']' 00:18:17.312 11:00:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:17.312 11:00:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:17.312 11:00:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:17.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:17.312 11:00:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:17.312 11:00:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.312 [2024-07-25 11:00:24.341708] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:18:17.312 [2024-07-25 11:00:24.341830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3595010 ] 00:18:17.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:17.569 EAL: Requested device 0000:3d:01.0 cannot be used 00:18:17.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:17.569 EAL: Requested device 0000:3d:01.1 cannot be used 00:18:17.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:17.569 EAL: Requested device 0000:3d:01.2 cannot be used 00:18:17.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:17.569 EAL: Requested device 0000:3d:01.3 cannot be used 00:18:17.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:17.569 EAL: Requested device 0000:3d:01.4 cannot be used 00:18:17.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:17.569 EAL: Requested device 0000:3d:01.5 cannot be used 00:18:17.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:17.569 EAL: Requested device 0000:3d:01.6 cannot be used 00:18:17.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:17.569 EAL: Requested device 0000:3d:01.7 cannot be used 00:18:17.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:17.569 EAL: Requested device 0000:3d:02.0 cannot be used 00:18:17.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:17.569 EAL: Requested device 0000:3d:02.1 cannot be used 00:18:17.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:17.569 EAL: Requested device 0000:3d:02.2 cannot be used 00:18:17.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:17.569 EAL: Requested device 0000:3d:02.3 cannot be used 
00:18:17.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:17.569 EAL: Requested device 0000:3d:02.4 cannot be used 00:18:17.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:17.569 EAL: Requested device 0000:3d:02.5 cannot be used 00:18:17.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:17.569 EAL: Requested device 0000:3d:02.6 cannot be used 00:18:17.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:17.569 EAL: Requested device 0000:3d:02.7 cannot be used 00:18:17.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:17.569 EAL: Requested device 0000:3f:01.0 cannot be used 00:18:17.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:17.569 EAL: Requested device 0000:3f:01.1 cannot be used 00:18:17.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:17.569 EAL: Requested device 0000:3f:01.2 cannot be used 00:18:17.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:17.569 EAL: Requested device 0000:3f:01.3 cannot be used 00:18:17.569 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:17.569 EAL: Requested device 0000:3f:01.4 cannot be used 00:18:17.570 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:17.570 EAL: Requested device 0000:3f:01.5 cannot be used 00:18:17.570 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:17.570 EAL: Requested device 0000:3f:01.6 cannot be used 00:18:17.570 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:17.570 EAL: Requested device 0000:3f:01.7 cannot be used 00:18:17.570 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:17.570 EAL: Requested device 0000:3f:02.0 cannot be used 00:18:17.570 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:17.570 EAL: Requested device 0000:3f:02.1 cannot be used 00:18:17.570 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:17.570 EAL: Requested device 0000:3f:02.2 cannot be used 00:18:17.570 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:17.570 EAL: Requested device 0000:3f:02.3 cannot be used 00:18:17.570 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:17.570 EAL: Requested device 0000:3f:02.4 cannot be used 00:18:17.570 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:17.570 EAL: Requested device 0000:3f:02.5 cannot be used 00:18:17.570 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:17.570 EAL: Requested device 0000:3f:02.6 cannot be used 00:18:17.570 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:17.570 EAL: Requested device 0000:3f:02.7 cannot be used 00:18:17.570 [2024-07-25 11:00:24.571122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.826 [2024-07-25 11:00:24.833446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.083 [2024-07-25 11:00:25.163197] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:18.083 [2024-07-25 11:00:25.163235] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:18.339 11:00:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:18.339 11:00:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:18:18.339 11:00:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:18:18.339 11:00:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:18.597 BaseBdev1_malloc 00:18:18.597 11:00:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:18.854 true 00:18:18.854 11:00:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:19.111 [2024-07-25 11:00:26.067376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:19.111 [2024-07-25 11:00:26.067442] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:19.111 [2024-07-25 11:00:26.067467] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:18:19.111 [2024-07-25 11:00:26.067490] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:19.111 [2024-07-25 11:00:26.070240] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:19.111 [2024-07-25 11:00:26.070277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:19.111 BaseBdev1 00:18:19.111 11:00:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:18:19.111 11:00:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:19.368 BaseBdev2_malloc 00:18:19.368 11:00:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:19.626 true 00:18:19.626 11:00:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:19.884 [2024-07-25 11:00:26.805461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:18:19.884 [2024-07-25 11:00:26.805528] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:19.884 [2024-07-25 11:00:26.805555] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:18:19.884 [2024-07-25 11:00:26.805577] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:19.884 [2024-07-25 11:00:26.808386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:19.884 [2024-07-25 11:00:26.808425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:19.884 BaseBdev2 00:18:19.884 11:00:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:18:19.884 11:00:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:20.141 BaseBdev3_malloc 00:18:20.141 11:00:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:18:20.399 true 00:18:20.399 11:00:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:20.657 [2024-07-25 11:00:27.545001] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:20.657 [2024-07-25 11:00:27.545062] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.657 [2024-07-25 11:00:27.545088] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:18:20.657 [2024-07-25 11:00:27.545107] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.657 [2024-07-25 
11:00:27.547863] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.657 [2024-07-25 11:00:27.547900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:20.657 BaseBdev3 00:18:20.657 11:00:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:18:20.657 [2024-07-25 11:00:27.773653] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:20.915 [2024-07-25 11:00:27.775982] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:20.915 [2024-07-25 11:00:27.776070] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:20.915 [2024-07-25 11:00:27.776338] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:20.915 [2024-07-25 11:00:27.776368] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:20.915 [2024-07-25 11:00:27.776687] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:18:20.915 [2024-07-25 11:00:27.776922] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:20.915 [2024-07-25 11:00:27.776944] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:20.915 [2024-07-25 11:00:27.777158] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.915 11:00:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:18:20.915 11:00:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:20.915 11:00:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:18:20.915 11:00:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:20.915 11:00:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:20.915 11:00:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:20.915 11:00:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:20.915 11:00:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:20.915 11:00:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:20.915 11:00:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:20.915 11:00:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.915 11:00:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.915 11:00:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:20.915 "name": "raid_bdev1", 00:18:20.915 "uuid": "78db2d9e-b9c8-4c27-8ef9-b816005cf743", 00:18:20.915 "strip_size_kb": 64, 00:18:20.915 "state": "online", 00:18:20.915 "raid_level": "raid0", 00:18:20.915 "superblock": true, 00:18:20.915 "num_base_bdevs": 3, 00:18:20.915 "num_base_bdevs_discovered": 3, 00:18:20.915 "num_base_bdevs_operational": 3, 00:18:20.915 "base_bdevs_list": [ 00:18:20.915 { 00:18:20.915 "name": "BaseBdev1", 00:18:20.915 "uuid": "b6b7841f-42e1-52c2-8e80-727f3a2cf9e1", 00:18:20.915 "is_configured": true, 00:18:20.915 "data_offset": 2048, 00:18:20.915 "data_size": 63488 00:18:20.915 }, 00:18:20.915 { 00:18:20.915 "name": "BaseBdev2", 00:18:20.915 "uuid": "123fdffd-a07b-5d7a-a109-599af05a859f", 00:18:20.915 "is_configured": true, 00:18:20.915 "data_offset": 2048, 
00:18:20.915 "data_size": 63488 00:18:20.915 }, 00:18:20.915 { 00:18:20.915 "name": "BaseBdev3", 00:18:20.915 "uuid": "908b325d-60d8-5a92-a7d2-ae824c9f83f1", 00:18:20.915 "is_configured": true, 00:18:20.915 "data_offset": 2048, 00:18:20.915 "data_size": 63488 00:18:20.915 } 00:18:20.915 ] 00:18:20.915 }' 00:18:20.915 11:00:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:20.915 11:00:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.481 11:00:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:18:21.481 11:00:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:21.739 [2024-07-25 11:00:28.698174] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:18:22.669 11:00:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:18:22.927 11:00:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:18:22.927 11:00:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:18:22.927 11:00:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:18:22.927 11:00:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:18:22.927 11:00:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:22.927 11:00:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:22.927 11:00:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:22.927 11:00:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:22.927 11:00:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:22.927 11:00:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:22.927 11:00:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:22.927 11:00:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:22.927 11:00:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:22.927 11:00:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.927 11:00:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.185 11:00:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:23.185 "name": "raid_bdev1", 00:18:23.185 "uuid": "78db2d9e-b9c8-4c27-8ef9-b816005cf743", 00:18:23.185 "strip_size_kb": 64, 00:18:23.185 "state": "online", 00:18:23.185 "raid_level": "raid0", 00:18:23.185 "superblock": true, 00:18:23.185 "num_base_bdevs": 3, 00:18:23.185 "num_base_bdevs_discovered": 3, 00:18:23.185 "num_base_bdevs_operational": 3, 00:18:23.185 "base_bdevs_list": [ 00:18:23.185 { 00:18:23.185 "name": "BaseBdev1", 00:18:23.185 "uuid": "b6b7841f-42e1-52c2-8e80-727f3a2cf9e1", 00:18:23.185 "is_configured": true, 00:18:23.185 "data_offset": 2048, 00:18:23.185 "data_size": 63488 00:18:23.185 }, 00:18:23.185 { 00:18:23.185 "name": "BaseBdev2", 00:18:23.185 "uuid": "123fdffd-a07b-5d7a-a109-599af05a859f", 00:18:23.185 "is_configured": true, 00:18:23.185 "data_offset": 2048, 00:18:23.185 "data_size": 63488 00:18:23.185 }, 00:18:23.185 { 00:18:23.185 "name": "BaseBdev3", 00:18:23.185 "uuid": "908b325d-60d8-5a92-a7d2-ae824c9f83f1", 
00:18:23.185 "is_configured": true, 00:18:23.185 "data_offset": 2048, 00:18:23.185 "data_size": 63488 00:18:23.185 } 00:18:23.185 ] 00:18:23.185 }' 00:18:23.185 11:00:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:23.185 11:00:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:23.752 11:00:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:24.316 [2024-07-25 11:00:31.129981] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:24.316 [2024-07-25 11:00:31.130024] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:24.316 [2024-07-25 11:00:31.133313] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:24.316 [2024-07-25 11:00:31.133362] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:24.316 [2024-07-25 11:00:31.133411] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:24.316 [2024-07-25 11:00:31.133430] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:24.316 0 00:18:24.316 11:00:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 3595010 00:18:24.316 11:00:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 3595010 ']' 00:18:24.316 11:00:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 3595010 00:18:24.316 11:00:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:18:24.316 11:00:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:24.316 11:00:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3595010 
00:18:24.316 11:00:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:24.316 11:00:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:24.316 11:00:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3595010' 00:18:24.316 killing process with pid 3595010 00:18:24.316 11:00:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 3595010 00:18:24.316 11:00:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 3595010 00:18:24.316 [2024-07-25 11:00:31.217168] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:24.580 [2024-07-25 11:00:31.462241] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:26.482 11:00:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.eCjAFNkXZb 00:18:26.482 11:00:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:18:26.482 11:00:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:18:26.482 11:00:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.41 00:18:26.482 11:00:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:18:26.482 11:00:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:26.482 11:00:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:26.482 11:00:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.41 != \0\.\0\0 ]] 00:18:26.482 00:18:26.482 real 0m9.046s 00:18:26.482 user 0m12.894s 00:18:26.482 sys 0m1.427s 00:18:26.482 11:00:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:26.482 11:00:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.482 ************************************ 00:18:26.482 
END TEST raid_read_error_test 00:18:26.482 ************************************ 00:18:26.483 11:00:33 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:18:26.483 11:00:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:26.483 11:00:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:26.483 11:00:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:26.483 ************************************ 00:18:26.483 START TEST raid_write_error_test 00:18:26.483 ************************************ 00:18:26.483 11:00:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:18:26.483 11:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:18:26.483 11:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:18:26.483 11:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:18:26.483 11:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:18:26.483 11:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:26.483 11:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:18:26.483 11:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:18:26.483 11:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:26.483 11:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:18:26.483 11:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:18:26.483 11:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:26.483 11:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:18:26.483 11:00:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:18:26.483 11:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:18:26.483 11:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:26.483 11:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:18:26.483 11:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:18:26.483 11:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:18:26.483 11:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:18:26.483 11:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:18:26.483 11:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:18:26.483 11:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:18:26.483 11:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:18:26.483 11:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:18:26.483 11:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:18:26.483 11:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.qZfUEshaKM 00:18:26.483 11:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=3596571 00:18:26.483 11:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 3596571 /var/tmp/spdk-raid.sock 00:18:26.483 11:00:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 3596571 ']' 00:18:26.483 11:00:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:26.483 11:00:33 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:26.483 11:00:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:26.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:26.483 11:00:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:26.483 11:00:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.483 11:00:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:26.483 [2024-07-25 11:00:33.473033] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:18:26.483 [2024-07-25 11:00:33.473162] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3596571 ] 00:18:26.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.740 EAL: Requested device 0000:3d:01.0 cannot be used 00:18:26.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.740 EAL: Requested device 0000:3d:01.1 cannot be used 00:18:26.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.740 EAL: Requested device 0000:3d:01.2 cannot be used 00:18:26.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.740 EAL: Requested device 0000:3d:01.3 cannot be used 00:18:26.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.740 EAL: Requested device 0000:3d:01.4 cannot be used 00:18:26.740 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.740 EAL: Requested device 0000:3d:01.5 cannot be used 00:18:26.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.740 EAL: Requested device 0000:3d:01.6 cannot be used 00:18:26.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.740 EAL: Requested device 0000:3d:01.7 cannot be used 00:18:26.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.740 EAL: Requested device 0000:3d:02.0 cannot be used 00:18:26.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.740 EAL: Requested device 0000:3d:02.1 cannot be used 00:18:26.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.740 EAL: Requested device 0000:3d:02.2 cannot be used 00:18:26.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.740 EAL: Requested device 0000:3d:02.3 cannot be used 00:18:26.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.740 EAL: Requested device 0000:3d:02.4 cannot be used 00:18:26.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.740 EAL: Requested device 0000:3d:02.5 cannot be used 00:18:26.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.740 EAL: Requested device 0000:3d:02.6 cannot be used 00:18:26.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.740 EAL: Requested device 0000:3d:02.7 cannot be used 00:18:26.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.740 EAL: Requested device 0000:3f:01.0 cannot be used 00:18:26.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.740 EAL: Requested device 0000:3f:01.1 cannot be used 00:18:26.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.740 EAL: Requested device 0000:3f:01.2 cannot be used 00:18:26.740 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.740 EAL: Requested device 0000:3f:01.3 cannot be used 00:18:26.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.740 EAL: Requested device 0000:3f:01.4 cannot be used 00:18:26.740 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.740 EAL: Requested device 0000:3f:01.5 cannot be used 00:18:26.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.741 EAL: Requested device 0000:3f:01.6 cannot be used 00:18:26.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.741 EAL: Requested device 0000:3f:01.7 cannot be used 00:18:26.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.741 EAL: Requested device 0000:3f:02.0 cannot be used 00:18:26.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.741 EAL: Requested device 0000:3f:02.1 cannot be used 00:18:26.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.741 EAL: Requested device 0000:3f:02.2 cannot be used 00:18:26.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.741 EAL: Requested device 0000:3f:02.3 cannot be used 00:18:26.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.741 EAL: Requested device 0000:3f:02.4 cannot be used 00:18:26.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.741 EAL: Requested device 0000:3f:02.5 cannot be used 00:18:26.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.741 EAL: Requested device 0000:3f:02.6 cannot be used 00:18:26.741 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:26.741 EAL: Requested device 0000:3f:02.7 cannot be used 00:18:26.741 [2024-07-25 11:00:33.698119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.997 [2024-07-25 11:00:33.978920] reactor.c: 941:reactor_run: 
*NOTICE*: Reactor started on core 0 00:18:27.255 [2024-07-25 11:00:34.327586] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:27.255 [2024-07-25 11:00:34.327622] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:27.524 11:00:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:27.524 11:00:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:18:27.524 11:00:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:18:27.524 11:00:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:28.116 BaseBdev1_malloc 00:18:28.116 11:00:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:28.683 true 00:18:28.683 11:00:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:28.683 [2024-07-25 11:00:35.788446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:28.683 [2024-07-25 11:00:35.788509] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.683 [2024-07-25 11:00:35.788536] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:18:28.683 [2024-07-25 11:00:35.788557] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.683 [2024-07-25 11:00:35.791351] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.683 [2024-07-25 11:00:35.791390] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:18:28.683 BaseBdev1 00:18:28.941 11:00:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:18:28.941 11:00:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:29.507 BaseBdev2_malloc 00:18:29.507 11:00:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:29.507 true 00:18:29.507 11:00:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:30.086 [2024-07-25 11:00:37.077543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:30.086 [2024-07-25 11:00:37.077601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.086 [2024-07-25 11:00:37.077628] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:18:30.086 [2024-07-25 11:00:37.077649] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.086 [2024-07-25 11:00:37.080446] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.086 [2024-07-25 11:00:37.080485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:30.086 BaseBdev2 00:18:30.086 11:00:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:18:30.086 11:00:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:30.652 
BaseBdev3_malloc 00:18:30.652 11:00:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:18:30.910 true 00:18:30.910 11:00:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:31.477 [2024-07-25 11:00:38.366572] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:31.477 [2024-07-25 11:00:38.366636] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.477 [2024-07-25 11:00:38.366664] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:18:31.477 [2024-07-25 11:00:38.366682] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.477 [2024-07-25 11:00:38.369475] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.477 [2024-07-25 11:00:38.369517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:31.477 BaseBdev3 00:18:31.477 11:00:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:18:31.735 [2024-07-25 11:00:38.603240] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:31.735 [2024-07-25 11:00:38.605596] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:31.735 [2024-07-25 11:00:38.605685] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:31.735 [2024-07-25 11:00:38.605949] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000008200 00:18:31.735 [2024-07-25 11:00:38.605965] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:31.735 [2024-07-25 11:00:38.606310] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:18:31.735 [2024-07-25 11:00:38.606557] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:31.735 [2024-07-25 11:00:38.606578] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:31.735 [2024-07-25 11:00:38.606815] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:31.735 11:00:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:18:31.735 11:00:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:31.735 11:00:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:31.735 11:00:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:31.735 11:00:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:31.735 11:00:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:31.735 11:00:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:31.735 11:00:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:31.735 11:00:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:31.735 11:00:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:31.735 11:00:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.735 11:00:38 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.735 11:00:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:31.735 "name": "raid_bdev1", 00:18:31.735 "uuid": "dac1782f-09fa-4ce9-b818-602a7008717f", 00:18:31.735 "strip_size_kb": 64, 00:18:31.735 "state": "online", 00:18:31.735 "raid_level": "raid0", 00:18:31.735 "superblock": true, 00:18:31.735 "num_base_bdevs": 3, 00:18:31.735 "num_base_bdevs_discovered": 3, 00:18:31.735 "num_base_bdevs_operational": 3, 00:18:31.735 "base_bdevs_list": [ 00:18:31.735 { 00:18:31.735 "name": "BaseBdev1", 00:18:31.735 "uuid": "040bb823-deb6-5bfa-a6ce-07f755cd21dd", 00:18:31.735 "is_configured": true, 00:18:31.735 "data_offset": 2048, 00:18:31.735 "data_size": 63488 00:18:31.735 }, 00:18:31.735 { 00:18:31.735 "name": "BaseBdev2", 00:18:31.735 "uuid": "f86cb72a-4d33-58f3-93cb-ffc9e511634a", 00:18:31.735 "is_configured": true, 00:18:31.735 "data_offset": 2048, 00:18:31.735 "data_size": 63488 00:18:31.735 }, 00:18:31.735 { 00:18:31.735 "name": "BaseBdev3", 00:18:31.735 "uuid": "d8657d7e-8a6b-51d0-92e9-45e4abddf73b", 00:18:31.735 "is_configured": true, 00:18:31.735 "data_offset": 2048, 00:18:31.735 "data_size": 63488 00:18:31.735 } 00:18:31.735 ] 00:18:31.735 }' 00:18:31.735 11:00:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:31.735 11:00:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.306 11:00:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:18:32.306 11:00:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:32.565 [2024-07-25 11:00:39.519678] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:18:33.500 11:00:40 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:18:33.759 11:00:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:18:33.759 11:00:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:18:33.759 11:00:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:18:33.759 11:00:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:18:33.759 11:00:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:33.759 11:00:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:33.759 11:00:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:33.759 11:00:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:33.759 11:00:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:33.759 11:00:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:33.759 11:00:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:33.759 11:00:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:33.759 11:00:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:33.759 11:00:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.759 11:00:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.016 11:00:40 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:34.016 "name": "raid_bdev1", 00:18:34.016 "uuid": "dac1782f-09fa-4ce9-b818-602a7008717f", 00:18:34.016 "strip_size_kb": 64, 00:18:34.016 "state": "online", 00:18:34.016 "raid_level": "raid0", 00:18:34.016 "superblock": true, 00:18:34.016 "num_base_bdevs": 3, 00:18:34.016 "num_base_bdevs_discovered": 3, 00:18:34.016 "num_base_bdevs_operational": 3, 00:18:34.016 "base_bdevs_list": [ 00:18:34.016 { 00:18:34.016 "name": "BaseBdev1", 00:18:34.016 "uuid": "040bb823-deb6-5bfa-a6ce-07f755cd21dd", 00:18:34.016 "is_configured": true, 00:18:34.016 "data_offset": 2048, 00:18:34.016 "data_size": 63488 00:18:34.016 }, 00:18:34.016 { 00:18:34.016 "name": "BaseBdev2", 00:18:34.016 "uuid": "f86cb72a-4d33-58f3-93cb-ffc9e511634a", 00:18:34.016 "is_configured": true, 00:18:34.016 "data_offset": 2048, 00:18:34.016 "data_size": 63488 00:18:34.016 }, 00:18:34.016 { 00:18:34.016 "name": "BaseBdev3", 00:18:34.016 "uuid": "d8657d7e-8a6b-51d0-92e9-45e4abddf73b", 00:18:34.016 "is_configured": true, 00:18:34.016 "data_offset": 2048, 00:18:34.016 "data_size": 63488 00:18:34.016 } 00:18:34.016 ] 00:18:34.016 }' 00:18:34.016 11:00:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:34.016 11:00:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.583 11:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:34.583 [2024-07-25 11:00:41.639362] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:34.583 [2024-07-25 11:00:41.639400] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:34.583 [2024-07-25 11:00:41.642661] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:34.583 [2024-07-25 11:00:41.642709] bdev_raid.c: 
343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:34.583 [2024-07-25 11:00:41.642756] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:34.583 [2024-07-25 11:00:41.642771] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:34.583 0 00:18:34.583 11:00:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 3596571 00:18:34.583 11:00:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 3596571 ']' 00:18:34.583 11:00:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 3596571 00:18:34.583 11:00:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:18:34.583 11:00:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:34.583 11:00:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3596571 00:18:34.842 11:00:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:34.842 11:00:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:34.842 11:00:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3596571' 00:18:34.842 killing process with pid 3596571 00:18:34.842 11:00:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 3596571 00:18:34.842 [2024-07-25 11:00:41.708949] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:34.842 11:00:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 3596571 00:18:34.842 [2024-07-25 11:00:41.937644] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:36.747 11:00:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.qZfUEshaKM 
00:18:36.747 11:00:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:18:36.747 11:00:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:18:36.747 11:00:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.47 00:18:36.747 11:00:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:18:36.747 11:00:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:36.747 11:00:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:36.747 11:00:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.47 != \0\.\0\0 ]] 00:18:36.747 00:18:36.747 real 0m10.326s 00:18:36.747 user 0m15.395s 00:18:36.747 sys 0m1.504s 00:18:36.747 11:00:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:36.747 11:00:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.747 ************************************ 00:18:36.747 END TEST raid_write_error_test 00:18:36.747 ************************************ 00:18:36.747 11:00:43 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:18:36.747 11:00:43 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:18:36.747 11:00:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:36.747 11:00:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:36.747 11:00:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:36.747 ************************************ 00:18:36.747 START TEST raid_state_function_test 00:18:36.747 ************************************ 00:18:36.748 11:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:18:36.748 11:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # 
local raid_level=concat 00:18:36.748 11:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:18:36.748 11:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:18:36.748 11:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:36.748 11:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:36.748 11:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:36.748 11:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:18:36.748 11:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:36.748 11:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:36.748 11:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:18:36.748 11:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:36.748 11:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:36.748 11:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:18:36.748 11:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:36.748 11:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:36.748 11:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:36.748 11:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:36.748 11:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:36.748 11:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:36.748 11:00:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:36.748 11:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:36.748 11:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:18:36.748 11:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:18:36.748 11:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:18:36.748 11:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:18:36.748 11:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:18:36.748 11:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=3598506 00:18:36.748 11:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3598506' 00:18:36.748 Process raid pid: 3598506 00:18:36.748 11:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:36.748 11:00:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 3598506 /var/tmp/spdk-raid.sock 00:18:36.748 11:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 3598506 ']' 00:18:36.748 11:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:36.748 11:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:36.748 11:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:18:36.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:36.748 11:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:36.748 11:00:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.007 [2024-07-25 11:00:43.880395] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:18:37.007 [2024-07-25 11:00:43.880518] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:37.007 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:18:37.007 EAL: Requested device 0000:3d:01.0 cannot be used 00:18:37.008 [2024-07-25 11:00:44.111670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.575 [2024-07-25 11:00:44.395988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.835 [2024-07-25 11:00:44.758208] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:37.835 [2024-07-25 11:00:44.758247] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:37.835 11:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:37.835 11:00:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:18:37.835 11:00:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:38.093 [2024-07-25 11:00:45.151304] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:38.093 [2024-07-25 11:00:45.151365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:38.093 [2024-07-25 11:00:45.151381] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:38.093 [2024-07-25 11:00:45.151398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:38.093 [2024-07-25 11:00:45.151409] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:38.093 [2024-07-25 11:00:45.151425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:38.093 11:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:38.093 11:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:38.093 11:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:38.093 11:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:38.093 11:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:38.093 11:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:38.093 11:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:38.093 11:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:38.093 11:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:18:38.093 11:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:38.093 11:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.093 11:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:38.351 11:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:38.351 "name": "Existed_Raid", 00:18:38.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.351 "strip_size_kb": 64, 00:18:38.351 "state": "configuring", 00:18:38.351 "raid_level": "concat", 00:18:38.351 "superblock": false, 00:18:38.351 "num_base_bdevs": 3, 00:18:38.351 "num_base_bdevs_discovered": 0, 00:18:38.351 "num_base_bdevs_operational": 3, 00:18:38.351 "base_bdevs_list": [ 00:18:38.351 { 00:18:38.351 "name": "BaseBdev1", 00:18:38.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.351 "is_configured": false, 00:18:38.351 "data_offset": 0, 00:18:38.351 "data_size": 0 00:18:38.351 }, 00:18:38.351 { 00:18:38.351 "name": "BaseBdev2", 00:18:38.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.351 "is_configured": false, 00:18:38.351 "data_offset": 0, 00:18:38.351 "data_size": 0 00:18:38.351 }, 00:18:38.351 { 00:18:38.351 "name": "BaseBdev3", 00:18:38.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.352 "is_configured": false, 00:18:38.352 "data_offset": 0, 00:18:38.352 "data_size": 0 00:18:38.352 } 00:18:38.352 ] 00:18:38.352 }' 00:18:38.352 11:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:38.352 11:00:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.917 11:00:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:39.175 [2024-07-25 11:00:46.169866] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:39.175 [2024-07-25 11:00:46.169914] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:39.175 11:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:39.433 [2024-07-25 11:00:46.398541] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:39.433 [2024-07-25 11:00:46.398589] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:39.433 [2024-07-25 11:00:46.398603] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:39.433 [2024-07-25 11:00:46.398623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:39.433 [2024-07-25 11:00:46.398635] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:39.433 [2024-07-25 11:00:46.398651] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:39.433 11:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:39.691 [2024-07-25 11:00:46.676275] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:39.691 BaseBdev1 00:18:39.691 11:00:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:18:39.691 11:00:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:39.691 11:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:39.691 11:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:39.691 11:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:39.691 11:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:39.691 11:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:39.950 11:00:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:40.209 [ 00:18:40.209 { 00:18:40.209 "name": "BaseBdev1", 00:18:40.209 "aliases": [ 00:18:40.209 "bbdc9ea1-74e1-42ff-a593-3db8415de3a2" 00:18:40.209 ], 00:18:40.209 "product_name": "Malloc disk", 00:18:40.209 "block_size": 512, 00:18:40.209 "num_blocks": 65536, 00:18:40.209 "uuid": "bbdc9ea1-74e1-42ff-a593-3db8415de3a2", 00:18:40.209 "assigned_rate_limits": { 00:18:40.209 "rw_ios_per_sec": 0, 00:18:40.209 "rw_mbytes_per_sec": 0, 00:18:40.209 "r_mbytes_per_sec": 0, 00:18:40.209 "w_mbytes_per_sec": 0 00:18:40.209 }, 00:18:40.209 "claimed": true, 00:18:40.209 "claim_type": "exclusive_write", 00:18:40.209 "zoned": false, 00:18:40.209 "supported_io_types": { 00:18:40.209 "read": true, 00:18:40.209 "write": true, 00:18:40.209 "unmap": true, 00:18:40.209 "flush": true, 00:18:40.209 "reset": true, 00:18:40.209 "nvme_admin": false, 00:18:40.209 "nvme_io": false, 00:18:40.209 "nvme_io_md": false, 00:18:40.209 "write_zeroes": true, 00:18:40.209 "zcopy": true, 00:18:40.209 "get_zone_info": false, 00:18:40.209 "zone_management": false, 00:18:40.209 "zone_append": 
false, 00:18:40.209 "compare": false, 00:18:40.209 "compare_and_write": false, 00:18:40.209 "abort": true, 00:18:40.209 "seek_hole": false, 00:18:40.209 "seek_data": false, 00:18:40.209 "copy": true, 00:18:40.209 "nvme_iov_md": false 00:18:40.209 }, 00:18:40.209 "memory_domains": [ 00:18:40.209 { 00:18:40.209 "dma_device_id": "system", 00:18:40.209 "dma_device_type": 1 00:18:40.209 }, 00:18:40.209 { 00:18:40.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:40.209 "dma_device_type": 2 00:18:40.209 } 00:18:40.209 ], 00:18:40.209 "driver_specific": {} 00:18:40.209 } 00:18:40.209 ] 00:18:40.209 11:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:40.209 11:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:40.209 11:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:40.209 11:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:40.209 11:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:40.209 11:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:40.209 11:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:40.209 11:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:40.209 11:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:40.209 11:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:40.209 11:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:40.209 11:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.209 11:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:40.467 11:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:40.467 "name": "Existed_Raid", 00:18:40.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.467 "strip_size_kb": 64, 00:18:40.467 "state": "configuring", 00:18:40.467 "raid_level": "concat", 00:18:40.467 "superblock": false, 00:18:40.467 "num_base_bdevs": 3, 00:18:40.467 "num_base_bdevs_discovered": 1, 00:18:40.467 "num_base_bdevs_operational": 3, 00:18:40.467 "base_bdevs_list": [ 00:18:40.467 { 00:18:40.467 "name": "BaseBdev1", 00:18:40.467 "uuid": "bbdc9ea1-74e1-42ff-a593-3db8415de3a2", 00:18:40.467 "is_configured": true, 00:18:40.468 "data_offset": 0, 00:18:40.468 "data_size": 65536 00:18:40.468 }, 00:18:40.468 { 00:18:40.468 "name": "BaseBdev2", 00:18:40.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.468 "is_configured": false, 00:18:40.468 "data_offset": 0, 00:18:40.468 "data_size": 0 00:18:40.468 }, 00:18:40.468 { 00:18:40.468 "name": "BaseBdev3", 00:18:40.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.468 "is_configured": false, 00:18:40.468 "data_offset": 0, 00:18:40.468 "data_size": 0 00:18:40.468 } 00:18:40.468 ] 00:18:40.468 }' 00:18:40.468 11:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:40.468 11:00:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.036 11:00:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:41.333 [2024-07-25 11:00:48.156331] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:41.333 
[2024-07-25 11:00:48.156396] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:41.333 11:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:41.333 [2024-07-25 11:00:48.381026] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:41.333 [2024-07-25 11:00:48.383382] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:41.333 [2024-07-25 11:00:48.383429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:41.333 [2024-07-25 11:00:48.383445] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:41.333 [2024-07-25 11:00:48.383462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:41.333 11:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:18:41.333 11:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:41.333 11:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:41.333 11:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:41.333 11:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:41.333 11:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:41.333 11:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:41.333 11:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 
00:18:41.333 11:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:41.333 11:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:41.333 11:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:41.333 11:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:41.333 11:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.333 11:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:41.592 11:00:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:41.592 "name": "Existed_Raid", 00:18:41.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.592 "strip_size_kb": 64, 00:18:41.592 "state": "configuring", 00:18:41.592 "raid_level": "concat", 00:18:41.592 "superblock": false, 00:18:41.592 "num_base_bdevs": 3, 00:18:41.592 "num_base_bdevs_discovered": 1, 00:18:41.592 "num_base_bdevs_operational": 3, 00:18:41.592 "base_bdevs_list": [ 00:18:41.592 { 00:18:41.592 "name": "BaseBdev1", 00:18:41.592 "uuid": "bbdc9ea1-74e1-42ff-a593-3db8415de3a2", 00:18:41.592 "is_configured": true, 00:18:41.592 "data_offset": 0, 00:18:41.592 "data_size": 65536 00:18:41.592 }, 00:18:41.592 { 00:18:41.592 "name": "BaseBdev2", 00:18:41.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.592 "is_configured": false, 00:18:41.592 "data_offset": 0, 00:18:41.592 "data_size": 0 00:18:41.592 }, 00:18:41.592 { 00:18:41.592 "name": "BaseBdev3", 00:18:41.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.592 "is_configured": false, 00:18:41.592 "data_offset": 0, 00:18:41.592 "data_size": 0 00:18:41.592 } 00:18:41.592 ] 00:18:41.592 }' 00:18:41.592 11:00:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:41.592 11:00:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.162 11:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:42.421 [2024-07-25 11:00:49.367204] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:42.421 BaseBdev2 00:18:42.421 11:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:18:42.421 11:00:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:42.421 11:00:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:42.421 11:00:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:42.421 11:00:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:42.421 11:00:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:42.421 11:00:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:42.680 11:00:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:42.943 [ 00:18:42.943 { 00:18:42.943 "name": "BaseBdev2", 00:18:42.943 "aliases": [ 00:18:42.943 "774a44c6-5db2-4754-9397-44356774710f" 00:18:42.943 ], 00:18:42.943 "product_name": "Malloc disk", 00:18:42.943 "block_size": 512, 00:18:42.943 "num_blocks": 65536, 00:18:42.943 "uuid": "774a44c6-5db2-4754-9397-44356774710f", 00:18:42.943 
"assigned_rate_limits": { 00:18:42.943 "rw_ios_per_sec": 0, 00:18:42.943 "rw_mbytes_per_sec": 0, 00:18:42.943 "r_mbytes_per_sec": 0, 00:18:42.943 "w_mbytes_per_sec": 0 00:18:42.943 }, 00:18:42.943 "claimed": true, 00:18:42.943 "claim_type": "exclusive_write", 00:18:42.943 "zoned": false, 00:18:42.943 "supported_io_types": { 00:18:42.943 "read": true, 00:18:42.943 "write": true, 00:18:42.943 "unmap": true, 00:18:42.943 "flush": true, 00:18:42.943 "reset": true, 00:18:42.943 "nvme_admin": false, 00:18:42.943 "nvme_io": false, 00:18:42.943 "nvme_io_md": false, 00:18:42.943 "write_zeroes": true, 00:18:42.943 "zcopy": true, 00:18:42.943 "get_zone_info": false, 00:18:42.943 "zone_management": false, 00:18:42.943 "zone_append": false, 00:18:42.943 "compare": false, 00:18:42.943 "compare_and_write": false, 00:18:42.943 "abort": true, 00:18:42.943 "seek_hole": false, 00:18:42.943 "seek_data": false, 00:18:42.943 "copy": true, 00:18:42.943 "nvme_iov_md": false 00:18:42.943 }, 00:18:42.943 "memory_domains": [ 00:18:42.943 { 00:18:42.943 "dma_device_id": "system", 00:18:42.943 "dma_device_type": 1 00:18:42.943 }, 00:18:42.944 { 00:18:42.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.944 "dma_device_type": 2 00:18:42.944 } 00:18:42.944 ], 00:18:42.944 "driver_specific": {} 00:18:42.944 } 00:18:42.944 ] 00:18:42.944 11:00:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:42.944 11:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:42.944 11:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:42.944 11:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:42.944 11:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:42.944 11:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:18:42.944 11:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:42.944 11:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:42.944 11:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:42.944 11:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:42.944 11:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:42.944 11:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:42.944 11:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:42.944 11:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.944 11:00:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:43.205 11:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:43.205 "name": "Existed_Raid", 00:18:43.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.205 "strip_size_kb": 64, 00:18:43.205 "state": "configuring", 00:18:43.205 "raid_level": "concat", 00:18:43.205 "superblock": false, 00:18:43.205 "num_base_bdevs": 3, 00:18:43.205 "num_base_bdevs_discovered": 2, 00:18:43.205 "num_base_bdevs_operational": 3, 00:18:43.205 "base_bdevs_list": [ 00:18:43.205 { 00:18:43.205 "name": "BaseBdev1", 00:18:43.205 "uuid": "bbdc9ea1-74e1-42ff-a593-3db8415de3a2", 00:18:43.205 "is_configured": true, 00:18:43.205 "data_offset": 0, 00:18:43.205 "data_size": 65536 00:18:43.205 }, 00:18:43.205 { 00:18:43.205 "name": "BaseBdev2", 00:18:43.205 "uuid": "774a44c6-5db2-4754-9397-44356774710f", 00:18:43.205 
"is_configured": true, 00:18:43.205 "data_offset": 0, 00:18:43.205 "data_size": 65536 00:18:43.205 }, 00:18:43.205 { 00:18:43.205 "name": "BaseBdev3", 00:18:43.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.205 "is_configured": false, 00:18:43.205 "data_offset": 0, 00:18:43.205 "data_size": 0 00:18:43.205 } 00:18:43.205 ] 00:18:43.205 }' 00:18:43.205 11:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:43.205 11:00:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.769 11:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:43.769 [2024-07-25 11:00:50.858546] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:43.769 [2024-07-25 11:00:50.858591] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:43.769 [2024-07-25 11:00:50.858609] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:43.769 [2024-07-25 11:00:50.858942] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:18:43.769 [2024-07-25 11:00:50.859194] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:43.769 [2024-07-25 11:00:50.859210] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:43.769 [2024-07-25 11:00:50.859541] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:43.769 BaseBdev3 00:18:43.769 11:00:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:18:43.769 11:00:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:43.769 11:00:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:43.769 11:00:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:43.769 11:00:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:43.769 11:00:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:43.769 11:00:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:44.026 11:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:44.283 [ 00:18:44.283 { 00:18:44.283 "name": "BaseBdev3", 00:18:44.283 "aliases": [ 00:18:44.283 "388d79d5-790e-4672-ab55-0a43a58d14b1" 00:18:44.283 ], 00:18:44.283 "product_name": "Malloc disk", 00:18:44.283 "block_size": 512, 00:18:44.283 "num_blocks": 65536, 00:18:44.283 "uuid": "388d79d5-790e-4672-ab55-0a43a58d14b1", 00:18:44.283 "assigned_rate_limits": { 00:18:44.283 "rw_ios_per_sec": 0, 00:18:44.283 "rw_mbytes_per_sec": 0, 00:18:44.283 "r_mbytes_per_sec": 0, 00:18:44.283 "w_mbytes_per_sec": 0 00:18:44.283 }, 00:18:44.283 "claimed": true, 00:18:44.283 "claim_type": "exclusive_write", 00:18:44.283 "zoned": false, 00:18:44.283 "supported_io_types": { 00:18:44.283 "read": true, 00:18:44.283 "write": true, 00:18:44.283 "unmap": true, 00:18:44.283 "flush": true, 00:18:44.283 "reset": true, 00:18:44.283 "nvme_admin": false, 00:18:44.283 "nvme_io": false, 00:18:44.283 "nvme_io_md": false, 00:18:44.283 "write_zeroes": true, 00:18:44.283 "zcopy": true, 00:18:44.283 "get_zone_info": false, 00:18:44.283 "zone_management": false, 00:18:44.283 "zone_append": false, 00:18:44.283 "compare": false, 00:18:44.283 "compare_and_write": false, 00:18:44.283 "abort": true, 00:18:44.283 
"seek_hole": false, 00:18:44.283 "seek_data": false, 00:18:44.283 "copy": true, 00:18:44.283 "nvme_iov_md": false 00:18:44.283 }, 00:18:44.283 "memory_domains": [ 00:18:44.283 { 00:18:44.283 "dma_device_id": "system", 00:18:44.283 "dma_device_type": 1 00:18:44.283 }, 00:18:44.283 { 00:18:44.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:44.283 "dma_device_type": 2 00:18:44.283 } 00:18:44.283 ], 00:18:44.283 "driver_specific": {} 00:18:44.283 } 00:18:44.283 ] 00:18:44.283 11:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:44.283 11:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:44.283 11:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:44.283 11:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:18:44.283 11:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:44.284 11:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:44.284 11:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:44.284 11:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:44.284 11:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:44.284 11:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:44.284 11:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:44.284 11:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:44.284 11:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:44.284 11:00:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:44.284 11:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:44.541 11:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:44.541 "name": "Existed_Raid", 00:18:44.541 "uuid": "2afc4e68-efbf-4b5a-889f-c011b5fa5b5a", 00:18:44.541 "strip_size_kb": 64, 00:18:44.541 "state": "online", 00:18:44.541 "raid_level": "concat", 00:18:44.541 "superblock": false, 00:18:44.541 "num_base_bdevs": 3, 00:18:44.541 "num_base_bdevs_discovered": 3, 00:18:44.541 "num_base_bdevs_operational": 3, 00:18:44.541 "base_bdevs_list": [ 00:18:44.541 { 00:18:44.541 "name": "BaseBdev1", 00:18:44.541 "uuid": "bbdc9ea1-74e1-42ff-a593-3db8415de3a2", 00:18:44.541 "is_configured": true, 00:18:44.541 "data_offset": 0, 00:18:44.541 "data_size": 65536 00:18:44.541 }, 00:18:44.541 { 00:18:44.541 "name": "BaseBdev2", 00:18:44.541 "uuid": "774a44c6-5db2-4754-9397-44356774710f", 00:18:44.541 "is_configured": true, 00:18:44.541 "data_offset": 0, 00:18:44.541 "data_size": 65536 00:18:44.541 }, 00:18:44.541 { 00:18:44.541 "name": "BaseBdev3", 00:18:44.541 "uuid": "388d79d5-790e-4672-ab55-0a43a58d14b1", 00:18:44.541 "is_configured": true, 00:18:44.541 "data_offset": 0, 00:18:44.541 "data_size": 65536 00:18:44.541 } 00:18:44.541 ] 00:18:44.541 }' 00:18:44.541 11:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:44.541 11:00:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.112 11:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:18:45.112 11:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:45.112 11:00:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:45.112 11:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:45.112 11:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:45.112 11:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:45.112 11:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:45.112 11:00:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:45.112 [2024-07-25 11:00:52.146557] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:45.112 11:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:45.112 "name": "Existed_Raid", 00:18:45.112 "aliases": [ 00:18:45.112 "2afc4e68-efbf-4b5a-889f-c011b5fa5b5a" 00:18:45.112 ], 00:18:45.112 "product_name": "Raid Volume", 00:18:45.112 "block_size": 512, 00:18:45.112 "num_blocks": 196608, 00:18:45.112 "uuid": "2afc4e68-efbf-4b5a-889f-c011b5fa5b5a", 00:18:45.112 "assigned_rate_limits": { 00:18:45.112 "rw_ios_per_sec": 0, 00:18:45.112 "rw_mbytes_per_sec": 0, 00:18:45.112 "r_mbytes_per_sec": 0, 00:18:45.112 "w_mbytes_per_sec": 0 00:18:45.112 }, 00:18:45.112 "claimed": false, 00:18:45.112 "zoned": false, 00:18:45.112 "supported_io_types": { 00:18:45.112 "read": true, 00:18:45.112 "write": true, 00:18:45.112 "unmap": true, 00:18:45.112 "flush": true, 00:18:45.112 "reset": true, 00:18:45.112 "nvme_admin": false, 00:18:45.112 "nvme_io": false, 00:18:45.112 "nvme_io_md": false, 00:18:45.112 "write_zeroes": true, 00:18:45.113 "zcopy": false, 00:18:45.113 "get_zone_info": false, 00:18:45.113 "zone_management": false, 00:18:45.113 "zone_append": false, 00:18:45.113 "compare": false, 00:18:45.113 "compare_and_write": false, 00:18:45.113 "abort": 
false, 00:18:45.113 "seek_hole": false, 00:18:45.113 "seek_data": false, 00:18:45.113 "copy": false, 00:18:45.113 "nvme_iov_md": false 00:18:45.113 }, 00:18:45.113 "memory_domains": [ 00:18:45.113 { 00:18:45.113 "dma_device_id": "system", 00:18:45.113 "dma_device_type": 1 00:18:45.113 }, 00:18:45.113 { 00:18:45.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:45.113 "dma_device_type": 2 00:18:45.113 }, 00:18:45.113 { 00:18:45.113 "dma_device_id": "system", 00:18:45.113 "dma_device_type": 1 00:18:45.113 }, 00:18:45.113 { 00:18:45.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:45.113 "dma_device_type": 2 00:18:45.113 }, 00:18:45.113 { 00:18:45.113 "dma_device_id": "system", 00:18:45.113 "dma_device_type": 1 00:18:45.113 }, 00:18:45.113 { 00:18:45.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:45.113 "dma_device_type": 2 00:18:45.113 } 00:18:45.113 ], 00:18:45.113 "driver_specific": { 00:18:45.113 "raid": { 00:18:45.113 "uuid": "2afc4e68-efbf-4b5a-889f-c011b5fa5b5a", 00:18:45.113 "strip_size_kb": 64, 00:18:45.113 "state": "online", 00:18:45.113 "raid_level": "concat", 00:18:45.113 "superblock": false, 00:18:45.113 "num_base_bdevs": 3, 00:18:45.113 "num_base_bdevs_discovered": 3, 00:18:45.113 "num_base_bdevs_operational": 3, 00:18:45.113 "base_bdevs_list": [ 00:18:45.113 { 00:18:45.113 "name": "BaseBdev1", 00:18:45.113 "uuid": "bbdc9ea1-74e1-42ff-a593-3db8415de3a2", 00:18:45.113 "is_configured": true, 00:18:45.113 "data_offset": 0, 00:18:45.113 "data_size": 65536 00:18:45.113 }, 00:18:45.113 { 00:18:45.113 "name": "BaseBdev2", 00:18:45.113 "uuid": "774a44c6-5db2-4754-9397-44356774710f", 00:18:45.113 "is_configured": true, 00:18:45.113 "data_offset": 0, 00:18:45.113 "data_size": 65536 00:18:45.113 }, 00:18:45.113 { 00:18:45.113 "name": "BaseBdev3", 00:18:45.113 "uuid": "388d79d5-790e-4672-ab55-0a43a58d14b1", 00:18:45.113 "is_configured": true, 00:18:45.113 "data_offset": 0, 00:18:45.113 "data_size": 65536 00:18:45.113 } 00:18:45.113 ] 00:18:45.113 } 
00:18:45.113 } 00:18:45.113 }' 00:18:45.113 11:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:45.113 11:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:18:45.113 BaseBdev2 00:18:45.113 BaseBdev3' 00:18:45.113 11:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:45.113 11:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:45.113 11:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:45.371 11:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:45.371 "name": "BaseBdev1", 00:18:45.371 "aliases": [ 00:18:45.371 "bbdc9ea1-74e1-42ff-a593-3db8415de3a2" 00:18:45.371 ], 00:18:45.371 "product_name": "Malloc disk", 00:18:45.371 "block_size": 512, 00:18:45.371 "num_blocks": 65536, 00:18:45.371 "uuid": "bbdc9ea1-74e1-42ff-a593-3db8415de3a2", 00:18:45.371 "assigned_rate_limits": { 00:18:45.371 "rw_ios_per_sec": 0, 00:18:45.371 "rw_mbytes_per_sec": 0, 00:18:45.371 "r_mbytes_per_sec": 0, 00:18:45.371 "w_mbytes_per_sec": 0 00:18:45.371 }, 00:18:45.371 "claimed": true, 00:18:45.371 "claim_type": "exclusive_write", 00:18:45.371 "zoned": false, 00:18:45.371 "supported_io_types": { 00:18:45.371 "read": true, 00:18:45.371 "write": true, 00:18:45.371 "unmap": true, 00:18:45.371 "flush": true, 00:18:45.371 "reset": true, 00:18:45.371 "nvme_admin": false, 00:18:45.371 "nvme_io": false, 00:18:45.371 "nvme_io_md": false, 00:18:45.371 "write_zeroes": true, 00:18:45.371 "zcopy": true, 00:18:45.371 "get_zone_info": false, 00:18:45.371 "zone_management": false, 00:18:45.371 "zone_append": false, 00:18:45.371 "compare": false, 00:18:45.371 
"compare_and_write": false, 00:18:45.371 "abort": true, 00:18:45.371 "seek_hole": false, 00:18:45.371 "seek_data": false, 00:18:45.371 "copy": true, 00:18:45.371 "nvme_iov_md": false 00:18:45.371 }, 00:18:45.371 "memory_domains": [ 00:18:45.371 { 00:18:45.371 "dma_device_id": "system", 00:18:45.371 "dma_device_type": 1 00:18:45.371 }, 00:18:45.371 { 00:18:45.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:45.371 "dma_device_type": 2 00:18:45.371 } 00:18:45.371 ], 00:18:45.371 "driver_specific": {} 00:18:45.371 }' 00:18:45.371 11:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:45.371 11:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:45.371 11:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:45.371 11:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:45.628 11:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:45.628 11:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:45.628 11:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:45.628 11:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:45.628 11:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:45.628 11:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:45.628 11:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:45.628 11:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:45.628 11:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:45.628 11:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:45.629 11:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:45.886 11:00:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:45.886 "name": "BaseBdev2", 00:18:45.886 "aliases": [ 00:18:45.886 "774a44c6-5db2-4754-9397-44356774710f" 00:18:45.886 ], 00:18:45.886 "product_name": "Malloc disk", 00:18:45.886 "block_size": 512, 00:18:45.886 "num_blocks": 65536, 00:18:45.886 "uuid": "774a44c6-5db2-4754-9397-44356774710f", 00:18:45.886 "assigned_rate_limits": { 00:18:45.886 "rw_ios_per_sec": 0, 00:18:45.886 "rw_mbytes_per_sec": 0, 00:18:45.886 "r_mbytes_per_sec": 0, 00:18:45.886 "w_mbytes_per_sec": 0 00:18:45.886 }, 00:18:45.886 "claimed": true, 00:18:45.886 "claim_type": "exclusive_write", 00:18:45.886 "zoned": false, 00:18:45.886 "supported_io_types": { 00:18:45.886 "read": true, 00:18:45.886 "write": true, 00:18:45.886 "unmap": true, 00:18:45.886 "flush": true, 00:18:45.886 "reset": true, 00:18:45.886 "nvme_admin": false, 00:18:45.886 "nvme_io": false, 00:18:45.886 "nvme_io_md": false, 00:18:45.886 "write_zeroes": true, 00:18:45.886 "zcopy": true, 00:18:45.886 "get_zone_info": false, 00:18:45.886 "zone_management": false, 00:18:45.886 "zone_append": false, 00:18:45.886 "compare": false, 00:18:45.886 "compare_and_write": false, 00:18:45.886 "abort": true, 00:18:45.886 "seek_hole": false, 00:18:45.886 "seek_data": false, 00:18:45.886 "copy": true, 00:18:45.886 "nvme_iov_md": false 00:18:45.886 }, 00:18:45.886 "memory_domains": [ 00:18:45.886 { 00:18:45.886 "dma_device_id": "system", 00:18:45.886 "dma_device_type": 1 00:18:45.886 }, 00:18:45.886 { 00:18:45.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:45.886 "dma_device_type": 2 00:18:45.886 } 00:18:45.886 ], 00:18:45.886 "driver_specific": {} 00:18:45.886 }' 00:18:45.886 11:00:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:46.144 11:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:46.144 11:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:46.144 11:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:46.144 11:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:46.144 11:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:46.144 11:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:46.144 11:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:46.144 11:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:46.144 11:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:46.400 11:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:46.400 11:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:46.400 11:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:46.400 11:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:46.400 11:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:46.657 11:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:46.657 "name": "BaseBdev3", 00:18:46.657 "aliases": [ 00:18:46.657 "388d79d5-790e-4672-ab55-0a43a58d14b1" 00:18:46.657 ], 00:18:46.657 "product_name": "Malloc disk", 00:18:46.657 "block_size": 512, 00:18:46.657 "num_blocks": 65536, 00:18:46.657 "uuid": "388d79d5-790e-4672-ab55-0a43a58d14b1", 
00:18:46.657 "assigned_rate_limits": { 00:18:46.657 "rw_ios_per_sec": 0, 00:18:46.657 "rw_mbytes_per_sec": 0, 00:18:46.657 "r_mbytes_per_sec": 0, 00:18:46.657 "w_mbytes_per_sec": 0 00:18:46.657 }, 00:18:46.657 "claimed": true, 00:18:46.657 "claim_type": "exclusive_write", 00:18:46.657 "zoned": false, 00:18:46.657 "supported_io_types": { 00:18:46.657 "read": true, 00:18:46.657 "write": true, 00:18:46.657 "unmap": true, 00:18:46.657 "flush": true, 00:18:46.657 "reset": true, 00:18:46.657 "nvme_admin": false, 00:18:46.657 "nvme_io": false, 00:18:46.657 "nvme_io_md": false, 00:18:46.657 "write_zeroes": true, 00:18:46.657 "zcopy": true, 00:18:46.657 "get_zone_info": false, 00:18:46.657 "zone_management": false, 00:18:46.657 "zone_append": false, 00:18:46.657 "compare": false, 00:18:46.657 "compare_and_write": false, 00:18:46.657 "abort": true, 00:18:46.657 "seek_hole": false, 00:18:46.657 "seek_data": false, 00:18:46.657 "copy": true, 00:18:46.657 "nvme_iov_md": false 00:18:46.657 }, 00:18:46.657 "memory_domains": [ 00:18:46.657 { 00:18:46.657 "dma_device_id": "system", 00:18:46.657 "dma_device_type": 1 00:18:46.657 }, 00:18:46.657 { 00:18:46.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:46.657 "dma_device_type": 2 00:18:46.657 } 00:18:46.657 ], 00:18:46.657 "driver_specific": {} 00:18:46.657 }' 00:18:46.657 11:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:46.657 11:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:46.657 11:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:46.657 11:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:46.657 11:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:46.657 11:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:46.657 11:00:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:46.657 11:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:46.914 11:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:46.914 11:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:46.914 11:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:46.914 11:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:46.914 11:00:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:47.171 [2024-07-25 11:00:54.095548] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:47.171 [2024-07-25 11:00:54.095582] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:47.171 [2024-07-25 11:00:54.095650] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:47.171 11:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:47.171 11:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:18:47.171 11:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:47.171 11:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:47.171 11:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:18:47.171 11:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:18:47.171 11:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:47.171 11:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=offline 00:18:47.172 11:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:47.172 11:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:47.172 11:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:47.172 11:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:47.172 11:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:47.172 11:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:47.172 11:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:47.172 11:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.172 11:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:47.429 11:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:47.429 "name": "Existed_Raid", 00:18:47.429 "uuid": "2afc4e68-efbf-4b5a-889f-c011b5fa5b5a", 00:18:47.429 "strip_size_kb": 64, 00:18:47.429 "state": "offline", 00:18:47.429 "raid_level": "concat", 00:18:47.429 "superblock": false, 00:18:47.429 "num_base_bdevs": 3, 00:18:47.429 "num_base_bdevs_discovered": 2, 00:18:47.429 "num_base_bdevs_operational": 2, 00:18:47.429 "base_bdevs_list": [ 00:18:47.429 { 00:18:47.429 "name": null, 00:18:47.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.429 "is_configured": false, 00:18:47.429 "data_offset": 0, 00:18:47.429 "data_size": 65536 00:18:47.429 }, 00:18:47.429 { 00:18:47.429 "name": "BaseBdev2", 00:18:47.429 "uuid": "774a44c6-5db2-4754-9397-44356774710f", 00:18:47.429 "is_configured": true, 
00:18:47.429 "data_offset": 0, 00:18:47.429 "data_size": 65536 00:18:47.429 }, 00:18:47.429 { 00:18:47.429 "name": "BaseBdev3", 00:18:47.429 "uuid": "388d79d5-790e-4672-ab55-0a43a58d14b1", 00:18:47.429 "is_configured": true, 00:18:47.429 "data_offset": 0, 00:18:47.429 "data_size": 65536 00:18:47.429 } 00:18:47.429 ] 00:18:47.429 }' 00:18:47.429 11:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:47.429 11:00:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.994 11:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:47.994 11:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:47.994 11:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.994 11:00:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:48.251 11:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:48.251 11:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:48.251 11:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:48.520 [2024-07-25 11:00:55.390857] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:48.520 11:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:48.520 11:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:48.520 11:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:18:48.520 11:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:48.777 11:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:48.777 11:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:48.777 11:00:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:49.034 [2024-07-25 11:00:55.976341] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:49.034 [2024-07-25 11:00:55.976405] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:49.034 11:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:49.034 11:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:49.034 11:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.034 11:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:49.291 11:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:49.291 11:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:49.291 11:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:18:49.291 11:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:18:49.291 11:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:49.291 11:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:49.548 BaseBdev2 00:18:49.548 11:00:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:18:49.548 11:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:49.548 11:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:49.548 11:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:49.548 11:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:49.548 11:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:49.548 11:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:49.806 11:00:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:50.063 [ 00:18:50.063 { 00:18:50.063 "name": "BaseBdev2", 00:18:50.063 "aliases": [ 00:18:50.063 "9fee7b2b-ec66-4672-b9c4-1757ee49418f" 00:18:50.063 ], 00:18:50.063 "product_name": "Malloc disk", 00:18:50.063 "block_size": 512, 00:18:50.063 "num_blocks": 65536, 00:18:50.063 "uuid": "9fee7b2b-ec66-4672-b9c4-1757ee49418f", 00:18:50.063 "assigned_rate_limits": { 00:18:50.063 "rw_ios_per_sec": 0, 00:18:50.063 "rw_mbytes_per_sec": 0, 00:18:50.063 "r_mbytes_per_sec": 0, 00:18:50.063 "w_mbytes_per_sec": 0 00:18:50.063 }, 00:18:50.063 "claimed": false, 00:18:50.063 "zoned": false, 00:18:50.063 "supported_io_types": { 00:18:50.063 "read": true, 00:18:50.063 "write": true, 00:18:50.063 "unmap": true, 00:18:50.063 "flush": true, 00:18:50.063 
"reset": true, 00:18:50.063 "nvme_admin": false, 00:18:50.063 "nvme_io": false, 00:18:50.063 "nvme_io_md": false, 00:18:50.063 "write_zeroes": true, 00:18:50.063 "zcopy": true, 00:18:50.063 "get_zone_info": false, 00:18:50.063 "zone_management": false, 00:18:50.063 "zone_append": false, 00:18:50.063 "compare": false, 00:18:50.063 "compare_and_write": false, 00:18:50.063 "abort": true, 00:18:50.063 "seek_hole": false, 00:18:50.063 "seek_data": false, 00:18:50.063 "copy": true, 00:18:50.063 "nvme_iov_md": false 00:18:50.063 }, 00:18:50.063 "memory_domains": [ 00:18:50.063 { 00:18:50.063 "dma_device_id": "system", 00:18:50.063 "dma_device_type": 1 00:18:50.063 }, 00:18:50.063 { 00:18:50.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.063 "dma_device_type": 2 00:18:50.063 } 00:18:50.063 ], 00:18:50.063 "driver_specific": {} 00:18:50.063 } 00:18:50.063 ] 00:18:50.063 11:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:50.063 11:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:50.063 11:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:50.063 11:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:50.321 BaseBdev3 00:18:50.321 11:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:18:50.321 11:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:18:50.321 11:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:50.321 11:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:50.321 11:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:50.321 11:00:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:50.321 11:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:50.579 11:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:50.837 [ 00:18:50.837 { 00:18:50.837 "name": "BaseBdev3", 00:18:50.837 "aliases": [ 00:18:50.837 "34a7372c-6131-47ed-a00e-8b7c1563fa31" 00:18:50.837 ], 00:18:50.837 "product_name": "Malloc disk", 00:18:50.837 "block_size": 512, 00:18:50.837 "num_blocks": 65536, 00:18:50.837 "uuid": "34a7372c-6131-47ed-a00e-8b7c1563fa31", 00:18:50.837 "assigned_rate_limits": { 00:18:50.837 "rw_ios_per_sec": 0, 00:18:50.837 "rw_mbytes_per_sec": 0, 00:18:50.837 "r_mbytes_per_sec": 0, 00:18:50.837 "w_mbytes_per_sec": 0 00:18:50.837 }, 00:18:50.837 "claimed": false, 00:18:50.837 "zoned": false, 00:18:50.837 "supported_io_types": { 00:18:50.837 "read": true, 00:18:50.837 "write": true, 00:18:50.837 "unmap": true, 00:18:50.837 "flush": true, 00:18:50.837 "reset": true, 00:18:50.837 "nvme_admin": false, 00:18:50.837 "nvme_io": false, 00:18:50.837 "nvme_io_md": false, 00:18:50.837 "write_zeroes": true, 00:18:50.837 "zcopy": true, 00:18:50.837 "get_zone_info": false, 00:18:50.837 "zone_management": false, 00:18:50.837 "zone_append": false, 00:18:50.837 "compare": false, 00:18:50.837 "compare_and_write": false, 00:18:50.837 "abort": true, 00:18:50.837 "seek_hole": false, 00:18:50.837 "seek_data": false, 00:18:50.837 "copy": true, 00:18:50.837 "nvme_iov_md": false 00:18:50.837 }, 00:18:50.837 "memory_domains": [ 00:18:50.837 { 00:18:50.837 "dma_device_id": "system", 00:18:50.837 "dma_device_type": 1 00:18:50.837 }, 00:18:50.837 { 00:18:50.837 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:50.837 "dma_device_type": 2 00:18:50.837 } 00:18:50.837 ], 00:18:50.837 "driver_specific": {} 00:18:50.837 } 00:18:50.837 ] 00:18:50.837 11:00:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:50.837 11:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:50.837 11:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:50.837 11:00:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:51.094 [2024-07-25 11:00:58.016288] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:51.094 [2024-07-25 11:00:58.016339] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:51.094 [2024-07-25 11:00:58.016374] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:51.094 [2024-07-25 11:00:58.018684] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:51.094 11:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:51.094 11:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:51.094 11:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:51.094 11:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:51.094 11:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:51.094 11:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:51.094 11:00:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:51.094 11:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:51.094 11:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:51.094 11:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:51.094 11:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.094 11:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:51.351 11:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:51.351 "name": "Existed_Raid", 00:18:51.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.351 "strip_size_kb": 64, 00:18:51.351 "state": "configuring", 00:18:51.351 "raid_level": "concat", 00:18:51.351 "superblock": false, 00:18:51.351 "num_base_bdevs": 3, 00:18:51.351 "num_base_bdevs_discovered": 2, 00:18:51.351 "num_base_bdevs_operational": 3, 00:18:51.351 "base_bdevs_list": [ 00:18:51.351 { 00:18:51.351 "name": "BaseBdev1", 00:18:51.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.351 "is_configured": false, 00:18:51.351 "data_offset": 0, 00:18:51.351 "data_size": 0 00:18:51.351 }, 00:18:51.351 { 00:18:51.351 "name": "BaseBdev2", 00:18:51.351 "uuid": "9fee7b2b-ec66-4672-b9c4-1757ee49418f", 00:18:51.351 "is_configured": true, 00:18:51.351 "data_offset": 0, 00:18:51.351 "data_size": 65536 00:18:51.351 }, 00:18:51.351 { 00:18:51.351 "name": "BaseBdev3", 00:18:51.351 "uuid": "34a7372c-6131-47ed-a00e-8b7c1563fa31", 00:18:51.351 "is_configured": true, 00:18:51.351 "data_offset": 0, 00:18:51.351 "data_size": 65536 00:18:51.351 } 00:18:51.351 ] 00:18:51.351 }' 00:18:51.351 11:00:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:51.351 11:00:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.915 11:00:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:18:51.915 [2024-07-25 11:00:58.986869] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:51.915 11:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:51.915 11:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:51.915 11:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:51.915 11:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:51.915 11:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:51.915 11:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:51.915 11:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:51.915 11:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:51.915 11:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:51.915 11:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:51.915 11:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.915 11:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:18:52.174 11:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:52.174 "name": "Existed_Raid", 00:18:52.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.174 "strip_size_kb": 64, 00:18:52.174 "state": "configuring", 00:18:52.174 "raid_level": "concat", 00:18:52.174 "superblock": false, 00:18:52.174 "num_base_bdevs": 3, 00:18:52.174 "num_base_bdevs_discovered": 1, 00:18:52.174 "num_base_bdevs_operational": 3, 00:18:52.174 "base_bdevs_list": [ 00:18:52.174 { 00:18:52.174 "name": "BaseBdev1", 00:18:52.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.174 "is_configured": false, 00:18:52.174 "data_offset": 0, 00:18:52.174 "data_size": 0 00:18:52.174 }, 00:18:52.174 { 00:18:52.174 "name": null, 00:18:52.174 "uuid": "9fee7b2b-ec66-4672-b9c4-1757ee49418f", 00:18:52.174 "is_configured": false, 00:18:52.174 "data_offset": 0, 00:18:52.174 "data_size": 65536 00:18:52.174 }, 00:18:52.174 { 00:18:52.174 "name": "BaseBdev3", 00:18:52.174 "uuid": "34a7372c-6131-47ed-a00e-8b7c1563fa31", 00:18:52.174 "is_configured": true, 00:18:52.174 "data_offset": 0, 00:18:52.174 "data_size": 65536 00:18:52.174 } 00:18:52.174 ] 00:18:52.174 }' 00:18:52.174 11:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:52.174 11:00:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:52.740 11:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.740 11:00:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:52.996 11:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:18:52.996 11:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:53.253 [2024-07-25 11:01:00.281809] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:53.253 BaseBdev1 00:18:53.253 11:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:18:53.253 11:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:53.253 11:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:53.253 11:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:18:53.253 11:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:53.253 11:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:53.253 11:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:53.512 11:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:53.768 [ 00:18:53.768 { 00:18:53.768 "name": "BaseBdev1", 00:18:53.768 "aliases": [ 00:18:53.768 "548ed569-e3a7-4609-b321-a194c3f369e8" 00:18:53.768 ], 00:18:53.768 "product_name": "Malloc disk", 00:18:53.768 "block_size": 512, 00:18:53.768 "num_blocks": 65536, 00:18:53.768 "uuid": "548ed569-e3a7-4609-b321-a194c3f369e8", 00:18:53.768 "assigned_rate_limits": { 00:18:53.768 "rw_ios_per_sec": 0, 00:18:53.768 "rw_mbytes_per_sec": 0, 00:18:53.768 "r_mbytes_per_sec": 0, 00:18:53.768 "w_mbytes_per_sec": 0 00:18:53.768 }, 00:18:53.768 "claimed": true, 00:18:53.768 "claim_type": "exclusive_write", 00:18:53.768 "zoned": false, 00:18:53.768 "supported_io_types": { 00:18:53.768 "read": 
true, 00:18:53.768 "write": true, 00:18:53.768 "unmap": true, 00:18:53.768 "flush": true, 00:18:53.768 "reset": true, 00:18:53.768 "nvme_admin": false, 00:18:53.768 "nvme_io": false, 00:18:53.768 "nvme_io_md": false, 00:18:53.768 "write_zeroes": true, 00:18:53.768 "zcopy": true, 00:18:53.768 "get_zone_info": false, 00:18:53.768 "zone_management": false, 00:18:53.768 "zone_append": false, 00:18:53.768 "compare": false, 00:18:53.768 "compare_and_write": false, 00:18:53.768 "abort": true, 00:18:53.768 "seek_hole": false, 00:18:53.768 "seek_data": false, 00:18:53.768 "copy": true, 00:18:53.768 "nvme_iov_md": false 00:18:53.768 }, 00:18:53.768 "memory_domains": [ 00:18:53.768 { 00:18:53.768 "dma_device_id": "system", 00:18:53.768 "dma_device_type": 1 00:18:53.768 }, 00:18:53.768 { 00:18:53.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:53.768 "dma_device_type": 2 00:18:53.768 } 00:18:53.768 ], 00:18:53.768 "driver_specific": {} 00:18:53.768 } 00:18:53.768 ] 00:18:53.768 11:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:18:53.768 11:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:53.768 11:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:53.768 11:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:53.768 11:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:53.768 11:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:53.768 11:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:53.768 11:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:53.768 11:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:18:53.768 11:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:53.768 11:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:53.768 11:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.768 11:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:54.057 11:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:54.057 "name": "Existed_Raid", 00:18:54.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.057 "strip_size_kb": 64, 00:18:54.057 "state": "configuring", 00:18:54.057 "raid_level": "concat", 00:18:54.057 "superblock": false, 00:18:54.057 "num_base_bdevs": 3, 00:18:54.057 "num_base_bdevs_discovered": 2, 00:18:54.057 "num_base_bdevs_operational": 3, 00:18:54.057 "base_bdevs_list": [ 00:18:54.057 { 00:18:54.057 "name": "BaseBdev1", 00:18:54.057 "uuid": "548ed569-e3a7-4609-b321-a194c3f369e8", 00:18:54.057 "is_configured": true, 00:18:54.057 "data_offset": 0, 00:18:54.057 "data_size": 65536 00:18:54.057 }, 00:18:54.057 { 00:18:54.057 "name": null, 00:18:54.057 "uuid": "9fee7b2b-ec66-4672-b9c4-1757ee49418f", 00:18:54.057 "is_configured": false, 00:18:54.057 "data_offset": 0, 00:18:54.057 "data_size": 65536 00:18:54.057 }, 00:18:54.057 { 00:18:54.057 "name": "BaseBdev3", 00:18:54.057 "uuid": "34a7372c-6131-47ed-a00e-8b7c1563fa31", 00:18:54.057 "is_configured": true, 00:18:54.057 "data_offset": 0, 00:18:54.057 "data_size": 65536 00:18:54.057 } 00:18:54.057 ] 00:18:54.057 }' 00:18:54.057 11:01:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:54.057 11:01:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.658 11:01:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.658 11:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:54.658 11:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:18:54.658 11:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:18:54.916 [2024-07-25 11:01:01.978725] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:54.916 11:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:54.916 11:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:54.916 11:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:54.916 11:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:54.916 11:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:54.916 11:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:54.916 11:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:54.916 11:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:54.916 11:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:54.916 11:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:54.916 11:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.916 11:01:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:55.174 11:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:55.174 "name": "Existed_Raid", 00:18:55.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.174 "strip_size_kb": 64, 00:18:55.174 "state": "configuring", 00:18:55.174 "raid_level": "concat", 00:18:55.174 "superblock": false, 00:18:55.174 "num_base_bdevs": 3, 00:18:55.174 "num_base_bdevs_discovered": 1, 00:18:55.174 "num_base_bdevs_operational": 3, 00:18:55.174 "base_bdevs_list": [ 00:18:55.174 { 00:18:55.174 "name": "BaseBdev1", 00:18:55.174 "uuid": "548ed569-e3a7-4609-b321-a194c3f369e8", 00:18:55.174 "is_configured": true, 00:18:55.174 "data_offset": 0, 00:18:55.174 "data_size": 65536 00:18:55.174 }, 00:18:55.174 { 00:18:55.174 "name": null, 00:18:55.174 "uuid": "9fee7b2b-ec66-4672-b9c4-1757ee49418f", 00:18:55.174 "is_configured": false, 00:18:55.174 "data_offset": 0, 00:18:55.174 "data_size": 65536 00:18:55.174 }, 00:18:55.174 { 00:18:55.174 "name": null, 00:18:55.174 "uuid": "34a7372c-6131-47ed-a00e-8b7c1563fa31", 00:18:55.174 "is_configured": false, 00:18:55.174 "data_offset": 0, 00:18:55.174 "data_size": 65536 00:18:55.174 } 00:18:55.174 ] 00:18:55.174 }' 00:18:55.174 11:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:55.174 11:01:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.738 11:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.738 11:01:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:55.995 11:01:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:18:55.995 11:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:56.254 [2024-07-25 11:01:03.246188] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:56.254 11:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:56.254 11:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:56.254 11:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:56.254 11:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:56.254 11:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:56.254 11:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:56.254 11:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:56.254 11:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:56.254 11:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:56.254 11:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:56.254 11:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.254 11:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:56.511 11:01:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:56.511 "name": "Existed_Raid", 00:18:56.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.511 "strip_size_kb": 64, 00:18:56.511 "state": "configuring", 00:18:56.511 "raid_level": "concat", 00:18:56.511 "superblock": false, 00:18:56.511 "num_base_bdevs": 3, 00:18:56.511 "num_base_bdevs_discovered": 2, 00:18:56.511 "num_base_bdevs_operational": 3, 00:18:56.511 "base_bdevs_list": [ 00:18:56.511 { 00:18:56.511 "name": "BaseBdev1", 00:18:56.511 "uuid": "548ed569-e3a7-4609-b321-a194c3f369e8", 00:18:56.511 "is_configured": true, 00:18:56.511 "data_offset": 0, 00:18:56.511 "data_size": 65536 00:18:56.511 }, 00:18:56.511 { 00:18:56.511 "name": null, 00:18:56.511 "uuid": "9fee7b2b-ec66-4672-b9c4-1757ee49418f", 00:18:56.511 "is_configured": false, 00:18:56.511 "data_offset": 0, 00:18:56.511 "data_size": 65536 00:18:56.511 }, 00:18:56.511 { 00:18:56.511 "name": "BaseBdev3", 00:18:56.511 "uuid": "34a7372c-6131-47ed-a00e-8b7c1563fa31", 00:18:56.511 "is_configured": true, 00:18:56.511 "data_offset": 0, 00:18:56.511 "data_size": 65536 00:18:56.511 } 00:18:56.511 ] 00:18:56.511 }' 00:18:56.511 11:01:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:56.511 11:01:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.076 11:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:57.076 11:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:57.333 11:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:18:57.333 11:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:18:57.591 [2024-07-25 11:01:04.509627] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:57.591 11:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:57.591 11:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:57.591 11:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:57.591 11:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:57.591 11:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:57.591 11:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:57.591 11:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:57.591 11:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:57.591 11:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:57.591 11:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:57.591 11:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:57.591 11:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:57.848 11:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:57.849 "name": "Existed_Raid", 00:18:57.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.849 "strip_size_kb": 64, 00:18:57.849 "state": "configuring", 00:18:57.849 "raid_level": "concat", 00:18:57.849 "superblock": false, 00:18:57.849 "num_base_bdevs": 3, 00:18:57.849 
"num_base_bdevs_discovered": 1, 00:18:57.849 "num_base_bdevs_operational": 3, 00:18:57.849 "base_bdevs_list": [ 00:18:57.849 { 00:18:57.849 "name": null, 00:18:57.849 "uuid": "548ed569-e3a7-4609-b321-a194c3f369e8", 00:18:57.849 "is_configured": false, 00:18:57.849 "data_offset": 0, 00:18:57.849 "data_size": 65536 00:18:57.849 }, 00:18:57.849 { 00:18:57.849 "name": null, 00:18:57.849 "uuid": "9fee7b2b-ec66-4672-b9c4-1757ee49418f", 00:18:57.849 "is_configured": false, 00:18:57.849 "data_offset": 0, 00:18:57.849 "data_size": 65536 00:18:57.849 }, 00:18:57.849 { 00:18:57.849 "name": "BaseBdev3", 00:18:57.849 "uuid": "34a7372c-6131-47ed-a00e-8b7c1563fa31", 00:18:57.849 "is_configured": true, 00:18:57.849 "data_offset": 0, 00:18:57.849 "data_size": 65536 00:18:57.849 } 00:18:57.849 ] 00:18:57.849 }' 00:18:57.849 11:01:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:57.849 11:01:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.414 11:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:58.414 11:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:58.672 11:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:18:58.672 11:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:58.929 [2024-07-25 11:01:05.895851] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:58.929 11:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:58.929 11:01:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:58.929 11:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:58.929 11:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:58.929 11:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:58.929 11:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:58.929 11:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:58.929 11:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:58.930 11:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:58.930 11:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:58.930 11:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:58.930 11:01:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:59.187 11:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:59.187 "name": "Existed_Raid", 00:18:59.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.187 "strip_size_kb": 64, 00:18:59.187 "state": "configuring", 00:18:59.187 "raid_level": "concat", 00:18:59.187 "superblock": false, 00:18:59.187 "num_base_bdevs": 3, 00:18:59.187 "num_base_bdevs_discovered": 2, 00:18:59.187 "num_base_bdevs_operational": 3, 00:18:59.187 "base_bdevs_list": [ 00:18:59.187 { 00:18:59.187 "name": null, 00:18:59.187 "uuid": "548ed569-e3a7-4609-b321-a194c3f369e8", 00:18:59.187 "is_configured": false, 00:18:59.187 "data_offset": 0, 
00:18:59.187 "data_size": 65536 00:18:59.187 }, 00:18:59.187 { 00:18:59.187 "name": "BaseBdev2", 00:18:59.187 "uuid": "9fee7b2b-ec66-4672-b9c4-1757ee49418f", 00:18:59.187 "is_configured": true, 00:18:59.187 "data_offset": 0, 00:18:59.187 "data_size": 65536 00:18:59.187 }, 00:18:59.187 { 00:18:59.187 "name": "BaseBdev3", 00:18:59.187 "uuid": "34a7372c-6131-47ed-a00e-8b7c1563fa31", 00:18:59.187 "is_configured": true, 00:18:59.187 "data_offset": 0, 00:18:59.187 "data_size": 65536 00:18:59.187 } 00:18:59.187 ] 00:18:59.187 }' 00:18:59.187 11:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:59.187 11:01:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.753 11:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:59.753 11:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:00.012 11:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:19:00.012 11:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.012 11:01:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:00.270 11:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 548ed569-e3a7-4609-b321-a194c3f369e8 00:19:00.528 [2024-07-25 11:01:07.407775] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:00.528 [2024-07-25 11:01:07.407821] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000008200 00:19:00.528 [2024-07-25 11:01:07.407836] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:19:00.528 [2024-07-25 11:01:07.408195] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:19:00.528 [2024-07-25 11:01:07.408423] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:00.528 [2024-07-25 11:01:07.408438] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:19:00.528 [2024-07-25 11:01:07.408743] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:00.528 NewBaseBdev 00:19:00.528 11:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:19:00.528 11:01:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:19:00.528 11:01:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:00.528 11:01:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:19:00.528 11:01:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:00.528 11:01:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:00.528 11:01:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:00.787 11:01:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:00.787 [ 00:19:00.787 { 00:19:00.787 "name": "NewBaseBdev", 00:19:00.787 "aliases": [ 00:19:00.787 "548ed569-e3a7-4609-b321-a194c3f369e8" 00:19:00.787 ], 00:19:00.787 "product_name": "Malloc disk", 
00:19:00.787 "block_size": 512, 00:19:00.787 "num_blocks": 65536, 00:19:00.787 "uuid": "548ed569-e3a7-4609-b321-a194c3f369e8", 00:19:00.787 "assigned_rate_limits": { 00:19:00.787 "rw_ios_per_sec": 0, 00:19:00.787 "rw_mbytes_per_sec": 0, 00:19:00.787 "r_mbytes_per_sec": 0, 00:19:00.787 "w_mbytes_per_sec": 0 00:19:00.787 }, 00:19:00.787 "claimed": true, 00:19:00.787 "claim_type": "exclusive_write", 00:19:00.787 "zoned": false, 00:19:00.787 "supported_io_types": { 00:19:00.787 "read": true, 00:19:00.787 "write": true, 00:19:00.787 "unmap": true, 00:19:00.787 "flush": true, 00:19:00.787 "reset": true, 00:19:00.787 "nvme_admin": false, 00:19:00.787 "nvme_io": false, 00:19:00.787 "nvme_io_md": false, 00:19:00.787 "write_zeroes": true, 00:19:00.787 "zcopy": true, 00:19:00.787 "get_zone_info": false, 00:19:00.787 "zone_management": false, 00:19:00.787 "zone_append": false, 00:19:00.787 "compare": false, 00:19:00.787 "compare_and_write": false, 00:19:00.787 "abort": true, 00:19:00.787 "seek_hole": false, 00:19:00.787 "seek_data": false, 00:19:00.787 "copy": true, 00:19:00.787 "nvme_iov_md": false 00:19:00.787 }, 00:19:00.787 "memory_domains": [ 00:19:00.787 { 00:19:00.787 "dma_device_id": "system", 00:19:00.787 "dma_device_type": 1 00:19:00.787 }, 00:19:00.787 { 00:19:00.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:00.787 "dma_device_type": 2 00:19:00.787 } 00:19:00.787 ], 00:19:00.787 "driver_specific": {} 00:19:00.787 } 00:19:00.787 ] 00:19:00.787 11:01:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:19:00.787 11:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:19:00.787 11:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:00.787 11:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:00.787 11:01:07 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:00.787 11:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:00.787 11:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:00.787 11:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:00.787 11:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:00.787 11:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:00.787 11:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:00.787 11:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.787 11:01:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.046 11:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:01.046 "name": "Existed_Raid", 00:19:01.046 "uuid": "080930aa-cfe2-4930-a033-bea9dd3ae973", 00:19:01.046 "strip_size_kb": 64, 00:19:01.046 "state": "online", 00:19:01.046 "raid_level": "concat", 00:19:01.046 "superblock": false, 00:19:01.046 "num_base_bdevs": 3, 00:19:01.046 "num_base_bdevs_discovered": 3, 00:19:01.046 "num_base_bdevs_operational": 3, 00:19:01.046 "base_bdevs_list": [ 00:19:01.046 { 00:19:01.046 "name": "NewBaseBdev", 00:19:01.046 "uuid": "548ed569-e3a7-4609-b321-a194c3f369e8", 00:19:01.046 "is_configured": true, 00:19:01.046 "data_offset": 0, 00:19:01.046 "data_size": 65536 00:19:01.046 }, 00:19:01.046 { 00:19:01.046 "name": "BaseBdev2", 00:19:01.046 "uuid": "9fee7b2b-ec66-4672-b9c4-1757ee49418f", 00:19:01.046 "is_configured": true, 00:19:01.046 "data_offset": 0, 00:19:01.046 "data_size": 65536 00:19:01.046 }, 
00:19:01.046 { 00:19:01.046 "name": "BaseBdev3", 00:19:01.046 "uuid": "34a7372c-6131-47ed-a00e-8b7c1563fa31", 00:19:01.046 "is_configured": true, 00:19:01.046 "data_offset": 0, 00:19:01.046 "data_size": 65536 00:19:01.046 } 00:19:01.046 ] 00:19:01.046 }' 00:19:01.046 11:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:01.046 11:01:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.612 11:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:19:01.612 11:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:01.612 11:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:01.612 11:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:01.612 11:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:01.612 11:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:01.612 11:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:01.612 11:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:01.870 [2024-07-25 11:01:08.900445] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:01.870 11:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:01.870 "name": "Existed_Raid", 00:19:01.870 "aliases": [ 00:19:01.870 "080930aa-cfe2-4930-a033-bea9dd3ae973" 00:19:01.870 ], 00:19:01.870 "product_name": "Raid Volume", 00:19:01.870 "block_size": 512, 00:19:01.870 "num_blocks": 196608, 00:19:01.870 "uuid": "080930aa-cfe2-4930-a033-bea9dd3ae973", 00:19:01.870 "assigned_rate_limits": 
{ 00:19:01.870 "rw_ios_per_sec": 0, 00:19:01.870 "rw_mbytes_per_sec": 0, 00:19:01.870 "r_mbytes_per_sec": 0, 00:19:01.870 "w_mbytes_per_sec": 0 00:19:01.870 }, 00:19:01.870 "claimed": false, 00:19:01.870 "zoned": false, 00:19:01.870 "supported_io_types": { 00:19:01.870 "read": true, 00:19:01.870 "write": true, 00:19:01.870 "unmap": true, 00:19:01.870 "flush": true, 00:19:01.870 "reset": true, 00:19:01.870 "nvme_admin": false, 00:19:01.870 "nvme_io": false, 00:19:01.870 "nvme_io_md": false, 00:19:01.870 "write_zeroes": true, 00:19:01.870 "zcopy": false, 00:19:01.870 "get_zone_info": false, 00:19:01.870 "zone_management": false, 00:19:01.870 "zone_append": false, 00:19:01.870 "compare": false, 00:19:01.870 "compare_and_write": false, 00:19:01.870 "abort": false, 00:19:01.870 "seek_hole": false, 00:19:01.870 "seek_data": false, 00:19:01.870 "copy": false, 00:19:01.870 "nvme_iov_md": false 00:19:01.870 }, 00:19:01.870 "memory_domains": [ 00:19:01.870 { 00:19:01.870 "dma_device_id": "system", 00:19:01.870 "dma_device_type": 1 00:19:01.870 }, 00:19:01.870 { 00:19:01.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:01.870 "dma_device_type": 2 00:19:01.870 }, 00:19:01.870 { 00:19:01.870 "dma_device_id": "system", 00:19:01.870 "dma_device_type": 1 00:19:01.870 }, 00:19:01.870 { 00:19:01.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:01.870 "dma_device_type": 2 00:19:01.870 }, 00:19:01.870 { 00:19:01.870 "dma_device_id": "system", 00:19:01.870 "dma_device_type": 1 00:19:01.870 }, 00:19:01.870 { 00:19:01.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:01.870 "dma_device_type": 2 00:19:01.870 } 00:19:01.870 ], 00:19:01.870 "driver_specific": { 00:19:01.870 "raid": { 00:19:01.870 "uuid": "080930aa-cfe2-4930-a033-bea9dd3ae973", 00:19:01.870 "strip_size_kb": 64, 00:19:01.870 "state": "online", 00:19:01.870 "raid_level": "concat", 00:19:01.870 "superblock": false, 00:19:01.870 "num_base_bdevs": 3, 00:19:01.870 "num_base_bdevs_discovered": 3, 00:19:01.870 
"num_base_bdevs_operational": 3, 00:19:01.870 "base_bdevs_list": [ 00:19:01.870 { 00:19:01.870 "name": "NewBaseBdev", 00:19:01.870 "uuid": "548ed569-e3a7-4609-b321-a194c3f369e8", 00:19:01.870 "is_configured": true, 00:19:01.870 "data_offset": 0, 00:19:01.870 "data_size": 65536 00:19:01.870 }, 00:19:01.871 { 00:19:01.871 "name": "BaseBdev2", 00:19:01.871 "uuid": "9fee7b2b-ec66-4672-b9c4-1757ee49418f", 00:19:01.871 "is_configured": true, 00:19:01.871 "data_offset": 0, 00:19:01.871 "data_size": 65536 00:19:01.871 }, 00:19:01.871 { 00:19:01.871 "name": "BaseBdev3", 00:19:01.871 "uuid": "34a7372c-6131-47ed-a00e-8b7c1563fa31", 00:19:01.871 "is_configured": true, 00:19:01.871 "data_offset": 0, 00:19:01.871 "data_size": 65536 00:19:01.871 } 00:19:01.871 ] 00:19:01.871 } 00:19:01.871 } 00:19:01.871 }' 00:19:01.871 11:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:01.871 11:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:19:01.871 BaseBdev2 00:19:01.871 BaseBdev3' 00:19:01.871 11:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:01.871 11:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:19:01.871 11:01:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:02.130 11:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:02.130 "name": "NewBaseBdev", 00:19:02.130 "aliases": [ 00:19:02.130 "548ed569-e3a7-4609-b321-a194c3f369e8" 00:19:02.130 ], 00:19:02.130 "product_name": "Malloc disk", 00:19:02.130 "block_size": 512, 00:19:02.130 "num_blocks": 65536, 00:19:02.130 "uuid": "548ed569-e3a7-4609-b321-a194c3f369e8", 00:19:02.130 
"assigned_rate_limits": { 00:19:02.130 "rw_ios_per_sec": 0, 00:19:02.130 "rw_mbytes_per_sec": 0, 00:19:02.130 "r_mbytes_per_sec": 0, 00:19:02.130 "w_mbytes_per_sec": 0 00:19:02.130 }, 00:19:02.130 "claimed": true, 00:19:02.130 "claim_type": "exclusive_write", 00:19:02.130 "zoned": false, 00:19:02.130 "supported_io_types": { 00:19:02.130 "read": true, 00:19:02.130 "write": true, 00:19:02.130 "unmap": true, 00:19:02.130 "flush": true, 00:19:02.130 "reset": true, 00:19:02.130 "nvme_admin": false, 00:19:02.130 "nvme_io": false, 00:19:02.130 "nvme_io_md": false, 00:19:02.130 "write_zeroes": true, 00:19:02.130 "zcopy": true, 00:19:02.130 "get_zone_info": false, 00:19:02.130 "zone_management": false, 00:19:02.130 "zone_append": false, 00:19:02.130 "compare": false, 00:19:02.130 "compare_and_write": false, 00:19:02.130 "abort": true, 00:19:02.130 "seek_hole": false, 00:19:02.130 "seek_data": false, 00:19:02.130 "copy": true, 00:19:02.130 "nvme_iov_md": false 00:19:02.130 }, 00:19:02.130 "memory_domains": [ 00:19:02.130 { 00:19:02.130 "dma_device_id": "system", 00:19:02.130 "dma_device_type": 1 00:19:02.130 }, 00:19:02.130 { 00:19:02.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:02.130 "dma_device_type": 2 00:19:02.130 } 00:19:02.130 ], 00:19:02.130 "driver_specific": {} 00:19:02.130 }' 00:19:02.130 11:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:02.130 11:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:02.389 11:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:02.389 11:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:02.389 11:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:02.389 11:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:02.389 11:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 
-- # jq .md_interleave 00:19:02.389 11:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:02.389 11:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:02.389 11:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:02.389 11:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:02.647 11:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:02.647 11:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:02.647 11:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:02.647 11:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:02.904 11:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:02.904 "name": "BaseBdev2", 00:19:02.904 "aliases": [ 00:19:02.904 "9fee7b2b-ec66-4672-b9c4-1757ee49418f" 00:19:02.904 ], 00:19:02.904 "product_name": "Malloc disk", 00:19:02.904 "block_size": 512, 00:19:02.904 "num_blocks": 65536, 00:19:02.904 "uuid": "9fee7b2b-ec66-4672-b9c4-1757ee49418f", 00:19:02.904 "assigned_rate_limits": { 00:19:02.904 "rw_ios_per_sec": 0, 00:19:02.904 "rw_mbytes_per_sec": 0, 00:19:02.904 "r_mbytes_per_sec": 0, 00:19:02.904 "w_mbytes_per_sec": 0 00:19:02.904 }, 00:19:02.904 "claimed": true, 00:19:02.904 "claim_type": "exclusive_write", 00:19:02.904 "zoned": false, 00:19:02.904 "supported_io_types": { 00:19:02.904 "read": true, 00:19:02.904 "write": true, 00:19:02.904 "unmap": true, 00:19:02.904 "flush": true, 00:19:02.904 "reset": true, 00:19:02.904 "nvme_admin": false, 00:19:02.904 "nvme_io": false, 00:19:02.904 "nvme_io_md": false, 00:19:02.904 "write_zeroes": true, 00:19:02.904 "zcopy": 
true, 00:19:02.904 "get_zone_info": false, 00:19:02.904 "zone_management": false, 00:19:02.904 "zone_append": false, 00:19:02.904 "compare": false, 00:19:02.904 "compare_and_write": false, 00:19:02.904 "abort": true, 00:19:02.904 "seek_hole": false, 00:19:02.904 "seek_data": false, 00:19:02.904 "copy": true, 00:19:02.904 "nvme_iov_md": false 00:19:02.904 }, 00:19:02.904 "memory_domains": [ 00:19:02.904 { 00:19:02.904 "dma_device_id": "system", 00:19:02.904 "dma_device_type": 1 00:19:02.904 }, 00:19:02.904 { 00:19:02.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:02.904 "dma_device_type": 2 00:19:02.904 } 00:19:02.904 ], 00:19:02.904 "driver_specific": {} 00:19:02.904 }' 00:19:02.904 11:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:02.904 11:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:02.904 11:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:02.904 11:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:02.904 11:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:02.904 11:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:02.904 11:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:02.904 11:01:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:03.162 11:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:03.162 11:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:03.162 11:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:03.162 11:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:03.162 11:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 
-- # for name in $base_bdev_names 00:19:03.162 11:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:03.162 11:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:03.421 11:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:03.421 "name": "BaseBdev3", 00:19:03.421 "aliases": [ 00:19:03.421 "34a7372c-6131-47ed-a00e-8b7c1563fa31" 00:19:03.421 ], 00:19:03.421 "product_name": "Malloc disk", 00:19:03.421 "block_size": 512, 00:19:03.421 "num_blocks": 65536, 00:19:03.421 "uuid": "34a7372c-6131-47ed-a00e-8b7c1563fa31", 00:19:03.421 "assigned_rate_limits": { 00:19:03.421 "rw_ios_per_sec": 0, 00:19:03.421 "rw_mbytes_per_sec": 0, 00:19:03.421 "r_mbytes_per_sec": 0, 00:19:03.421 "w_mbytes_per_sec": 0 00:19:03.421 }, 00:19:03.421 "claimed": true, 00:19:03.421 "claim_type": "exclusive_write", 00:19:03.421 "zoned": false, 00:19:03.421 "supported_io_types": { 00:19:03.421 "read": true, 00:19:03.421 "write": true, 00:19:03.421 "unmap": true, 00:19:03.421 "flush": true, 00:19:03.421 "reset": true, 00:19:03.421 "nvme_admin": false, 00:19:03.421 "nvme_io": false, 00:19:03.421 "nvme_io_md": false, 00:19:03.421 "write_zeroes": true, 00:19:03.421 "zcopy": true, 00:19:03.421 "get_zone_info": false, 00:19:03.421 "zone_management": false, 00:19:03.421 "zone_append": false, 00:19:03.421 "compare": false, 00:19:03.421 "compare_and_write": false, 00:19:03.421 "abort": true, 00:19:03.421 "seek_hole": false, 00:19:03.421 "seek_data": false, 00:19:03.421 "copy": true, 00:19:03.421 "nvme_iov_md": false 00:19:03.421 }, 00:19:03.421 "memory_domains": [ 00:19:03.421 { 00:19:03.421 "dma_device_id": "system", 00:19:03.421 "dma_device_type": 1 00:19:03.421 }, 00:19:03.421 { 00:19:03.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.421 "dma_device_type": 2 00:19:03.421 } 
00:19:03.421 ], 00:19:03.421 "driver_specific": {} 00:19:03.421 }' 00:19:03.421 11:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:03.421 11:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:03.421 11:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:03.421 11:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:03.421 11:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:03.421 11:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:03.421 11:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:03.679 11:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:03.679 11:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:03.679 11:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:03.679 11:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:03.679 11:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:03.679 11:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:03.937 [2024-07-25 11:01:10.897639] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:03.937 [2024-07-25 11:01:10.897672] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:03.937 [2024-07-25 11:01:10.897759] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:03.937 [2024-07-25 11:01:10.897828] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 
0, going to free all in destruct 00:19:03.937 [2024-07-25 11:01:10.897851] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:19:03.937 11:01:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 3598506 00:19:03.937 11:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 3598506 ']' 00:19:03.937 11:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 3598506 00:19:03.938 11:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:19:03.938 11:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:03.938 11:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3598506 00:19:03.938 11:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:03.938 11:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:03.938 11:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3598506' 00:19:03.938 killing process with pid 3598506 00:19:03.938 11:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 3598506 00:19:03.938 [2024-07-25 11:01:10.974190] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:03.938 11:01:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 3598506 00:19:04.195 [2024-07-25 11:01:11.306910] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:06.094 11:01:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:19:06.094 00:19:06.094 real 0m29.299s 00:19:06.094 user 0m50.965s 00:19:06.094 sys 0m5.194s 00:19:06.094 11:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:19:06.094 11:01:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:06.094 ************************************ 00:19:06.094 END TEST raid_state_function_test 00:19:06.094 ************************************ 00:19:06.094 11:01:13 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:19:06.094 11:01:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:06.094 11:01:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:06.094 11:01:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:06.094 ************************************ 00:19:06.094 START TEST raid_state_function_test_sb 00:19:06.094 ************************************ 00:19:06.094 11:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:19:06.094 11:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:19:06.094 11:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:19:06.094 11:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:19:06.094 11:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:19:06.094 11:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:19:06.094 11:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:06.094 11:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:19:06.094 11:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:06.094 11:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:06.094 11:01:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:19:06.094 11:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:06.094 11:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:06.094 11:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:19:06.094 11:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:06.094 11:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:06.095 11:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:06.095 11:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:19:06.095 11:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:19:06.095 11:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:19:06.095 11:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:19:06.095 11:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:19:06.095 11:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:19:06.095 11:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:19:06.095 11:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:19:06.095 11:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:19:06.095 11:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:19:06.095 11:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=3604036 00:19:06.095 11:01:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3604036' 00:19:06.095 Process raid pid: 3604036 00:19:06.095 11:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:06.095 11:01:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 3604036 /var/tmp/spdk-raid.sock 00:19:06.095 11:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 3604036 ']' 00:19:06.095 11:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:06.095 11:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:06.095 11:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:06.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:06.095 11:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:06.095 11:01:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.352 [2024-07-25 11:01:13.262306] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:19:06.352 [2024-07-25 11:01:13.262418] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:06.352 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.352 EAL: Requested device 0000:3d:01.0 cannot be used 00:19:06.352 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.352 EAL: Requested device 0000:3d:01.1 cannot be used 00:19:06.352 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.352 EAL: Requested device 0000:3d:01.2 cannot be used 00:19:06.352 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.352 EAL: Requested device 0000:3d:01.3 cannot be used 00:19:06.352 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.352 EAL: Requested device 0000:3d:01.4 cannot be used 00:19:06.352 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.353 EAL: Requested device 0000:3d:01.5 cannot be used 00:19:06.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.353 EAL: Requested device 0000:3d:01.6 cannot be used 00:19:06.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.353 EAL: Requested device 0000:3d:01.7 cannot be used 00:19:06.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.353 EAL: Requested device 0000:3d:02.0 cannot be used 00:19:06.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.353 EAL: Requested device 0000:3d:02.1 cannot be used 00:19:06.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.353 EAL: Requested device 0000:3d:02.2 cannot be used 00:19:06.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.353 EAL: Requested device 0000:3d:02.3 cannot be used 00:19:06.353 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.353 EAL: Requested device 0000:3d:02.4 cannot be used 00:19:06.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.353 EAL: Requested device 0000:3d:02.5 cannot be used 00:19:06.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.353 EAL: Requested device 0000:3d:02.6 cannot be used 00:19:06.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.353 EAL: Requested device 0000:3d:02.7 cannot be used 00:19:06.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.353 EAL: Requested device 0000:3f:01.0 cannot be used 00:19:06.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.353 EAL: Requested device 0000:3f:01.1 cannot be used 00:19:06.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.353 EAL: Requested device 0000:3f:01.2 cannot be used 00:19:06.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.353 EAL: Requested device 0000:3f:01.3 cannot be used 00:19:06.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.353 EAL: Requested device 0000:3f:01.4 cannot be used 00:19:06.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.353 EAL: Requested device 0000:3f:01.5 cannot be used 00:19:06.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.353 EAL: Requested device 0000:3f:01.6 cannot be used 00:19:06.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.353 EAL: Requested device 0000:3f:01.7 cannot be used 00:19:06.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.353 EAL: Requested device 0000:3f:02.0 cannot be used 00:19:06.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.353 EAL: Requested device 0000:3f:02.1 cannot be used 00:19:06.353 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.353 EAL: Requested device 0000:3f:02.2 cannot be used 00:19:06.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.353 EAL: Requested device 0000:3f:02.3 cannot be used 00:19:06.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.353 EAL: Requested device 0000:3f:02.4 cannot be used 00:19:06.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.353 EAL: Requested device 0000:3f:02.5 cannot be used 00:19:06.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.353 EAL: Requested device 0000:3f:02.6 cannot be used 00:19:06.353 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:06.353 EAL: Requested device 0000:3f:02.7 cannot be used 00:19:06.610 [2024-07-25 11:01:13.492497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.873 [2024-07-25 11:01:13.767341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.134 [2024-07-25 11:01:14.117265] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:07.134 [2024-07-25 11:01:14.117303] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:07.391 11:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:07.391 11:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:19:07.391 11:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:07.391 [2024-07-25 11:01:14.509267] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:07.392 [2024-07-25 11:01:14.509326] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:19:07.392 [2024-07-25 11:01:14.509342] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:07.392 [2024-07-25 11:01:14.509359] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:07.392 [2024-07-25 11:01:14.509375] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:07.392 [2024-07-25 11:01:14.509391] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:07.687 11:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:07.687 11:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:07.687 11:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:07.687 11:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:07.687 11:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:07.687 11:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:07.687 11:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:07.687 11:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:07.687 11:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:07.687 11:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:07.687 11:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.687 11:01:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:07.687 11:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:07.687 "name": "Existed_Raid", 00:19:07.687 "uuid": "a195c9c7-1e22-4860-8c20-569e203ac609", 00:19:07.687 "strip_size_kb": 64, 00:19:07.687 "state": "configuring", 00:19:07.687 "raid_level": "concat", 00:19:07.687 "superblock": true, 00:19:07.687 "num_base_bdevs": 3, 00:19:07.687 "num_base_bdevs_discovered": 0, 00:19:07.687 "num_base_bdevs_operational": 3, 00:19:07.687 "base_bdevs_list": [ 00:19:07.687 { 00:19:07.687 "name": "BaseBdev1", 00:19:07.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.687 "is_configured": false, 00:19:07.687 "data_offset": 0, 00:19:07.687 "data_size": 0 00:19:07.687 }, 00:19:07.687 { 00:19:07.687 "name": "BaseBdev2", 00:19:07.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.687 "is_configured": false, 00:19:07.687 "data_offset": 0, 00:19:07.687 "data_size": 0 00:19:07.687 }, 00:19:07.687 { 00:19:07.687 "name": "BaseBdev3", 00:19:07.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.687 "is_configured": false, 00:19:07.687 "data_offset": 0, 00:19:07.687 "data_size": 0 00:19:07.687 } 00:19:07.687 ] 00:19:07.687 }' 00:19:07.687 11:01:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:07.687 11:01:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.260 11:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:08.518 [2024-07-25 11:01:15.543847] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:08.518 [2024-07-25 11:01:15.543892] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:19:08.518 11:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:08.776 [2024-07-25 11:01:15.772536] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:08.776 [2024-07-25 11:01:15.772578] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:08.776 [2024-07-25 11:01:15.772592] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:08.776 [2024-07-25 11:01:15.772611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:08.776 [2024-07-25 11:01:15.772622] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:08.776 [2024-07-25 11:01:15.772638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:08.776 11:01:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:09.034 [2024-07-25 11:01:16.043952] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:09.034 BaseBdev1 00:19:09.034 11:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:19:09.034 11:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:19:09.034 11:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:09.034 11:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:09.034 11:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' 
]] 00:19:09.034 11:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:09.034 11:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:09.293 11:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:09.556 [ 00:19:09.556 { 00:19:09.556 "name": "BaseBdev1", 00:19:09.556 "aliases": [ 00:19:09.556 "08a8a3d7-8c89-467e-b905-d226a896a7be" 00:19:09.556 ], 00:19:09.556 "product_name": "Malloc disk", 00:19:09.556 "block_size": 512, 00:19:09.556 "num_blocks": 65536, 00:19:09.556 "uuid": "08a8a3d7-8c89-467e-b905-d226a896a7be", 00:19:09.556 "assigned_rate_limits": { 00:19:09.556 "rw_ios_per_sec": 0, 00:19:09.556 "rw_mbytes_per_sec": 0, 00:19:09.556 "r_mbytes_per_sec": 0, 00:19:09.556 "w_mbytes_per_sec": 0 00:19:09.556 }, 00:19:09.556 "claimed": true, 00:19:09.556 "claim_type": "exclusive_write", 00:19:09.556 "zoned": false, 00:19:09.556 "supported_io_types": { 00:19:09.556 "read": true, 00:19:09.556 "write": true, 00:19:09.556 "unmap": true, 00:19:09.556 "flush": true, 00:19:09.556 "reset": true, 00:19:09.556 "nvme_admin": false, 00:19:09.556 "nvme_io": false, 00:19:09.556 "nvme_io_md": false, 00:19:09.556 "write_zeroes": true, 00:19:09.556 "zcopy": true, 00:19:09.556 "get_zone_info": false, 00:19:09.556 "zone_management": false, 00:19:09.556 "zone_append": false, 00:19:09.556 "compare": false, 00:19:09.556 "compare_and_write": false, 00:19:09.556 "abort": true, 00:19:09.556 "seek_hole": false, 00:19:09.556 "seek_data": false, 00:19:09.556 "copy": true, 00:19:09.556 "nvme_iov_md": false 00:19:09.556 }, 00:19:09.556 "memory_domains": [ 00:19:09.556 { 00:19:09.556 "dma_device_id": "system", 00:19:09.556 "dma_device_type": 1 
00:19:09.556 }, 00:19:09.556 { 00:19:09.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:09.556 "dma_device_type": 2 00:19:09.556 } 00:19:09.556 ], 00:19:09.556 "driver_specific": {} 00:19:09.556 } 00:19:09.556 ] 00:19:09.556 11:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:09.556 11:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:09.556 11:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:09.556 11:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:09.556 11:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:09.556 11:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:09.556 11:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:09.556 11:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:09.556 11:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:09.556 11:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:09.556 11:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:09.556 11:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:09.556 11:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:09.816 11:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:09.816 "name": "Existed_Raid", 
00:19:09.816 "uuid": "65d5855e-67f3-4ece-a96a-8516200114b0", 00:19:09.816 "strip_size_kb": 64, 00:19:09.816 "state": "configuring", 00:19:09.816 "raid_level": "concat", 00:19:09.816 "superblock": true, 00:19:09.816 "num_base_bdevs": 3, 00:19:09.816 "num_base_bdevs_discovered": 1, 00:19:09.816 "num_base_bdevs_operational": 3, 00:19:09.816 "base_bdevs_list": [ 00:19:09.816 { 00:19:09.816 "name": "BaseBdev1", 00:19:09.816 "uuid": "08a8a3d7-8c89-467e-b905-d226a896a7be", 00:19:09.816 "is_configured": true, 00:19:09.816 "data_offset": 2048, 00:19:09.816 "data_size": 63488 00:19:09.816 }, 00:19:09.816 { 00:19:09.816 "name": "BaseBdev2", 00:19:09.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.816 "is_configured": false, 00:19:09.816 "data_offset": 0, 00:19:09.816 "data_size": 0 00:19:09.816 }, 00:19:09.816 { 00:19:09.816 "name": "BaseBdev3", 00:19:09.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.816 "is_configured": false, 00:19:09.816 "data_offset": 0, 00:19:09.816 "data_size": 0 00:19:09.816 } 00:19:09.816 ] 00:19:09.816 }' 00:19:09.816 11:01:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:09.816 11:01:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.382 11:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:10.641 [2024-07-25 11:01:17.532003] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:10.641 [2024-07-25 11:01:17.532060] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:10.641 11:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 
BaseBdev3' -n Existed_Raid 00:19:10.641 [2024-07-25 11:01:17.752695] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:10.641 [2024-07-25 11:01:17.755025] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:10.641 [2024-07-25 11:01:17.755068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:10.641 [2024-07-25 11:01:17.755083] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:10.641 [2024-07-25 11:01:17.755100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:10.902 11:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:19:10.902 11:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:10.902 11:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:10.902 11:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:10.902 11:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:10.902 11:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:10.902 11:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:10.902 11:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:10.902 11:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:10.902 11:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:10.902 11:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:10.902 
11:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:10.902 11:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:10.902 11:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:10.902 11:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:10.902 "name": "Existed_Raid", 00:19:10.902 "uuid": "e3ed938b-bf0a-431b-bba4-b5a6f99227f6", 00:19:10.902 "strip_size_kb": 64, 00:19:10.902 "state": "configuring", 00:19:10.902 "raid_level": "concat", 00:19:10.902 "superblock": true, 00:19:10.902 "num_base_bdevs": 3, 00:19:10.902 "num_base_bdevs_discovered": 1, 00:19:10.902 "num_base_bdevs_operational": 3, 00:19:10.902 "base_bdevs_list": [ 00:19:10.902 { 00:19:10.902 "name": "BaseBdev1", 00:19:10.902 "uuid": "08a8a3d7-8c89-467e-b905-d226a896a7be", 00:19:10.902 "is_configured": true, 00:19:10.902 "data_offset": 2048, 00:19:10.902 "data_size": 63488 00:19:10.902 }, 00:19:10.902 { 00:19:10.902 "name": "BaseBdev2", 00:19:10.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.902 "is_configured": false, 00:19:10.902 "data_offset": 0, 00:19:10.902 "data_size": 0 00:19:10.902 }, 00:19:10.902 { 00:19:10.902 "name": "BaseBdev3", 00:19:10.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.902 "is_configured": false, 00:19:10.902 "data_offset": 0, 00:19:10.902 "data_size": 0 00:19:10.902 } 00:19:10.902 ] 00:19:10.902 }' 00:19:10.902 11:01:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:10.902 11:01:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.468 11:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:11.726 [2024-07-25 11:01:18.785649] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:11.726 BaseBdev2 00:19:11.726 11:01:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:11.726 11:01:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:19:11.726 11:01:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:11.726 11:01:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:11.726 11:01:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:11.726 11:01:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:11.726 11:01:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:11.991 11:01:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:12.250 [ 00:19:12.250 { 00:19:12.250 "name": "BaseBdev2", 00:19:12.250 "aliases": [ 00:19:12.250 "b31b3662-039e-43f8-95ff-b5f49d554cbe" 00:19:12.250 ], 00:19:12.250 "product_name": "Malloc disk", 00:19:12.250 "block_size": 512, 00:19:12.250 "num_blocks": 65536, 00:19:12.250 "uuid": "b31b3662-039e-43f8-95ff-b5f49d554cbe", 00:19:12.250 "assigned_rate_limits": { 00:19:12.250 "rw_ios_per_sec": 0, 00:19:12.250 "rw_mbytes_per_sec": 0, 00:19:12.250 "r_mbytes_per_sec": 0, 00:19:12.250 "w_mbytes_per_sec": 0 00:19:12.250 }, 00:19:12.250 "claimed": true, 00:19:12.250 "claim_type": "exclusive_write", 00:19:12.250 "zoned": false, 00:19:12.250 "supported_io_types": { 
00:19:12.250 "read": true, 00:19:12.250 "write": true, 00:19:12.250 "unmap": true, 00:19:12.250 "flush": true, 00:19:12.250 "reset": true, 00:19:12.250 "nvme_admin": false, 00:19:12.250 "nvme_io": false, 00:19:12.250 "nvme_io_md": false, 00:19:12.250 "write_zeroes": true, 00:19:12.250 "zcopy": true, 00:19:12.250 "get_zone_info": false, 00:19:12.250 "zone_management": false, 00:19:12.250 "zone_append": false, 00:19:12.250 "compare": false, 00:19:12.250 "compare_and_write": false, 00:19:12.250 "abort": true, 00:19:12.250 "seek_hole": false, 00:19:12.250 "seek_data": false, 00:19:12.250 "copy": true, 00:19:12.250 "nvme_iov_md": false 00:19:12.250 }, 00:19:12.250 "memory_domains": [ 00:19:12.250 { 00:19:12.250 "dma_device_id": "system", 00:19:12.250 "dma_device_type": 1 00:19:12.250 }, 00:19:12.250 { 00:19:12.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:12.250 "dma_device_type": 2 00:19:12.250 } 00:19:12.250 ], 00:19:12.250 "driver_specific": {} 00:19:12.250 } 00:19:12.250 ] 00:19:12.250 11:01:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:12.250 11:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:12.250 11:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:12.250 11:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:12.250 11:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:12.250 11:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:12.250 11:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:12.250 11:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:12.250 11:01:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:12.250 11:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:12.250 11:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:12.250 11:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:12.250 11:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:12.250 11:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.250 11:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:12.508 11:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:12.508 "name": "Existed_Raid", 00:19:12.508 "uuid": "e3ed938b-bf0a-431b-bba4-b5a6f99227f6", 00:19:12.508 "strip_size_kb": 64, 00:19:12.508 "state": "configuring", 00:19:12.508 "raid_level": "concat", 00:19:12.508 "superblock": true, 00:19:12.508 "num_base_bdevs": 3, 00:19:12.508 "num_base_bdevs_discovered": 2, 00:19:12.508 "num_base_bdevs_operational": 3, 00:19:12.508 "base_bdevs_list": [ 00:19:12.508 { 00:19:12.508 "name": "BaseBdev1", 00:19:12.508 "uuid": "08a8a3d7-8c89-467e-b905-d226a896a7be", 00:19:12.508 "is_configured": true, 00:19:12.508 "data_offset": 2048, 00:19:12.508 "data_size": 63488 00:19:12.508 }, 00:19:12.508 { 00:19:12.508 "name": "BaseBdev2", 00:19:12.508 "uuid": "b31b3662-039e-43f8-95ff-b5f49d554cbe", 00:19:12.508 "is_configured": true, 00:19:12.508 "data_offset": 2048, 00:19:12.508 "data_size": 63488 00:19:12.508 }, 00:19:12.508 { 00:19:12.508 "name": "BaseBdev3", 00:19:12.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.508 "is_configured": false, 00:19:12.508 "data_offset": 0, 00:19:12.508 
"data_size": 0 00:19:12.508 } 00:19:12.508 ] 00:19:12.508 }' 00:19:12.508 11:01:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:12.508 11:01:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.078 11:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:13.336 [2024-07-25 11:01:20.329204] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:13.336 [2024-07-25 11:01:20.329502] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:13.336 [2024-07-25 11:01:20.329531] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:13.336 [2024-07-25 11:01:20.329849] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:19:13.336 [2024-07-25 11:01:20.330089] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:13.336 [2024-07-25 11:01:20.330105] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:13.336 [2024-07-25 11:01:20.330302] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:13.336 BaseBdev3 00:19:13.336 11:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:19:13.336 11:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:19:13.336 11:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:13.336 11:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:13.336 11:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:13.336 
11:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:13.336 11:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:13.594 11:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:13.852 [ 00:19:13.852 { 00:19:13.852 "name": "BaseBdev3", 00:19:13.852 "aliases": [ 00:19:13.852 "0b75250a-4163-47b3-a82e-98845fa0ff9a" 00:19:13.852 ], 00:19:13.852 "product_name": "Malloc disk", 00:19:13.852 "block_size": 512, 00:19:13.852 "num_blocks": 65536, 00:19:13.852 "uuid": "0b75250a-4163-47b3-a82e-98845fa0ff9a", 00:19:13.852 "assigned_rate_limits": { 00:19:13.852 "rw_ios_per_sec": 0, 00:19:13.852 "rw_mbytes_per_sec": 0, 00:19:13.852 "r_mbytes_per_sec": 0, 00:19:13.852 "w_mbytes_per_sec": 0 00:19:13.852 }, 00:19:13.853 "claimed": true, 00:19:13.853 "claim_type": "exclusive_write", 00:19:13.853 "zoned": false, 00:19:13.853 "supported_io_types": { 00:19:13.853 "read": true, 00:19:13.853 "write": true, 00:19:13.853 "unmap": true, 00:19:13.853 "flush": true, 00:19:13.853 "reset": true, 00:19:13.853 "nvme_admin": false, 00:19:13.853 "nvme_io": false, 00:19:13.853 "nvme_io_md": false, 00:19:13.853 "write_zeroes": true, 00:19:13.853 "zcopy": true, 00:19:13.853 "get_zone_info": false, 00:19:13.853 "zone_management": false, 00:19:13.853 "zone_append": false, 00:19:13.853 "compare": false, 00:19:13.853 "compare_and_write": false, 00:19:13.853 "abort": true, 00:19:13.853 "seek_hole": false, 00:19:13.853 "seek_data": false, 00:19:13.853 "copy": true, 00:19:13.853 "nvme_iov_md": false 00:19:13.853 }, 00:19:13.853 "memory_domains": [ 00:19:13.853 { 00:19:13.853 "dma_device_id": "system", 00:19:13.853 "dma_device_type": 1 00:19:13.853 }, 
00:19:13.853 { 00:19:13.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.853 "dma_device_type": 2 00:19:13.853 } 00:19:13.853 ], 00:19:13.853 "driver_specific": {} 00:19:13.853 } 00:19:13.853 ] 00:19:13.853 11:01:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:13.853 11:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:13.853 11:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:13.853 11:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:19:13.853 11:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:13.853 11:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:13.853 11:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:13.853 11:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:13.853 11:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:13.853 11:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:13.853 11:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:13.853 11:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:13.853 11:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:13.853 11:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.853 11:01:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:19:14.112 11:01:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:14.112 "name": "Existed_Raid", 00:19:14.112 "uuid": "e3ed938b-bf0a-431b-bba4-b5a6f99227f6", 00:19:14.112 "strip_size_kb": 64, 00:19:14.112 "state": "online", 00:19:14.112 "raid_level": "concat", 00:19:14.112 "superblock": true, 00:19:14.112 "num_base_bdevs": 3, 00:19:14.112 "num_base_bdevs_discovered": 3, 00:19:14.112 "num_base_bdevs_operational": 3, 00:19:14.112 "base_bdevs_list": [ 00:19:14.112 { 00:19:14.112 "name": "BaseBdev1", 00:19:14.112 "uuid": "08a8a3d7-8c89-467e-b905-d226a896a7be", 00:19:14.112 "is_configured": true, 00:19:14.112 "data_offset": 2048, 00:19:14.112 "data_size": 63488 00:19:14.112 }, 00:19:14.112 { 00:19:14.112 "name": "BaseBdev2", 00:19:14.112 "uuid": "b31b3662-039e-43f8-95ff-b5f49d554cbe", 00:19:14.112 "is_configured": true, 00:19:14.112 "data_offset": 2048, 00:19:14.112 "data_size": 63488 00:19:14.112 }, 00:19:14.112 { 00:19:14.112 "name": "BaseBdev3", 00:19:14.112 "uuid": "0b75250a-4163-47b3-a82e-98845fa0ff9a", 00:19:14.112 "is_configured": true, 00:19:14.112 "data_offset": 2048, 00:19:14.112 "data_size": 63488 00:19:14.112 } 00:19:14.112 ] 00:19:14.112 }' 00:19:14.112 11:01:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:14.112 11:01:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.678 11:01:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:19:14.678 11:01:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:14.678 11:01:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:14.678 11:01:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:14.679 11:01:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:14.679 11:01:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:19:14.679 11:01:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:14.679 11:01:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:14.937 [2024-07-25 11:01:21.801607] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:14.937 11:01:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:14.937 "name": "Existed_Raid", 00:19:14.937 "aliases": [ 00:19:14.937 "e3ed938b-bf0a-431b-bba4-b5a6f99227f6" 00:19:14.937 ], 00:19:14.937 "product_name": "Raid Volume", 00:19:14.937 "block_size": 512, 00:19:14.937 "num_blocks": 190464, 00:19:14.937 "uuid": "e3ed938b-bf0a-431b-bba4-b5a6f99227f6", 00:19:14.937 "assigned_rate_limits": { 00:19:14.937 "rw_ios_per_sec": 0, 00:19:14.937 "rw_mbytes_per_sec": 0, 00:19:14.937 "r_mbytes_per_sec": 0, 00:19:14.937 "w_mbytes_per_sec": 0 00:19:14.937 }, 00:19:14.937 "claimed": false, 00:19:14.937 "zoned": false, 00:19:14.937 "supported_io_types": { 00:19:14.937 "read": true, 00:19:14.937 "write": true, 00:19:14.937 "unmap": true, 00:19:14.937 "flush": true, 00:19:14.937 "reset": true, 00:19:14.937 "nvme_admin": false, 00:19:14.937 "nvme_io": false, 00:19:14.937 "nvme_io_md": false, 00:19:14.937 "write_zeroes": true, 00:19:14.937 "zcopy": false, 00:19:14.937 "get_zone_info": false, 00:19:14.937 "zone_management": false, 00:19:14.937 "zone_append": false, 00:19:14.937 "compare": false, 00:19:14.937 "compare_and_write": false, 00:19:14.937 "abort": false, 00:19:14.937 "seek_hole": false, 00:19:14.937 "seek_data": false, 00:19:14.937 "copy": false, 00:19:14.937 "nvme_iov_md": false 00:19:14.937 }, 00:19:14.937 
"memory_domains": [ 00:19:14.937 { 00:19:14.937 "dma_device_id": "system", 00:19:14.937 "dma_device_type": 1 00:19:14.937 }, 00:19:14.937 { 00:19:14.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.937 "dma_device_type": 2 00:19:14.937 }, 00:19:14.937 { 00:19:14.937 "dma_device_id": "system", 00:19:14.937 "dma_device_type": 1 00:19:14.937 }, 00:19:14.937 { 00:19:14.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.937 "dma_device_type": 2 00:19:14.937 }, 00:19:14.937 { 00:19:14.937 "dma_device_id": "system", 00:19:14.937 "dma_device_type": 1 00:19:14.937 }, 00:19:14.937 { 00:19:14.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.937 "dma_device_type": 2 00:19:14.937 } 00:19:14.937 ], 00:19:14.937 "driver_specific": { 00:19:14.937 "raid": { 00:19:14.937 "uuid": "e3ed938b-bf0a-431b-bba4-b5a6f99227f6", 00:19:14.937 "strip_size_kb": 64, 00:19:14.937 "state": "online", 00:19:14.937 "raid_level": "concat", 00:19:14.937 "superblock": true, 00:19:14.937 "num_base_bdevs": 3, 00:19:14.937 "num_base_bdevs_discovered": 3, 00:19:14.937 "num_base_bdevs_operational": 3, 00:19:14.937 "base_bdevs_list": [ 00:19:14.937 { 00:19:14.937 "name": "BaseBdev1", 00:19:14.937 "uuid": "08a8a3d7-8c89-467e-b905-d226a896a7be", 00:19:14.937 "is_configured": true, 00:19:14.937 "data_offset": 2048, 00:19:14.937 "data_size": 63488 00:19:14.937 }, 00:19:14.937 { 00:19:14.937 "name": "BaseBdev2", 00:19:14.937 "uuid": "b31b3662-039e-43f8-95ff-b5f49d554cbe", 00:19:14.937 "is_configured": true, 00:19:14.937 "data_offset": 2048, 00:19:14.937 "data_size": 63488 00:19:14.937 }, 00:19:14.937 { 00:19:14.937 "name": "BaseBdev3", 00:19:14.937 "uuid": "0b75250a-4163-47b3-a82e-98845fa0ff9a", 00:19:14.937 "is_configured": true, 00:19:14.937 "data_offset": 2048, 00:19:14.937 "data_size": 63488 00:19:14.937 } 00:19:14.937 ] 00:19:14.937 } 00:19:14.937 } 00:19:14.937 }' 00:19:14.937 11:01:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:14.938 11:01:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:19:14.938 BaseBdev2 00:19:14.938 BaseBdev3' 00:19:14.938 11:01:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:14.938 11:01:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:14.938 11:01:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:15.195 11:01:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:15.195 "name": "BaseBdev1", 00:19:15.195 "aliases": [ 00:19:15.195 "08a8a3d7-8c89-467e-b905-d226a896a7be" 00:19:15.195 ], 00:19:15.195 "product_name": "Malloc disk", 00:19:15.195 "block_size": 512, 00:19:15.195 "num_blocks": 65536, 00:19:15.195 "uuid": "08a8a3d7-8c89-467e-b905-d226a896a7be", 00:19:15.195 "assigned_rate_limits": { 00:19:15.195 "rw_ios_per_sec": 0, 00:19:15.195 "rw_mbytes_per_sec": 0, 00:19:15.195 "r_mbytes_per_sec": 0, 00:19:15.195 "w_mbytes_per_sec": 0 00:19:15.195 }, 00:19:15.195 "claimed": true, 00:19:15.195 "claim_type": "exclusive_write", 00:19:15.195 "zoned": false, 00:19:15.195 "supported_io_types": { 00:19:15.195 "read": true, 00:19:15.195 "write": true, 00:19:15.195 "unmap": true, 00:19:15.195 "flush": true, 00:19:15.195 "reset": true, 00:19:15.195 "nvme_admin": false, 00:19:15.195 "nvme_io": false, 00:19:15.195 "nvme_io_md": false, 00:19:15.195 "write_zeroes": true, 00:19:15.195 "zcopy": true, 00:19:15.195 "get_zone_info": false, 00:19:15.195 "zone_management": false, 00:19:15.195 "zone_append": false, 00:19:15.195 "compare": false, 00:19:15.195 "compare_and_write": false, 00:19:15.195 "abort": true, 00:19:15.195 "seek_hole": false, 00:19:15.195 "seek_data": false, 
00:19:15.195 "copy": true, 00:19:15.195 "nvme_iov_md": false 00:19:15.195 }, 00:19:15.195 "memory_domains": [ 00:19:15.195 { 00:19:15.195 "dma_device_id": "system", 00:19:15.195 "dma_device_type": 1 00:19:15.195 }, 00:19:15.195 { 00:19:15.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:15.195 "dma_device_type": 2 00:19:15.195 } 00:19:15.195 ], 00:19:15.195 "driver_specific": {} 00:19:15.195 }' 00:19:15.195 11:01:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:15.195 11:01:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:15.196 11:01:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:15.196 11:01:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:15.196 11:01:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:15.196 11:01:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:15.196 11:01:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:15.454 11:01:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:15.454 11:01:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:15.454 11:01:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:15.454 11:01:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:15.454 11:01:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:15.454 11:01:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:15.454 11:01:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:15.454 11:01:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:15.713 11:01:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:15.713 "name": "BaseBdev2", 00:19:15.713 "aliases": [ 00:19:15.713 "b31b3662-039e-43f8-95ff-b5f49d554cbe" 00:19:15.713 ], 00:19:15.713 "product_name": "Malloc disk", 00:19:15.713 "block_size": 512, 00:19:15.713 "num_blocks": 65536, 00:19:15.713 "uuid": "b31b3662-039e-43f8-95ff-b5f49d554cbe", 00:19:15.713 "assigned_rate_limits": { 00:19:15.713 "rw_ios_per_sec": 0, 00:19:15.713 "rw_mbytes_per_sec": 0, 00:19:15.713 "r_mbytes_per_sec": 0, 00:19:15.713 "w_mbytes_per_sec": 0 00:19:15.713 }, 00:19:15.713 "claimed": true, 00:19:15.713 "claim_type": "exclusive_write", 00:19:15.713 "zoned": false, 00:19:15.713 "supported_io_types": { 00:19:15.713 "read": true, 00:19:15.713 "write": true, 00:19:15.713 "unmap": true, 00:19:15.713 "flush": true, 00:19:15.713 "reset": true, 00:19:15.713 "nvme_admin": false, 00:19:15.713 "nvme_io": false, 00:19:15.713 "nvme_io_md": false, 00:19:15.713 "write_zeroes": true, 00:19:15.713 "zcopy": true, 00:19:15.713 "get_zone_info": false, 00:19:15.713 "zone_management": false, 00:19:15.713 "zone_append": false, 00:19:15.713 "compare": false, 00:19:15.713 "compare_and_write": false, 00:19:15.713 "abort": true, 00:19:15.713 "seek_hole": false, 00:19:15.713 "seek_data": false, 00:19:15.713 "copy": true, 00:19:15.713 "nvme_iov_md": false 00:19:15.713 }, 00:19:15.713 "memory_domains": [ 00:19:15.713 { 00:19:15.713 "dma_device_id": "system", 00:19:15.713 "dma_device_type": 1 00:19:15.713 }, 00:19:15.713 { 00:19:15.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:15.713 "dma_device_type": 2 00:19:15.713 } 00:19:15.713 ], 00:19:15.713 "driver_specific": {} 00:19:15.713 }' 00:19:15.713 11:01:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:15.713 11:01:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:15.713 11:01:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:15.713 11:01:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:15.713 11:01:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:15.972 11:01:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:15.972 11:01:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:15.972 11:01:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:15.972 11:01:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:15.972 11:01:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:15.972 11:01:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:15.972 11:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:15.972 11:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:15.972 11:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:15.972 11:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:16.230 11:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:16.230 "name": "BaseBdev3", 00:19:16.230 "aliases": [ 00:19:16.230 "0b75250a-4163-47b3-a82e-98845fa0ff9a" 00:19:16.230 ], 00:19:16.230 "product_name": "Malloc disk", 00:19:16.230 "block_size": 512, 00:19:16.230 "num_blocks": 65536, 00:19:16.230 "uuid": "0b75250a-4163-47b3-a82e-98845fa0ff9a", 00:19:16.230 
"assigned_rate_limits": { 00:19:16.230 "rw_ios_per_sec": 0, 00:19:16.230 "rw_mbytes_per_sec": 0, 00:19:16.230 "r_mbytes_per_sec": 0, 00:19:16.230 "w_mbytes_per_sec": 0 00:19:16.230 }, 00:19:16.230 "claimed": true, 00:19:16.230 "claim_type": "exclusive_write", 00:19:16.230 "zoned": false, 00:19:16.230 "supported_io_types": { 00:19:16.230 "read": true, 00:19:16.230 "write": true, 00:19:16.230 "unmap": true, 00:19:16.230 "flush": true, 00:19:16.230 "reset": true, 00:19:16.230 "nvme_admin": false, 00:19:16.230 "nvme_io": false, 00:19:16.230 "nvme_io_md": false, 00:19:16.230 "write_zeroes": true, 00:19:16.230 "zcopy": true, 00:19:16.230 "get_zone_info": false, 00:19:16.230 "zone_management": false, 00:19:16.230 "zone_append": false, 00:19:16.230 "compare": false, 00:19:16.230 "compare_and_write": false, 00:19:16.230 "abort": true, 00:19:16.230 "seek_hole": false, 00:19:16.230 "seek_data": false, 00:19:16.230 "copy": true, 00:19:16.230 "nvme_iov_md": false 00:19:16.230 }, 00:19:16.230 "memory_domains": [ 00:19:16.230 { 00:19:16.230 "dma_device_id": "system", 00:19:16.230 "dma_device_type": 1 00:19:16.230 }, 00:19:16.230 { 00:19:16.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:16.230 "dma_device_type": 2 00:19:16.230 } 00:19:16.230 ], 00:19:16.230 "driver_specific": {} 00:19:16.230 }' 00:19:16.230 11:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:16.230 11:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:16.230 11:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:16.230 11:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:16.488 11:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:16.488 11:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:16.488 11:01:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:16.488 11:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:16.488 11:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:16.488 11:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:16.488 11:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:16.488 11:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:16.488 11:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:16.745 [2024-07-25 11:01:23.810793] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:16.745 [2024-07-25 11:01:23.810829] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:16.745 [2024-07-25 11:01:23.810895] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:17.004 11:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:19:17.004 11:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:19:17.004 11:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:17.004 11:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:19:17.004 11:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:19:17.004 11:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:19:17.004 11:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:17.004 11:01:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:19:17.004 11:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:17.004 11:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:17.004 11:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:17.004 11:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:17.004 11:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:17.004 11:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:17.004 11:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:17.004 11:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.004 11:01:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:17.004 11:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:17.004 "name": "Existed_Raid", 00:19:17.004 "uuid": "e3ed938b-bf0a-431b-bba4-b5a6f99227f6", 00:19:17.004 "strip_size_kb": 64, 00:19:17.004 "state": "offline", 00:19:17.004 "raid_level": "concat", 00:19:17.004 "superblock": true, 00:19:17.004 "num_base_bdevs": 3, 00:19:17.004 "num_base_bdevs_discovered": 2, 00:19:17.004 "num_base_bdevs_operational": 2, 00:19:17.004 "base_bdevs_list": [ 00:19:17.004 { 00:19:17.004 "name": null, 00:19:17.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:17.004 "is_configured": false, 00:19:17.004 "data_offset": 2048, 00:19:17.004 "data_size": 63488 00:19:17.004 }, 00:19:17.004 { 00:19:17.004 "name": 
"BaseBdev2", 00:19:17.004 "uuid": "b31b3662-039e-43f8-95ff-b5f49d554cbe", 00:19:17.004 "is_configured": true, 00:19:17.004 "data_offset": 2048, 00:19:17.004 "data_size": 63488 00:19:17.004 }, 00:19:17.004 { 00:19:17.004 "name": "BaseBdev3", 00:19:17.004 "uuid": "0b75250a-4163-47b3-a82e-98845fa0ff9a", 00:19:17.004 "is_configured": true, 00:19:17.004 "data_offset": 2048, 00:19:17.004 "data_size": 63488 00:19:17.004 } 00:19:17.004 ] 00:19:17.004 }' 00:19:17.004 11:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:17.004 11:01:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.571 11:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:19:17.571 11:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:17.571 11:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:17.571 11:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:17.829 11:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:17.829 11:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:17.829 11:01:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:18.087 [2024-07-25 11:01:25.115055] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:18.344 11:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:18.344 11:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:18.344 11:01:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.345 11:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:18.603 11:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:18.603 11:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:18.603 11:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:18.603 [2024-07-25 11:01:25.714965] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:18.603 [2024-07-25 11:01:25.715029] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:18.867 11:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:18.867 11:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:18.867 11:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.867 11:01:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:19:19.127 11:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:19:19.127 11:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:19:19.127 11:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:19:19.127 11:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:19:19.127 11:01:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:19.127 11:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:19.385 BaseBdev2 00:19:19.385 11:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:19:19.385 11:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:19:19.385 11:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:19.385 11:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:19.385 11:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:19.385 11:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:19.385 11:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:19.643 11:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:19.902 [ 00:19:19.902 { 00:19:19.902 "name": "BaseBdev2", 00:19:19.902 "aliases": [ 00:19:19.902 "1f29e4d5-5a88-4979-a850-86f592126679" 00:19:19.902 ], 00:19:19.902 "product_name": "Malloc disk", 00:19:19.902 "block_size": 512, 00:19:19.902 "num_blocks": 65536, 00:19:19.902 "uuid": "1f29e4d5-5a88-4979-a850-86f592126679", 00:19:19.902 "assigned_rate_limits": { 00:19:19.902 "rw_ios_per_sec": 0, 00:19:19.902 "rw_mbytes_per_sec": 0, 00:19:19.902 "r_mbytes_per_sec": 0, 00:19:19.902 "w_mbytes_per_sec": 0 00:19:19.902 }, 00:19:19.902 
"claimed": false, 00:19:19.902 "zoned": false, 00:19:19.902 "supported_io_types": { 00:19:19.902 "read": true, 00:19:19.902 "write": true, 00:19:19.902 "unmap": true, 00:19:19.902 "flush": true, 00:19:19.902 "reset": true, 00:19:19.902 "nvme_admin": false, 00:19:19.902 "nvme_io": false, 00:19:19.902 "nvme_io_md": false, 00:19:19.902 "write_zeroes": true, 00:19:19.902 "zcopy": true, 00:19:19.902 "get_zone_info": false, 00:19:19.902 "zone_management": false, 00:19:19.902 "zone_append": false, 00:19:19.902 "compare": false, 00:19:19.902 "compare_and_write": false, 00:19:19.902 "abort": true, 00:19:19.902 "seek_hole": false, 00:19:19.902 "seek_data": false, 00:19:19.902 "copy": true, 00:19:19.902 "nvme_iov_md": false 00:19:19.902 }, 00:19:19.902 "memory_domains": [ 00:19:19.902 { 00:19:19.902 "dma_device_id": "system", 00:19:19.902 "dma_device_type": 1 00:19:19.902 }, 00:19:19.902 { 00:19:19.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:19.902 "dma_device_type": 2 00:19:19.902 } 00:19:19.902 ], 00:19:19.902 "driver_specific": {} 00:19:19.902 } 00:19:19.902 ] 00:19:19.902 11:01:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:19.902 11:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:19.902 11:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:19.902 11:01:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:20.160 BaseBdev3 00:19:20.161 11:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:19:20.161 11:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:19:20.161 11:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:19:20.161 11:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:20.161 11:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:20.161 11:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:20.161 11:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:20.448 11:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:20.448 [ 00:19:20.448 { 00:19:20.448 "name": "BaseBdev3", 00:19:20.448 "aliases": [ 00:19:20.448 "07a95b77-1038-4f97-8c54-9040266679d9" 00:19:20.448 ], 00:19:20.448 "product_name": "Malloc disk", 00:19:20.448 "block_size": 512, 00:19:20.448 "num_blocks": 65536, 00:19:20.448 "uuid": "07a95b77-1038-4f97-8c54-9040266679d9", 00:19:20.448 "assigned_rate_limits": { 00:19:20.449 "rw_ios_per_sec": 0, 00:19:20.449 "rw_mbytes_per_sec": 0, 00:19:20.449 "r_mbytes_per_sec": 0, 00:19:20.449 "w_mbytes_per_sec": 0 00:19:20.449 }, 00:19:20.449 "claimed": false, 00:19:20.449 "zoned": false, 00:19:20.449 "supported_io_types": { 00:19:20.449 "read": true, 00:19:20.449 "write": true, 00:19:20.449 "unmap": true, 00:19:20.449 "flush": true, 00:19:20.449 "reset": true, 00:19:20.449 "nvme_admin": false, 00:19:20.449 "nvme_io": false, 00:19:20.449 "nvme_io_md": false, 00:19:20.449 "write_zeroes": true, 00:19:20.449 "zcopy": true, 00:19:20.449 "get_zone_info": false, 00:19:20.449 "zone_management": false, 00:19:20.449 "zone_append": false, 00:19:20.449 "compare": false, 00:19:20.449 "compare_and_write": false, 00:19:20.449 "abort": true, 00:19:20.449 "seek_hole": false, 00:19:20.449 "seek_data": false, 00:19:20.449 "copy": true, 
00:19:20.449 "nvme_iov_md": false 00:19:20.449 }, 00:19:20.449 "memory_domains": [ 00:19:20.449 { 00:19:20.449 "dma_device_id": "system", 00:19:20.449 "dma_device_type": 1 00:19:20.449 }, 00:19:20.449 { 00:19:20.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:20.449 "dma_device_type": 2 00:19:20.449 } 00:19:20.449 ], 00:19:20.449 "driver_specific": {} 00:19:20.449 } 00:19:20.449 ] 00:19:20.449 11:01:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:20.449 11:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:20.449 11:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:20.449 11:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:20.748 [2024-07-25 11:01:27.741948] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:20.748 [2024-07-25 11:01:27.741998] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:20.748 [2024-07-25 11:01:27.742038] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:20.748 [2024-07-25 11:01:27.744353] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:20.748 11:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:20.748 11:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:20.748 11:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:20.748 11:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 
00:19:20.748 11:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:20.748 11:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:20.748 11:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:20.748 11:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:20.748 11:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:20.748 11:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:20.748 11:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.748 11:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:21.006 11:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:21.006 "name": "Existed_Raid", 00:19:21.006 "uuid": "fc366107-c1d7-4593-9fcd-8234926b754f", 00:19:21.006 "strip_size_kb": 64, 00:19:21.006 "state": "configuring", 00:19:21.006 "raid_level": "concat", 00:19:21.006 "superblock": true, 00:19:21.006 "num_base_bdevs": 3, 00:19:21.006 "num_base_bdevs_discovered": 2, 00:19:21.006 "num_base_bdevs_operational": 3, 00:19:21.006 "base_bdevs_list": [ 00:19:21.006 { 00:19:21.006 "name": "BaseBdev1", 00:19:21.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.006 "is_configured": false, 00:19:21.006 "data_offset": 0, 00:19:21.006 "data_size": 0 00:19:21.006 }, 00:19:21.006 { 00:19:21.006 "name": "BaseBdev2", 00:19:21.006 "uuid": "1f29e4d5-5a88-4979-a850-86f592126679", 00:19:21.006 "is_configured": true, 00:19:21.006 "data_offset": 2048, 00:19:21.006 "data_size": 63488 00:19:21.006 }, 00:19:21.006 { 
00:19:21.006 "name": "BaseBdev3", 00:19:21.006 "uuid": "07a95b77-1038-4f97-8c54-9040266679d9", 00:19:21.006 "is_configured": true, 00:19:21.006 "data_offset": 2048, 00:19:21.006 "data_size": 63488 00:19:21.006 } 00:19:21.006 ] 00:19:21.006 }' 00:19:21.006 11:01:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:21.006 11:01:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.574 11:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:21.835 [2024-07-25 11:01:28.772896] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:21.835 11:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:21.835 11:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:21.835 11:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:21.835 11:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:21.835 11:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:21.835 11:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:21.835 11:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:21.835 11:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:21.835 11:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:21.835 11:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:21.835 11:01:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.835 11:01:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:22.094 11:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:22.094 "name": "Existed_Raid", 00:19:22.094 "uuid": "fc366107-c1d7-4593-9fcd-8234926b754f", 00:19:22.094 "strip_size_kb": 64, 00:19:22.094 "state": "configuring", 00:19:22.094 "raid_level": "concat", 00:19:22.094 "superblock": true, 00:19:22.094 "num_base_bdevs": 3, 00:19:22.094 "num_base_bdevs_discovered": 1, 00:19:22.094 "num_base_bdevs_operational": 3, 00:19:22.094 "base_bdevs_list": [ 00:19:22.094 { 00:19:22.094 "name": "BaseBdev1", 00:19:22.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.094 "is_configured": false, 00:19:22.094 "data_offset": 0, 00:19:22.094 "data_size": 0 00:19:22.094 }, 00:19:22.094 { 00:19:22.094 "name": null, 00:19:22.094 "uuid": "1f29e4d5-5a88-4979-a850-86f592126679", 00:19:22.094 "is_configured": false, 00:19:22.094 "data_offset": 2048, 00:19:22.094 "data_size": 63488 00:19:22.094 }, 00:19:22.094 { 00:19:22.094 "name": "BaseBdev3", 00:19:22.094 "uuid": "07a95b77-1038-4f97-8c54-9040266679d9", 00:19:22.094 "is_configured": true, 00:19:22.094 "data_offset": 2048, 00:19:22.094 "data_size": 63488 00:19:22.094 } 00:19:22.094 ] 00:19:22.094 }' 00:19:22.094 11:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:22.094 11:01:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.671 11:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.671 11:01:29 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:22.930 11:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:19:22.930 11:01:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:23.188 [2024-07-25 11:01:30.091343] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:23.188 BaseBdev1 00:19:23.188 11:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:19:23.188 11:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:19:23.188 11:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:23.188 11:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:23.188 11:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:23.188 11:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:23.188 11:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:23.446 11:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:23.446 [ 00:19:23.446 { 00:19:23.446 "name": "BaseBdev1", 00:19:23.446 "aliases": [ 00:19:23.446 "351e039d-d5bc-42d4-954a-a64711ab3a76" 00:19:23.446 ], 00:19:23.446 "product_name": "Malloc disk", 00:19:23.446 "block_size": 512, 00:19:23.446 "num_blocks": 65536, 00:19:23.446 "uuid": 
"351e039d-d5bc-42d4-954a-a64711ab3a76", 00:19:23.446 "assigned_rate_limits": { 00:19:23.446 "rw_ios_per_sec": 0, 00:19:23.446 "rw_mbytes_per_sec": 0, 00:19:23.446 "r_mbytes_per_sec": 0, 00:19:23.446 "w_mbytes_per_sec": 0 00:19:23.446 }, 00:19:23.446 "claimed": true, 00:19:23.446 "claim_type": "exclusive_write", 00:19:23.446 "zoned": false, 00:19:23.446 "supported_io_types": { 00:19:23.446 "read": true, 00:19:23.446 "write": true, 00:19:23.446 "unmap": true, 00:19:23.446 "flush": true, 00:19:23.446 "reset": true, 00:19:23.446 "nvme_admin": false, 00:19:23.446 "nvme_io": false, 00:19:23.446 "nvme_io_md": false, 00:19:23.446 "write_zeroes": true, 00:19:23.446 "zcopy": true, 00:19:23.446 "get_zone_info": false, 00:19:23.446 "zone_management": false, 00:19:23.446 "zone_append": false, 00:19:23.446 "compare": false, 00:19:23.446 "compare_and_write": false, 00:19:23.446 "abort": true, 00:19:23.446 "seek_hole": false, 00:19:23.446 "seek_data": false, 00:19:23.446 "copy": true, 00:19:23.446 "nvme_iov_md": false 00:19:23.446 }, 00:19:23.446 "memory_domains": [ 00:19:23.446 { 00:19:23.446 "dma_device_id": "system", 00:19:23.446 "dma_device_type": 1 00:19:23.446 }, 00:19:23.446 { 00:19:23.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:23.446 "dma_device_type": 2 00:19:23.446 } 00:19:23.446 ], 00:19:23.446 "driver_specific": {} 00:19:23.446 } 00:19:23.446 ] 00:19:23.705 11:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:23.705 11:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:23.705 11:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:23.705 11:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:23.705 11:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 
00:19:23.705 11:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:23.705 11:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:23.705 11:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:23.705 11:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:23.705 11:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:23.705 11:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:23.705 11:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:23.705 11:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:23.705 11:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:23.705 "name": "Existed_Raid", 00:19:23.705 "uuid": "fc366107-c1d7-4593-9fcd-8234926b754f", 00:19:23.705 "strip_size_kb": 64, 00:19:23.705 "state": "configuring", 00:19:23.705 "raid_level": "concat", 00:19:23.705 "superblock": true, 00:19:23.705 "num_base_bdevs": 3, 00:19:23.705 "num_base_bdevs_discovered": 2, 00:19:23.705 "num_base_bdevs_operational": 3, 00:19:23.705 "base_bdevs_list": [ 00:19:23.705 { 00:19:23.705 "name": "BaseBdev1", 00:19:23.705 "uuid": "351e039d-d5bc-42d4-954a-a64711ab3a76", 00:19:23.705 "is_configured": true, 00:19:23.705 "data_offset": 2048, 00:19:23.705 "data_size": 63488 00:19:23.705 }, 00:19:23.705 { 00:19:23.705 "name": null, 00:19:23.705 "uuid": "1f29e4d5-5a88-4979-a850-86f592126679", 00:19:23.705 "is_configured": false, 00:19:23.705 "data_offset": 2048, 00:19:23.705 "data_size": 63488 00:19:23.705 }, 00:19:23.705 { 
00:19:23.705 "name": "BaseBdev3", 00:19:23.705 "uuid": "07a95b77-1038-4f97-8c54-9040266679d9", 00:19:23.705 "is_configured": true, 00:19:23.705 "data_offset": 2048, 00:19:23.705 "data_size": 63488 00:19:23.705 } 00:19:23.705 ] 00:19:23.705 }' 00:19:23.705 11:01:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:23.705 11:01:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.271 11:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:24.271 11:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:24.528 11:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:19:24.528 11:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:19:24.786 [2024-07-25 11:01:31.820159] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:24.786 11:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:24.786 11:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:24.786 11:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:24.786 11:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:24.786 11:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:24.786 11:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:24.786 11:01:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:24.786 11:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:24.786 11:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:24.786 11:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:24.786 11:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:24.786 11:01:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:25.043 11:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:25.043 "name": "Existed_Raid", 00:19:25.043 "uuid": "fc366107-c1d7-4593-9fcd-8234926b754f", 00:19:25.043 "strip_size_kb": 64, 00:19:25.043 "state": "configuring", 00:19:25.043 "raid_level": "concat", 00:19:25.043 "superblock": true, 00:19:25.043 "num_base_bdevs": 3, 00:19:25.043 "num_base_bdevs_discovered": 1, 00:19:25.043 "num_base_bdevs_operational": 3, 00:19:25.043 "base_bdevs_list": [ 00:19:25.043 { 00:19:25.043 "name": "BaseBdev1", 00:19:25.043 "uuid": "351e039d-d5bc-42d4-954a-a64711ab3a76", 00:19:25.043 "is_configured": true, 00:19:25.043 "data_offset": 2048, 00:19:25.043 "data_size": 63488 00:19:25.043 }, 00:19:25.043 { 00:19:25.043 "name": null, 00:19:25.043 "uuid": "1f29e4d5-5a88-4979-a850-86f592126679", 00:19:25.043 "is_configured": false, 00:19:25.043 "data_offset": 2048, 00:19:25.043 "data_size": 63488 00:19:25.043 }, 00:19:25.043 { 00:19:25.043 "name": null, 00:19:25.043 "uuid": "07a95b77-1038-4f97-8c54-9040266679d9", 00:19:25.043 "is_configured": false, 00:19:25.043 "data_offset": 2048, 00:19:25.043 "data_size": 63488 00:19:25.043 } 00:19:25.043 ] 00:19:25.043 }' 00:19:25.043 11:01:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:25.043 11:01:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.608 11:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:25.608 11:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:25.866 11:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:19:25.866 11:01:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:26.125 [2024-07-25 11:01:33.055509] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:26.125 11:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:26.125 11:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:26.125 11:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:26.125 11:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:26.125 11:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:26.125 11:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:26.125 11:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:26.125 11:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:26.125 11:01:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:26.125 11:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:26.125 11:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:26.125 11:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:26.382 11:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:26.382 "name": "Existed_Raid", 00:19:26.382 "uuid": "fc366107-c1d7-4593-9fcd-8234926b754f", 00:19:26.382 "strip_size_kb": 64, 00:19:26.382 "state": "configuring", 00:19:26.382 "raid_level": "concat", 00:19:26.382 "superblock": true, 00:19:26.382 "num_base_bdevs": 3, 00:19:26.382 "num_base_bdevs_discovered": 2, 00:19:26.382 "num_base_bdevs_operational": 3, 00:19:26.382 "base_bdevs_list": [ 00:19:26.382 { 00:19:26.382 "name": "BaseBdev1", 00:19:26.382 "uuid": "351e039d-d5bc-42d4-954a-a64711ab3a76", 00:19:26.382 "is_configured": true, 00:19:26.382 "data_offset": 2048, 00:19:26.382 "data_size": 63488 00:19:26.382 }, 00:19:26.382 { 00:19:26.382 "name": null, 00:19:26.382 "uuid": "1f29e4d5-5a88-4979-a850-86f592126679", 00:19:26.382 "is_configured": false, 00:19:26.382 "data_offset": 2048, 00:19:26.382 "data_size": 63488 00:19:26.382 }, 00:19:26.382 { 00:19:26.382 "name": "BaseBdev3", 00:19:26.382 "uuid": "07a95b77-1038-4f97-8c54-9040266679d9", 00:19:26.382 "is_configured": true, 00:19:26.382 "data_offset": 2048, 00:19:26.382 "data_size": 63488 00:19:26.382 } 00:19:26.382 ] 00:19:26.382 }' 00:19:26.382 11:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:26.382 11:01:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:26.949 11:01:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:26.949 11:01:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:27.209 11:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:19:27.209 11:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:27.209 [2024-07-25 11:01:34.314924] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:27.468 11:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:27.468 11:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:27.468 11:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:27.468 11:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:27.468 11:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:27.468 11:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:27.468 11:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:27.468 11:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:27.468 11:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:27.468 11:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:27.468 11:01:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:27.468 11:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:27.725 11:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:27.725 "name": "Existed_Raid", 00:19:27.725 "uuid": "fc366107-c1d7-4593-9fcd-8234926b754f", 00:19:27.725 "strip_size_kb": 64, 00:19:27.725 "state": "configuring", 00:19:27.725 "raid_level": "concat", 00:19:27.725 "superblock": true, 00:19:27.725 "num_base_bdevs": 3, 00:19:27.725 "num_base_bdevs_discovered": 1, 00:19:27.725 "num_base_bdevs_operational": 3, 00:19:27.725 "base_bdevs_list": [ 00:19:27.725 { 00:19:27.725 "name": null, 00:19:27.725 "uuid": "351e039d-d5bc-42d4-954a-a64711ab3a76", 00:19:27.726 "is_configured": false, 00:19:27.726 "data_offset": 2048, 00:19:27.726 "data_size": 63488 00:19:27.726 }, 00:19:27.726 { 00:19:27.726 "name": null, 00:19:27.726 "uuid": "1f29e4d5-5a88-4979-a850-86f592126679", 00:19:27.726 "is_configured": false, 00:19:27.726 "data_offset": 2048, 00:19:27.726 "data_size": 63488 00:19:27.726 }, 00:19:27.726 { 00:19:27.726 "name": "BaseBdev3", 00:19:27.726 "uuid": "07a95b77-1038-4f97-8c54-9040266679d9", 00:19:27.726 "is_configured": true, 00:19:27.726 "data_offset": 2048, 00:19:27.726 "data_size": 63488 00:19:27.726 } 00:19:27.726 ] 00:19:27.726 }' 00:19:27.726 11:01:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:27.726 11:01:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.289 11:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:28.289 11:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:19:28.545 11:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:19:28.545 11:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:28.803 [2024-07-25 11:01:35.684520] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:28.803 11:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:28.803 11:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:28.803 11:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:28.803 11:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:28.803 11:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:28.803 11:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:28.803 11:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:28.803 11:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:28.803 11:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:28.803 11:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:28.803 11:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:28.803 11:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:19:29.065 11:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:29.065 "name": "Existed_Raid", 00:19:29.065 "uuid": "fc366107-c1d7-4593-9fcd-8234926b754f", 00:19:29.065 "strip_size_kb": 64, 00:19:29.065 "state": "configuring", 00:19:29.065 "raid_level": "concat", 00:19:29.065 "superblock": true, 00:19:29.065 "num_base_bdevs": 3, 00:19:29.065 "num_base_bdevs_discovered": 2, 00:19:29.065 "num_base_bdevs_operational": 3, 00:19:29.065 "base_bdevs_list": [ 00:19:29.065 { 00:19:29.065 "name": null, 00:19:29.065 "uuid": "351e039d-d5bc-42d4-954a-a64711ab3a76", 00:19:29.065 "is_configured": false, 00:19:29.065 "data_offset": 2048, 00:19:29.065 "data_size": 63488 00:19:29.065 }, 00:19:29.065 { 00:19:29.065 "name": "BaseBdev2", 00:19:29.065 "uuid": "1f29e4d5-5a88-4979-a850-86f592126679", 00:19:29.065 "is_configured": true, 00:19:29.065 "data_offset": 2048, 00:19:29.065 "data_size": 63488 00:19:29.065 }, 00:19:29.065 { 00:19:29.065 "name": "BaseBdev3", 00:19:29.065 "uuid": "07a95b77-1038-4f97-8c54-9040266679d9", 00:19:29.065 "is_configured": true, 00:19:29.065 "data_offset": 2048, 00:19:29.066 "data_size": 63488 00:19:29.066 } 00:19:29.066 ] 00:19:29.066 }' 00:19:29.066 11:01:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:29.066 11:01:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:29.632 11:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.632 11:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:29.632 11:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:19:29.632 11:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.632 11:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:29.890 11:01:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 351e039d-d5bc-42d4-954a-a64711ab3a76 00:19:30.148 [2024-07-25 11:01:37.199563] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:30.148 [2024-07-25 11:01:37.199833] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:30.148 [2024-07-25 11:01:37.199857] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:30.148 [2024-07-25 11:01:37.200215] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:19:30.148 [2024-07-25 11:01:37.200449] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:30.148 [2024-07-25 11:01:37.200464] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:19:30.148 NewBaseBdev 00:19:30.148 [2024-07-25 11:01:37.200652] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:30.148 11:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:19:30.148 11:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:19:30.148 11:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:30.148 11:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:19:30.148 11:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
[[ -z '' ]] 00:19:30.148 11:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:30.148 11:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:30.406 11:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:30.664 [ 00:19:30.664 { 00:19:30.664 "name": "NewBaseBdev", 00:19:30.664 "aliases": [ 00:19:30.664 "351e039d-d5bc-42d4-954a-a64711ab3a76" 00:19:30.664 ], 00:19:30.664 "product_name": "Malloc disk", 00:19:30.664 "block_size": 512, 00:19:30.664 "num_blocks": 65536, 00:19:30.664 "uuid": "351e039d-d5bc-42d4-954a-a64711ab3a76", 00:19:30.664 "assigned_rate_limits": { 00:19:30.664 "rw_ios_per_sec": 0, 00:19:30.664 "rw_mbytes_per_sec": 0, 00:19:30.664 "r_mbytes_per_sec": 0, 00:19:30.664 "w_mbytes_per_sec": 0 00:19:30.664 }, 00:19:30.664 "claimed": true, 00:19:30.664 "claim_type": "exclusive_write", 00:19:30.664 "zoned": false, 00:19:30.664 "supported_io_types": { 00:19:30.664 "read": true, 00:19:30.664 "write": true, 00:19:30.664 "unmap": true, 00:19:30.664 "flush": true, 00:19:30.664 "reset": true, 00:19:30.665 "nvme_admin": false, 00:19:30.665 "nvme_io": false, 00:19:30.665 "nvme_io_md": false, 00:19:30.665 "write_zeroes": true, 00:19:30.665 "zcopy": true, 00:19:30.665 "get_zone_info": false, 00:19:30.665 "zone_management": false, 00:19:30.665 "zone_append": false, 00:19:30.665 "compare": false, 00:19:30.665 "compare_and_write": false, 00:19:30.665 "abort": true, 00:19:30.665 "seek_hole": false, 00:19:30.665 "seek_data": false, 00:19:30.665 "copy": true, 00:19:30.665 "nvme_iov_md": false 00:19:30.665 }, 00:19:30.665 "memory_domains": [ 00:19:30.665 { 00:19:30.665 "dma_device_id": "system", 00:19:30.665 
"dma_device_type": 1 00:19:30.665 }, 00:19:30.665 { 00:19:30.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.665 "dma_device_type": 2 00:19:30.665 } 00:19:30.665 ], 00:19:30.665 "driver_specific": {} 00:19:30.665 } 00:19:30.665 ] 00:19:30.665 11:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:19:30.665 11:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:19:30.665 11:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:30.665 11:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:30.665 11:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:30.665 11:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:30.665 11:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:30.665 11:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:30.665 11:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:30.665 11:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:30.665 11:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:30.665 11:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.665 11:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:30.923 11:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:30.923 "name": 
"Existed_Raid", 00:19:30.923 "uuid": "fc366107-c1d7-4593-9fcd-8234926b754f", 00:19:30.923 "strip_size_kb": 64, 00:19:30.923 "state": "online", 00:19:30.923 "raid_level": "concat", 00:19:30.923 "superblock": true, 00:19:30.923 "num_base_bdevs": 3, 00:19:30.923 "num_base_bdevs_discovered": 3, 00:19:30.923 "num_base_bdevs_operational": 3, 00:19:30.923 "base_bdevs_list": [ 00:19:30.923 { 00:19:30.923 "name": "NewBaseBdev", 00:19:30.923 "uuid": "351e039d-d5bc-42d4-954a-a64711ab3a76", 00:19:30.923 "is_configured": true, 00:19:30.923 "data_offset": 2048, 00:19:30.923 "data_size": 63488 00:19:30.923 }, 00:19:30.923 { 00:19:30.923 "name": "BaseBdev2", 00:19:30.923 "uuid": "1f29e4d5-5a88-4979-a850-86f592126679", 00:19:30.923 "is_configured": true, 00:19:30.923 "data_offset": 2048, 00:19:30.923 "data_size": 63488 00:19:30.923 }, 00:19:30.923 { 00:19:30.923 "name": "BaseBdev3", 00:19:30.923 "uuid": "07a95b77-1038-4f97-8c54-9040266679d9", 00:19:30.923 "is_configured": true, 00:19:30.923 "data_offset": 2048, 00:19:30.923 "data_size": 63488 00:19:30.923 } 00:19:30.923 ] 00:19:30.923 }' 00:19:30.923 11:01:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:30.923 11:01:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.488 11:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:19:31.488 11:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:31.488 11:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:31.488 11:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:31.488 11:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:31.488 11:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:19:31.488 
11:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:31.488 11:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:31.746 [2024-07-25 11:01:38.684034] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:31.746 11:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:31.746 "name": "Existed_Raid", 00:19:31.746 "aliases": [ 00:19:31.746 "fc366107-c1d7-4593-9fcd-8234926b754f" 00:19:31.746 ], 00:19:31.746 "product_name": "Raid Volume", 00:19:31.746 "block_size": 512, 00:19:31.746 "num_blocks": 190464, 00:19:31.746 "uuid": "fc366107-c1d7-4593-9fcd-8234926b754f", 00:19:31.746 "assigned_rate_limits": { 00:19:31.746 "rw_ios_per_sec": 0, 00:19:31.746 "rw_mbytes_per_sec": 0, 00:19:31.746 "r_mbytes_per_sec": 0, 00:19:31.746 "w_mbytes_per_sec": 0 00:19:31.746 }, 00:19:31.746 "claimed": false, 00:19:31.746 "zoned": false, 00:19:31.746 "supported_io_types": { 00:19:31.746 "read": true, 00:19:31.746 "write": true, 00:19:31.746 "unmap": true, 00:19:31.746 "flush": true, 00:19:31.746 "reset": true, 00:19:31.746 "nvme_admin": false, 00:19:31.746 "nvme_io": false, 00:19:31.746 "nvme_io_md": false, 00:19:31.746 "write_zeroes": true, 00:19:31.746 "zcopy": false, 00:19:31.746 "get_zone_info": false, 00:19:31.746 "zone_management": false, 00:19:31.746 "zone_append": false, 00:19:31.746 "compare": false, 00:19:31.746 "compare_and_write": false, 00:19:31.746 "abort": false, 00:19:31.746 "seek_hole": false, 00:19:31.746 "seek_data": false, 00:19:31.746 "copy": false, 00:19:31.746 "nvme_iov_md": false 00:19:31.746 }, 00:19:31.746 "memory_domains": [ 00:19:31.746 { 00:19:31.746 "dma_device_id": "system", 00:19:31.746 "dma_device_type": 1 00:19:31.746 }, 00:19:31.746 { 00:19:31.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:19:31.746 "dma_device_type": 2 00:19:31.746 }, 00:19:31.746 { 00:19:31.746 "dma_device_id": "system", 00:19:31.746 "dma_device_type": 1 00:19:31.746 }, 00:19:31.746 { 00:19:31.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.746 "dma_device_type": 2 00:19:31.746 }, 00:19:31.746 { 00:19:31.746 "dma_device_id": "system", 00:19:31.746 "dma_device_type": 1 00:19:31.746 }, 00:19:31.746 { 00:19:31.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.746 "dma_device_type": 2 00:19:31.746 } 00:19:31.746 ], 00:19:31.746 "driver_specific": { 00:19:31.746 "raid": { 00:19:31.746 "uuid": "fc366107-c1d7-4593-9fcd-8234926b754f", 00:19:31.746 "strip_size_kb": 64, 00:19:31.746 "state": "online", 00:19:31.746 "raid_level": "concat", 00:19:31.746 "superblock": true, 00:19:31.746 "num_base_bdevs": 3, 00:19:31.746 "num_base_bdevs_discovered": 3, 00:19:31.746 "num_base_bdevs_operational": 3, 00:19:31.746 "base_bdevs_list": [ 00:19:31.746 { 00:19:31.746 "name": "NewBaseBdev", 00:19:31.746 "uuid": "351e039d-d5bc-42d4-954a-a64711ab3a76", 00:19:31.746 "is_configured": true, 00:19:31.746 "data_offset": 2048, 00:19:31.746 "data_size": 63488 00:19:31.746 }, 00:19:31.746 { 00:19:31.746 "name": "BaseBdev2", 00:19:31.746 "uuid": "1f29e4d5-5a88-4979-a850-86f592126679", 00:19:31.746 "is_configured": true, 00:19:31.746 "data_offset": 2048, 00:19:31.746 "data_size": 63488 00:19:31.746 }, 00:19:31.746 { 00:19:31.746 "name": "BaseBdev3", 00:19:31.746 "uuid": "07a95b77-1038-4f97-8c54-9040266679d9", 00:19:31.746 "is_configured": true, 00:19:31.746 "data_offset": 2048, 00:19:31.746 "data_size": 63488 00:19:31.746 } 00:19:31.746 ] 00:19:31.746 } 00:19:31.746 } 00:19:31.746 }' 00:19:31.746 11:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:31.746 11:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:19:31.746 BaseBdev2 
00:19:31.746 BaseBdev3' 00:19:31.746 11:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:31.746 11:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:19:31.746 11:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:32.004 11:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:32.004 "name": "NewBaseBdev", 00:19:32.004 "aliases": [ 00:19:32.004 "351e039d-d5bc-42d4-954a-a64711ab3a76" 00:19:32.004 ], 00:19:32.004 "product_name": "Malloc disk", 00:19:32.004 "block_size": 512, 00:19:32.004 "num_blocks": 65536, 00:19:32.004 "uuid": "351e039d-d5bc-42d4-954a-a64711ab3a76", 00:19:32.004 "assigned_rate_limits": { 00:19:32.004 "rw_ios_per_sec": 0, 00:19:32.004 "rw_mbytes_per_sec": 0, 00:19:32.004 "r_mbytes_per_sec": 0, 00:19:32.004 "w_mbytes_per_sec": 0 00:19:32.004 }, 00:19:32.004 "claimed": true, 00:19:32.004 "claim_type": "exclusive_write", 00:19:32.004 "zoned": false, 00:19:32.004 "supported_io_types": { 00:19:32.004 "read": true, 00:19:32.004 "write": true, 00:19:32.004 "unmap": true, 00:19:32.004 "flush": true, 00:19:32.004 "reset": true, 00:19:32.004 "nvme_admin": false, 00:19:32.004 "nvme_io": false, 00:19:32.004 "nvme_io_md": false, 00:19:32.004 "write_zeroes": true, 00:19:32.004 "zcopy": true, 00:19:32.004 "get_zone_info": false, 00:19:32.004 "zone_management": false, 00:19:32.004 "zone_append": false, 00:19:32.004 "compare": false, 00:19:32.004 "compare_and_write": false, 00:19:32.004 "abort": true, 00:19:32.004 "seek_hole": false, 00:19:32.004 "seek_data": false, 00:19:32.004 "copy": true, 00:19:32.004 "nvme_iov_md": false 00:19:32.004 }, 00:19:32.004 "memory_domains": [ 00:19:32.004 { 00:19:32.004 "dma_device_id": "system", 00:19:32.004 "dma_device_type": 1 00:19:32.004 }, 
00:19:32.004 { 00:19:32.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:32.004 "dma_device_type": 2 00:19:32.004 } 00:19:32.004 ], 00:19:32.004 "driver_specific": {} 00:19:32.004 }' 00:19:32.004 11:01:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:32.004 11:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:32.004 11:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:32.004 11:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:32.004 11:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:32.263 11:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:32.263 11:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:32.263 11:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:32.263 11:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:32.263 11:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:32.263 11:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:32.263 11:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:32.263 11:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:32.263 11:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:32.263 11:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:32.521 11:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 
00:19:32.521 "name": "BaseBdev2", 00:19:32.521 "aliases": [ 00:19:32.521 "1f29e4d5-5a88-4979-a850-86f592126679" 00:19:32.521 ], 00:19:32.521 "product_name": "Malloc disk", 00:19:32.521 "block_size": 512, 00:19:32.521 "num_blocks": 65536, 00:19:32.521 "uuid": "1f29e4d5-5a88-4979-a850-86f592126679", 00:19:32.521 "assigned_rate_limits": { 00:19:32.521 "rw_ios_per_sec": 0, 00:19:32.521 "rw_mbytes_per_sec": 0, 00:19:32.521 "r_mbytes_per_sec": 0, 00:19:32.521 "w_mbytes_per_sec": 0 00:19:32.521 }, 00:19:32.521 "claimed": true, 00:19:32.521 "claim_type": "exclusive_write", 00:19:32.521 "zoned": false, 00:19:32.521 "supported_io_types": { 00:19:32.521 "read": true, 00:19:32.521 "write": true, 00:19:32.521 "unmap": true, 00:19:32.521 "flush": true, 00:19:32.521 "reset": true, 00:19:32.521 "nvme_admin": false, 00:19:32.521 "nvme_io": false, 00:19:32.521 "nvme_io_md": false, 00:19:32.521 "write_zeroes": true, 00:19:32.521 "zcopy": true, 00:19:32.521 "get_zone_info": false, 00:19:32.521 "zone_management": false, 00:19:32.522 "zone_append": false, 00:19:32.522 "compare": false, 00:19:32.522 "compare_and_write": false, 00:19:32.522 "abort": true, 00:19:32.522 "seek_hole": false, 00:19:32.522 "seek_data": false, 00:19:32.522 "copy": true, 00:19:32.522 "nvme_iov_md": false 00:19:32.522 }, 00:19:32.522 "memory_domains": [ 00:19:32.522 { 00:19:32.522 "dma_device_id": "system", 00:19:32.522 "dma_device_type": 1 00:19:32.522 }, 00:19:32.522 { 00:19:32.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:32.522 "dma_device_type": 2 00:19:32.522 } 00:19:32.522 ], 00:19:32.522 "driver_specific": {} 00:19:32.522 }' 00:19:32.522 11:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:32.522 11:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:32.522 11:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:32.522 11:01:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:32.779 11:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:32.779 11:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:32.779 11:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:32.779 11:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:32.779 11:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:32.779 11:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:32.779 11:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:32.779 11:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:32.779 11:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:32.779 11:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:32.779 11:01:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:33.036 11:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:33.036 "name": "BaseBdev3", 00:19:33.036 "aliases": [ 00:19:33.036 "07a95b77-1038-4f97-8c54-9040266679d9" 00:19:33.036 ], 00:19:33.036 "product_name": "Malloc disk", 00:19:33.036 "block_size": 512, 00:19:33.036 "num_blocks": 65536, 00:19:33.036 "uuid": "07a95b77-1038-4f97-8c54-9040266679d9", 00:19:33.036 "assigned_rate_limits": { 00:19:33.036 "rw_ios_per_sec": 0, 00:19:33.036 "rw_mbytes_per_sec": 0, 00:19:33.036 "r_mbytes_per_sec": 0, 00:19:33.036 "w_mbytes_per_sec": 0 00:19:33.036 }, 00:19:33.036 "claimed": true, 00:19:33.036 "claim_type": "exclusive_write", 
00:19:33.036 "zoned": false, 00:19:33.036 "supported_io_types": { 00:19:33.036 "read": true, 00:19:33.036 "write": true, 00:19:33.036 "unmap": true, 00:19:33.036 "flush": true, 00:19:33.036 "reset": true, 00:19:33.036 "nvme_admin": false, 00:19:33.036 "nvme_io": false, 00:19:33.036 "nvme_io_md": false, 00:19:33.036 "write_zeroes": true, 00:19:33.036 "zcopy": true, 00:19:33.036 "get_zone_info": false, 00:19:33.036 "zone_management": false, 00:19:33.036 "zone_append": false, 00:19:33.037 "compare": false, 00:19:33.037 "compare_and_write": false, 00:19:33.037 "abort": true, 00:19:33.037 "seek_hole": false, 00:19:33.037 "seek_data": false, 00:19:33.037 "copy": true, 00:19:33.037 "nvme_iov_md": false 00:19:33.037 }, 00:19:33.037 "memory_domains": [ 00:19:33.037 { 00:19:33.037 "dma_device_id": "system", 00:19:33.037 "dma_device_type": 1 00:19:33.037 }, 00:19:33.037 { 00:19:33.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.037 "dma_device_type": 2 00:19:33.037 } 00:19:33.037 ], 00:19:33.037 "driver_specific": {} 00:19:33.037 }' 00:19:33.037 11:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:33.327 11:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:33.327 11:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:33.327 11:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:33.327 11:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:33.327 11:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:33.327 11:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:33.327 11:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:33.327 11:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:19:33.327 11:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:33.621 11:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:33.621 11:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:33.621 11:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:33.621 [2024-07-25 11:01:40.673006] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:33.621 [2024-07-25 11:01:40.673042] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:33.621 [2024-07-25 11:01:40.673136] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:33.621 [2024-07-25 11:01:40.673213] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:33.621 [2024-07-25 11:01:40.673236] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:19:33.621 11:01:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 3604036 00:19:33.621 11:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 3604036 ']' 00:19:33.621 11:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 3604036 00:19:33.621 11:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:19:33.621 11:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:33.621 11:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3604036 00:19:33.880 11:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:19:33.880 11:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:33.880 11:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3604036' 00:19:33.880 killing process with pid 3604036 00:19:33.880 11:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 3604036 00:19:33.880 [2024-07-25 11:01:40.749739] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:33.880 11:01:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 3604036 00:19:34.138 [2024-07-25 11:01:41.051394] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:36.041 11:01:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:19:36.041 00:19:36.041 real 0m29.636s 00:19:36.041 user 0m51.773s 00:19:36.041 sys 0m5.149s 00:19:36.041 11:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:36.041 11:01:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:36.041 ************************************ 00:19:36.041 END TEST raid_state_function_test_sb 00:19:36.041 ************************************ 00:19:36.041 11:01:42 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:19:36.041 11:01:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:19:36.041 11:01:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:36.041 11:01:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:36.041 ************************************ 00:19:36.041 START TEST raid_superblock_test 00:19:36.041 ************************************ 00:19:36.041 11:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:19:36.041 11:01:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=concat 00:19:36.041 11:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=3 00:19:36.041 11:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:19:36.041 11:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:19:36.041 11:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:19:36.041 11:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:19:36.041 11:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:19:36.041 11:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:19:36.041 11:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:19:36.041 11:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:19:36.041 11:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:19:36.041 11:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:19:36.041 11:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:19:36.042 11:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' concat '!=' raid1 ']' 00:19:36.042 11:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:19:36.042 11:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:19:36.042 11:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=3609506 00:19:36.042 11:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 3609506 /var/tmp/spdk-raid.sock 00:19:36.042 11:01:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:36.042 11:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 3609506 ']' 00:19:36.042 11:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:36.042 11:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:36.042 11:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:36.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:36.042 11:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:36.042 11:01:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.042 [2024-07-25 11:01:42.965921] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:19:36.042 [2024-07-25 11:01:42.966013] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3609506 ] 00:19:36.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:36.042 EAL: Requested device 0000:3d:01.0 cannot be used 00:19:36.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:36.042 EAL: Requested device 0000:3d:01.1 cannot be used 00:19:36.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:36.042 EAL: Requested device 0000:3d:01.2 cannot be used 00:19:36.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:36.042 EAL: Requested device 0000:3d:01.3 cannot be used 00:19:36.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:36.042 EAL: Requested device 0000:3d:01.4 cannot be used 00:19:36.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:36.042 EAL: Requested device 0000:3d:01.5 cannot be used 00:19:36.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:36.042 EAL: Requested device 0000:3d:01.6 cannot be used 00:19:36.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:36.042 EAL: Requested device 0000:3d:01.7 cannot be used 00:19:36.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:36.042 EAL: Requested device 0000:3d:02.0 cannot be used 00:19:36.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:36.042 EAL: Requested device 0000:3d:02.1 cannot be used 00:19:36.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:36.042 EAL: Requested device 0000:3d:02.2 cannot be used 00:19:36.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:36.042 EAL: Requested device 0000:3d:02.3 cannot be used 
00:19:36.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:36.042 EAL: Requested device 0000:3d:02.4 cannot be used 00:19:36.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:36.042 EAL: Requested device 0000:3d:02.5 cannot be used 00:19:36.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:36.042 EAL: Requested device 0000:3d:02.6 cannot be used 00:19:36.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:36.042 EAL: Requested device 0000:3d:02.7 cannot be used 00:19:36.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:36.042 EAL: Requested device 0000:3f:01.0 cannot be used 00:19:36.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:36.042 EAL: Requested device 0000:3f:01.1 cannot be used 00:19:36.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:36.042 EAL: Requested device 0000:3f:01.2 cannot be used 00:19:36.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:36.042 EAL: Requested device 0000:3f:01.3 cannot be used 00:19:36.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:36.042 EAL: Requested device 0000:3f:01.4 cannot be used 00:19:36.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:36.042 EAL: Requested device 0000:3f:01.5 cannot be used 00:19:36.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:36.042 EAL: Requested device 0000:3f:01.6 cannot be used 00:19:36.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:36.042 EAL: Requested device 0000:3f:01.7 cannot be used 00:19:36.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:36.042 EAL: Requested device 0000:3f:02.0 cannot be used 00:19:36.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:36.042 EAL: Requested device 0000:3f:02.1 cannot be used 00:19:36.042 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:36.042 EAL: Requested device 0000:3f:02.2 cannot be used 00:19:36.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:36.042 EAL: Requested device 0000:3f:02.3 cannot be used 00:19:36.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:36.042 EAL: Requested device 0000:3f:02.4 cannot be used 00:19:36.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:36.042 EAL: Requested device 0000:3f:02.5 cannot be used 00:19:36.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:36.042 EAL: Requested device 0000:3f:02.6 cannot be used 00:19:36.042 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:36.042 EAL: Requested device 0000:3f:02.7 cannot be used 00:19:36.299 [2024-07-25 11:01:43.165119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.556 [2024-07-25 11:01:43.449394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.814 [2024-07-25 11:01:43.805805] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:36.814 [2024-07-25 11:01:43.805842] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:37.073 11:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:37.073 11:01:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:19:37.073 11:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:19:37.073 11:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:19:37.073 11:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:19:37.073 11:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:19:37.073 11:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:37.073 11:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:37.073 11:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:19:37.073 11:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:37.073 11:01:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:37.330 malloc1 00:19:37.330 11:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:37.589 [2024-07-25 11:01:44.476195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:37.589 [2024-07-25 11:01:44.476257] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:37.589 [2024-07-25 11:01:44.476287] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:19:37.589 [2024-07-25 11:01:44.476304] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:37.589 [2024-07-25 11:01:44.479009] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:37.589 [2024-07-25 11:01:44.479042] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:37.589 pt1 00:19:37.589 11:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:19:37.589 11:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:19:37.589 11:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:19:37.589 11:01:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:19:37.589 11:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:37.589 11:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:37.589 11:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:19:37.589 11:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:37.589 11:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:37.850 malloc2 00:19:37.850 11:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:38.110 [2024-07-25 11:01:44.975984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:38.110 [2024-07-25 11:01:44.976048] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:38.110 [2024-07-25 11:01:44.976075] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:19:38.110 [2024-07-25 11:01:44.976090] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:38.110 [2024-07-25 11:01:44.978836] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:38.110 [2024-07-25 11:01:44.978878] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:38.110 pt2 00:19:38.110 11:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:19:38.110 11:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:19:38.110 11:01:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:19:38.110 11:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:19:38.110 11:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:38.110 11:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:38.110 11:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:19:38.110 11:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:38.110 11:01:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:19:38.368 malloc3 00:19:38.368 11:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:38.368 [2024-07-25 11:01:45.472017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:38.368 [2024-07-25 11:01:45.472073] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:38.368 [2024-07-25 11:01:45.472102] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:19:38.368 [2024-07-25 11:01:45.472118] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:38.368 [2024-07-25 11:01:45.474804] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:38.368 [2024-07-25 11:01:45.474835] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:38.368 pt3 00:19:38.625 11:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 
00:19:38.625 11:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:19:38.625 11:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:19:38.625 [2024-07-25 11:01:45.688642] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:38.625 [2024-07-25 11:01:45.690957] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:38.625 [2024-07-25 11:01:45.691039] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:38.625 [2024-07-25 11:01:45.691259] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:38.625 [2024-07-25 11:01:45.691280] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:38.625 [2024-07-25 11:01:45.691611] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:19:38.625 [2024-07-25 11:01:45.691862] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:38.625 [2024-07-25 11:01:45.691880] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:38.625 [2024-07-25 11:01:45.692101] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:38.625 11:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:38.625 11:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:38.625 11:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:38.625 11:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:38.625 11:01:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:38.625 11:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:38.625 11:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:38.625 11:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:38.625 11:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:38.625 11:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:38.625 11:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.625 11:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.883 11:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:38.883 "name": "raid_bdev1", 00:19:38.883 "uuid": "9494f477-28ae-4eb9-88e2-b4541c013b90", 00:19:38.883 "strip_size_kb": 64, 00:19:38.883 "state": "online", 00:19:38.883 "raid_level": "concat", 00:19:38.883 "superblock": true, 00:19:38.883 "num_base_bdevs": 3, 00:19:38.883 "num_base_bdevs_discovered": 3, 00:19:38.883 "num_base_bdevs_operational": 3, 00:19:38.883 "base_bdevs_list": [ 00:19:38.883 { 00:19:38.883 "name": "pt1", 00:19:38.883 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:38.883 "is_configured": true, 00:19:38.883 "data_offset": 2048, 00:19:38.883 "data_size": 63488 00:19:38.883 }, 00:19:38.883 { 00:19:38.883 "name": "pt2", 00:19:38.883 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:38.883 "is_configured": true, 00:19:38.883 "data_offset": 2048, 00:19:38.883 "data_size": 63488 00:19:38.883 }, 00:19:38.883 { 00:19:38.883 "name": "pt3", 00:19:38.883 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:38.883 
"is_configured": true, 00:19:38.883 "data_offset": 2048, 00:19:38.883 "data_size": 63488 00:19:38.883 } 00:19:38.883 ] 00:19:38.883 }' 00:19:38.883 11:01:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:38.883 11:01:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.451 11:01:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:19:39.451 11:01:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:19:39.451 11:01:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:39.451 11:01:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:39.451 11:01:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:39.451 11:01:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:39.451 11:01:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:39.451 11:01:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:39.711 [2024-07-25 11:01:46.723779] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:39.711 11:01:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:39.711 "name": "raid_bdev1", 00:19:39.711 "aliases": [ 00:19:39.711 "9494f477-28ae-4eb9-88e2-b4541c013b90" 00:19:39.711 ], 00:19:39.711 "product_name": "Raid Volume", 00:19:39.711 "block_size": 512, 00:19:39.711 "num_blocks": 190464, 00:19:39.711 "uuid": "9494f477-28ae-4eb9-88e2-b4541c013b90", 00:19:39.711 "assigned_rate_limits": { 00:19:39.711 "rw_ios_per_sec": 0, 00:19:39.711 "rw_mbytes_per_sec": 0, 00:19:39.711 "r_mbytes_per_sec": 0, 00:19:39.711 "w_mbytes_per_sec": 0 00:19:39.711 }, 
00:19:39.711 "claimed": false, 00:19:39.711 "zoned": false, 00:19:39.711 "supported_io_types": { 00:19:39.711 "read": true, 00:19:39.711 "write": true, 00:19:39.711 "unmap": true, 00:19:39.711 "flush": true, 00:19:39.711 "reset": true, 00:19:39.711 "nvme_admin": false, 00:19:39.711 "nvme_io": false, 00:19:39.711 "nvme_io_md": false, 00:19:39.711 "write_zeroes": true, 00:19:39.711 "zcopy": false, 00:19:39.711 "get_zone_info": false, 00:19:39.711 "zone_management": false, 00:19:39.711 "zone_append": false, 00:19:39.711 "compare": false, 00:19:39.711 "compare_and_write": false, 00:19:39.711 "abort": false, 00:19:39.711 "seek_hole": false, 00:19:39.711 "seek_data": false, 00:19:39.711 "copy": false, 00:19:39.711 "nvme_iov_md": false 00:19:39.711 }, 00:19:39.711 "memory_domains": [ 00:19:39.711 { 00:19:39.711 "dma_device_id": "system", 00:19:39.711 "dma_device_type": 1 00:19:39.711 }, 00:19:39.711 { 00:19:39.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:39.711 "dma_device_type": 2 00:19:39.711 }, 00:19:39.711 { 00:19:39.711 "dma_device_id": "system", 00:19:39.711 "dma_device_type": 1 00:19:39.711 }, 00:19:39.711 { 00:19:39.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:39.711 "dma_device_type": 2 00:19:39.711 }, 00:19:39.711 { 00:19:39.711 "dma_device_id": "system", 00:19:39.711 "dma_device_type": 1 00:19:39.711 }, 00:19:39.711 { 00:19:39.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:39.711 "dma_device_type": 2 00:19:39.711 } 00:19:39.711 ], 00:19:39.711 "driver_specific": { 00:19:39.711 "raid": { 00:19:39.711 "uuid": "9494f477-28ae-4eb9-88e2-b4541c013b90", 00:19:39.711 "strip_size_kb": 64, 00:19:39.711 "state": "online", 00:19:39.711 "raid_level": "concat", 00:19:39.711 "superblock": true, 00:19:39.711 "num_base_bdevs": 3, 00:19:39.711 "num_base_bdevs_discovered": 3, 00:19:39.711 "num_base_bdevs_operational": 3, 00:19:39.711 "base_bdevs_list": [ 00:19:39.711 { 00:19:39.711 "name": "pt1", 00:19:39.711 "uuid": "00000000-0000-0000-0000-000000000001", 
00:19:39.711 "is_configured": true, 00:19:39.711 "data_offset": 2048, 00:19:39.711 "data_size": 63488 00:19:39.711 }, 00:19:39.711 { 00:19:39.711 "name": "pt2", 00:19:39.711 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:39.711 "is_configured": true, 00:19:39.711 "data_offset": 2048, 00:19:39.711 "data_size": 63488 00:19:39.711 }, 00:19:39.711 { 00:19:39.711 "name": "pt3", 00:19:39.711 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:39.711 "is_configured": true, 00:19:39.711 "data_offset": 2048, 00:19:39.711 "data_size": 63488 00:19:39.711 } 00:19:39.711 ] 00:19:39.711 } 00:19:39.711 } 00:19:39.711 }' 00:19:39.711 11:01:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:39.711 11:01:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:19:39.711 pt2 00:19:39.711 pt3' 00:19:39.711 11:01:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:39.711 11:01:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:39.711 11:01:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:39.969 11:01:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:39.969 "name": "pt1", 00:19:39.969 "aliases": [ 00:19:39.969 "00000000-0000-0000-0000-000000000001" 00:19:39.969 ], 00:19:39.969 "product_name": "passthru", 00:19:39.969 "block_size": 512, 00:19:39.969 "num_blocks": 65536, 00:19:39.969 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:39.969 "assigned_rate_limits": { 00:19:39.969 "rw_ios_per_sec": 0, 00:19:39.969 "rw_mbytes_per_sec": 0, 00:19:39.969 "r_mbytes_per_sec": 0, 00:19:39.969 "w_mbytes_per_sec": 0 00:19:39.969 }, 00:19:39.969 "claimed": true, 00:19:39.969 "claim_type": "exclusive_write", 
00:19:39.969 "zoned": false, 00:19:39.969 "supported_io_types": { 00:19:39.969 "read": true, 00:19:39.969 "write": true, 00:19:39.969 "unmap": true, 00:19:39.969 "flush": true, 00:19:39.969 "reset": true, 00:19:39.969 "nvme_admin": false, 00:19:39.969 "nvme_io": false, 00:19:39.969 "nvme_io_md": false, 00:19:39.969 "write_zeroes": true, 00:19:39.969 "zcopy": true, 00:19:39.969 "get_zone_info": false, 00:19:39.969 "zone_management": false, 00:19:39.969 "zone_append": false, 00:19:39.969 "compare": false, 00:19:39.969 "compare_and_write": false, 00:19:39.969 "abort": true, 00:19:39.969 "seek_hole": false, 00:19:39.969 "seek_data": false, 00:19:39.969 "copy": true, 00:19:39.969 "nvme_iov_md": false 00:19:39.969 }, 00:19:39.969 "memory_domains": [ 00:19:39.969 { 00:19:39.969 "dma_device_id": "system", 00:19:39.969 "dma_device_type": 1 00:19:39.969 }, 00:19:39.969 { 00:19:39.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:39.969 "dma_device_type": 2 00:19:39.969 } 00:19:39.969 ], 00:19:39.969 "driver_specific": { 00:19:39.969 "passthru": { 00:19:39.969 "name": "pt1", 00:19:39.969 "base_bdev_name": "malloc1" 00:19:39.969 } 00:19:39.969 } 00:19:39.969 }' 00:19:39.969 11:01:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:39.969 11:01:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:40.227 11:01:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:40.227 11:01:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:40.227 11:01:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:40.227 11:01:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:40.227 11:01:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:40.227 11:01:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:40.227 11:01:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:40.227 11:01:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:40.227 11:01:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:40.486 11:01:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:40.486 11:01:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:40.486 11:01:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:40.486 11:01:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:40.486 11:01:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:40.486 "name": "pt2", 00:19:40.486 "aliases": [ 00:19:40.486 "00000000-0000-0000-0000-000000000002" 00:19:40.486 ], 00:19:40.486 "product_name": "passthru", 00:19:40.486 "block_size": 512, 00:19:40.486 "num_blocks": 65536, 00:19:40.486 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:40.486 "assigned_rate_limits": { 00:19:40.486 "rw_ios_per_sec": 0, 00:19:40.486 "rw_mbytes_per_sec": 0, 00:19:40.486 "r_mbytes_per_sec": 0, 00:19:40.486 "w_mbytes_per_sec": 0 00:19:40.486 }, 00:19:40.486 "claimed": true, 00:19:40.486 "claim_type": "exclusive_write", 00:19:40.486 "zoned": false, 00:19:40.486 "supported_io_types": { 00:19:40.486 "read": true, 00:19:40.486 "write": true, 00:19:40.486 "unmap": true, 00:19:40.486 "flush": true, 00:19:40.486 "reset": true, 00:19:40.486 "nvme_admin": false, 00:19:40.486 "nvme_io": false, 00:19:40.486 "nvme_io_md": false, 00:19:40.486 "write_zeroes": true, 00:19:40.486 "zcopy": true, 00:19:40.486 "get_zone_info": false, 00:19:40.486 "zone_management": false, 00:19:40.486 "zone_append": false, 00:19:40.486 "compare": false, 00:19:40.486 "compare_and_write": false, 00:19:40.486 
"abort": true, 00:19:40.486 "seek_hole": false, 00:19:40.486 "seek_data": false, 00:19:40.486 "copy": true, 00:19:40.486 "nvme_iov_md": false 00:19:40.486 }, 00:19:40.486 "memory_domains": [ 00:19:40.486 { 00:19:40.486 "dma_device_id": "system", 00:19:40.486 "dma_device_type": 1 00:19:40.486 }, 00:19:40.486 { 00:19:40.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:40.486 "dma_device_type": 2 00:19:40.486 } 00:19:40.486 ], 00:19:40.487 "driver_specific": { 00:19:40.487 "passthru": { 00:19:40.487 "name": "pt2", 00:19:40.487 "base_bdev_name": "malloc2" 00:19:40.487 } 00:19:40.487 } 00:19:40.487 }' 00:19:40.487 11:01:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:40.744 11:01:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:40.744 11:01:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:40.744 11:01:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:40.744 11:01:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:40.744 11:01:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:40.744 11:01:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:40.744 11:01:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:40.744 11:01:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:40.744 11:01:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:41.002 11:01:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:41.002 11:01:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:41.002 11:01:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:41.002 11:01:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:19:41.002 11:01:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:41.260 11:01:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:41.260 "name": "pt3", 00:19:41.260 "aliases": [ 00:19:41.260 "00000000-0000-0000-0000-000000000003" 00:19:41.260 ], 00:19:41.260 "product_name": "passthru", 00:19:41.260 "block_size": 512, 00:19:41.260 "num_blocks": 65536, 00:19:41.260 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:41.260 "assigned_rate_limits": { 00:19:41.260 "rw_ios_per_sec": 0, 00:19:41.260 "rw_mbytes_per_sec": 0, 00:19:41.260 "r_mbytes_per_sec": 0, 00:19:41.260 "w_mbytes_per_sec": 0 00:19:41.260 }, 00:19:41.260 "claimed": true, 00:19:41.260 "claim_type": "exclusive_write", 00:19:41.260 "zoned": false, 00:19:41.260 "supported_io_types": { 00:19:41.260 "read": true, 00:19:41.260 "write": true, 00:19:41.260 "unmap": true, 00:19:41.260 "flush": true, 00:19:41.260 "reset": true, 00:19:41.260 "nvme_admin": false, 00:19:41.260 "nvme_io": false, 00:19:41.260 "nvme_io_md": false, 00:19:41.260 "write_zeroes": true, 00:19:41.260 "zcopy": true, 00:19:41.260 "get_zone_info": false, 00:19:41.260 "zone_management": false, 00:19:41.260 "zone_append": false, 00:19:41.260 "compare": false, 00:19:41.260 "compare_and_write": false, 00:19:41.260 "abort": true, 00:19:41.260 "seek_hole": false, 00:19:41.260 "seek_data": false, 00:19:41.260 "copy": true, 00:19:41.260 "nvme_iov_md": false 00:19:41.260 }, 00:19:41.260 "memory_domains": [ 00:19:41.260 { 00:19:41.260 "dma_device_id": "system", 00:19:41.260 "dma_device_type": 1 00:19:41.260 }, 00:19:41.260 { 00:19:41.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:41.260 "dma_device_type": 2 00:19:41.260 } 00:19:41.260 ], 00:19:41.260 "driver_specific": { 00:19:41.260 "passthru": { 00:19:41.260 "name": "pt3", 00:19:41.260 "base_bdev_name": "malloc3" 
00:19:41.260 } 00:19:41.260 } 00:19:41.260 }' 00:19:41.260 11:01:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:41.260 11:01:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:41.260 11:01:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:41.260 11:01:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:41.260 11:01:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:41.260 11:01:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:41.260 11:01:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:41.260 11:01:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:41.520 11:01:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:41.520 11:01:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:41.520 11:01:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:41.520 11:01:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:41.520 11:01:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:41.520 11:01:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:19:41.778 [2024-07-25 11:01:48.693120] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:41.778 11:01:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=9494f477-28ae-4eb9-88e2-b4541c013b90 00:19:41.778 11:01:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 9494f477-28ae-4eb9-88e2-b4541c013b90 ']' 00:19:41.778 11:01:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:42.039 [2024-07-25 11:01:48.921342] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:42.039 [2024-07-25 11:01:48.921381] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:42.040 [2024-07-25 11:01:48.921476] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:42.040 [2024-07-25 11:01:48.921556] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:42.040 [2024-07-25 11:01:48.921573] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:42.040 11:01:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.040 11:01:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:19:42.300 11:01:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:19:42.300 11:01:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:19:42.300 11:01:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:19:42.300 11:01:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:42.300 11:01:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:19:42.300 11:01:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:42.558 11:01:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:19:42.558 11:01:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:42.817 11:01:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:42.817 11:01:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:43.076 11:01:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:19:43.076 11:01:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:43.076 11:01:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:19:43.076 11:01:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:43.076 11:01:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:43.076 11:01:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:43.076 11:01:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:43.076 11:01:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:43.076 11:01:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:43.076 11:01:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:43.076 11:01:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:19:43.076 11:01:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:19:43.076 11:01:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:43.334 [2024-07-25 11:01:50.268931] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:43.334 [2024-07-25 11:01:50.271288] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:43.334 [2024-07-25 11:01:50.271356] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:43.334 [2024-07-25 11:01:50.271417] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:43.334 [2024-07-25 11:01:50.271472] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:43.334 [2024-07-25 11:01:50.271501] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:19:43.334 [2024-07-25 11:01:50.271526] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:43.334 [2024-07-25 11:01:50.271540] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:43.334 request: 00:19:43.334 { 00:19:43.334 "name": "raid_bdev1", 00:19:43.334 
"raid_level": "concat", 00:19:43.334 "base_bdevs": [ 00:19:43.334 "malloc1", 00:19:43.334 "malloc2", 00:19:43.334 "malloc3" 00:19:43.334 ], 00:19:43.334 "strip_size_kb": 64, 00:19:43.334 "superblock": false, 00:19:43.334 "method": "bdev_raid_create", 00:19:43.334 "req_id": 1 00:19:43.334 } 00:19:43.334 Got JSON-RPC error response 00:19:43.334 response: 00:19:43.334 { 00:19:43.334 "code": -17, 00:19:43.334 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:43.334 } 00:19:43.334 11:01:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:19:43.334 11:01:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:43.334 11:01:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:43.334 11:01:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:43.334 11:01:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.334 11:01:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:19:43.592 11:01:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:19:43.592 11:01:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:19:43.592 11:01:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:43.850 [2024-07-25 11:01:50.714054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:43.850 [2024-07-25 11:01:50.714116] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:43.850 [2024-07-25 11:01:50.714155] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000041a80 00:19:43.850 [2024-07-25 11:01:50.714172] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:43.850 [2024-07-25 11:01:50.716973] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:43.850 [2024-07-25 11:01:50.717007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:43.850 [2024-07-25 11:01:50.717109] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:43.850 [2024-07-25 11:01:50.717205] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:43.850 pt1 00:19:43.850 11:01:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:19:43.850 11:01:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:43.850 11:01:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:43.850 11:01:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:43.850 11:01:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:43.850 11:01:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:43.850 11:01:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:43.850 11:01:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:43.850 11:01:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:43.850 11:01:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:43.850 11:01:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.850 11:01:50 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.850 11:01:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:43.850 "name": "raid_bdev1", 00:19:43.850 "uuid": "9494f477-28ae-4eb9-88e2-b4541c013b90", 00:19:43.850 "strip_size_kb": 64, 00:19:43.850 "state": "configuring", 00:19:43.850 "raid_level": "concat", 00:19:43.850 "superblock": true, 00:19:43.850 "num_base_bdevs": 3, 00:19:43.850 "num_base_bdevs_discovered": 1, 00:19:43.850 "num_base_bdevs_operational": 3, 00:19:43.850 "base_bdevs_list": [ 00:19:43.850 { 00:19:43.850 "name": "pt1", 00:19:43.850 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:43.850 "is_configured": true, 00:19:43.850 "data_offset": 2048, 00:19:43.850 "data_size": 63488 00:19:43.850 }, 00:19:43.850 { 00:19:43.850 "name": null, 00:19:43.850 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:43.850 "is_configured": false, 00:19:43.850 "data_offset": 2048, 00:19:43.850 "data_size": 63488 00:19:43.850 }, 00:19:43.850 { 00:19:43.850 "name": null, 00:19:43.850 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:43.850 "is_configured": false, 00:19:43.850 "data_offset": 2048, 00:19:43.850 "data_size": 63488 00:19:43.850 } 00:19:43.850 ] 00:19:43.850 }' 00:19:43.850 11:01:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:43.850 11:01:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:44.782 11:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 3 -gt 2 ']' 00:19:44.782 11:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:44.782 [2024-07-25 11:01:51.768901] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:44.782 [2024-07-25 11:01:51.768984] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:44.783 [2024-07-25 11:01:51.769014] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042080 00:19:44.783 [2024-07-25 11:01:51.769031] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:44.783 [2024-07-25 11:01:51.769646] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:44.783 [2024-07-25 11:01:51.769672] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:44.783 [2024-07-25 11:01:51.769766] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:44.783 [2024-07-25 11:01:51.769799] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:44.783 pt2 00:19:44.783 11:01:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:45.040 [2024-07-25 11:01:51.997700] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:45.040 11:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:19:45.040 11:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:45.040 11:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:45.040 11:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:45.040 11:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:45.040 11:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:45.040 11:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:45.040 11:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:19:45.040 11:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:45.040 11:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:45.040 11:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.040 11:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.297 11:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:45.297 "name": "raid_bdev1", 00:19:45.297 "uuid": "9494f477-28ae-4eb9-88e2-b4541c013b90", 00:19:45.297 "strip_size_kb": 64, 00:19:45.297 "state": "configuring", 00:19:45.297 "raid_level": "concat", 00:19:45.297 "superblock": true, 00:19:45.297 "num_base_bdevs": 3, 00:19:45.297 "num_base_bdevs_discovered": 1, 00:19:45.297 "num_base_bdevs_operational": 3, 00:19:45.297 "base_bdevs_list": [ 00:19:45.297 { 00:19:45.297 "name": "pt1", 00:19:45.297 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:45.297 "is_configured": true, 00:19:45.298 "data_offset": 2048, 00:19:45.298 "data_size": 63488 00:19:45.298 }, 00:19:45.298 { 00:19:45.298 "name": null, 00:19:45.298 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:45.298 "is_configured": false, 00:19:45.298 "data_offset": 2048, 00:19:45.298 "data_size": 63488 00:19:45.298 }, 00:19:45.298 { 00:19:45.298 "name": null, 00:19:45.298 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:45.298 "is_configured": false, 00:19:45.298 "data_offset": 2048, 00:19:45.298 "data_size": 63488 00:19:45.298 } 00:19:45.298 ] 00:19:45.298 }' 00:19:45.298 11:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:45.298 11:01:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.864 11:01:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:19:45.864 11:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:19:45.864 11:01:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:46.123 [2024-07-25 11:01:53.024476] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:46.123 [2024-07-25 11:01:53.024559] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:46.123 [2024-07-25 11:01:53.024585] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042380 00:19:46.123 [2024-07-25 11:01:53.024604] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:46.123 [2024-07-25 11:01:53.025203] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:46.123 [2024-07-25 11:01:53.025234] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:46.123 [2024-07-25 11:01:53.025335] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:46.123 [2024-07-25 11:01:53.025366] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:46.123 pt2 00:19:46.123 11:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:19:46.123 11:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:19:46.123 11:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:46.381 [2024-07-25 11:01:53.253049] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:46.381 [2024-07-25 11:01:53.253107] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:46.381 [2024-07-25 11:01:53.253130] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042680 00:19:46.381 [2024-07-25 11:01:53.253154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:46.381 [2024-07-25 11:01:53.253711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:46.381 [2024-07-25 11:01:53.253738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:46.381 [2024-07-25 11:01:53.253833] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:46.381 [2024-07-25 11:01:53.253866] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:46.381 [2024-07-25 11:01:53.254058] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:46.381 [2024-07-25 11:01:53.254076] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:46.381 [2024-07-25 11:01:53.254400] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:19:46.381 [2024-07-25 11:01:53.254642] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:46.381 [2024-07-25 11:01:53.254657] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:46.381 [2024-07-25 11:01:53.254840] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:46.381 pt3 00:19:46.381 11:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:19:46.381 11:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:19:46.381 11:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:46.381 11:01:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:46.381 11:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:46.381 11:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:46.381 11:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:46.381 11:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:46.381 11:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:46.381 11:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:46.381 11:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:46.381 11:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:46.381 11:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.381 11:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.639 11:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:46.639 "name": "raid_bdev1", 00:19:46.639 "uuid": "9494f477-28ae-4eb9-88e2-b4541c013b90", 00:19:46.639 "strip_size_kb": 64, 00:19:46.639 "state": "online", 00:19:46.639 "raid_level": "concat", 00:19:46.639 "superblock": true, 00:19:46.639 "num_base_bdevs": 3, 00:19:46.639 "num_base_bdevs_discovered": 3, 00:19:46.639 "num_base_bdevs_operational": 3, 00:19:46.639 "base_bdevs_list": [ 00:19:46.639 { 00:19:46.639 "name": "pt1", 00:19:46.639 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:46.639 "is_configured": true, 00:19:46.639 "data_offset": 2048, 00:19:46.639 "data_size": 63488 00:19:46.639 }, 00:19:46.639 { 00:19:46.639 "name": "pt2", 
00:19:46.639 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:46.639 "is_configured": true, 00:19:46.639 "data_offset": 2048, 00:19:46.639 "data_size": 63488 00:19:46.639 }, 00:19:46.639 { 00:19:46.639 "name": "pt3", 00:19:46.639 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:46.639 "is_configured": true, 00:19:46.639 "data_offset": 2048, 00:19:46.639 "data_size": 63488 00:19:46.639 } 00:19:46.639 ] 00:19:46.639 }' 00:19:46.639 11:01:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:46.639 11:01:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.236 11:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:19:47.236 11:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:19:47.236 11:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:47.236 11:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:47.236 11:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:47.236 11:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:47.236 11:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:47.236 11:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:47.236 [2024-07-25 11:01:54.276234] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:47.236 11:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:47.236 "name": "raid_bdev1", 00:19:47.236 "aliases": [ 00:19:47.236 "9494f477-28ae-4eb9-88e2-b4541c013b90" 00:19:47.236 ], 00:19:47.236 "product_name": "Raid Volume", 00:19:47.236 "block_size": 512, 
00:19:47.236 "num_blocks": 190464, 00:19:47.236 "uuid": "9494f477-28ae-4eb9-88e2-b4541c013b90", 00:19:47.236 "assigned_rate_limits": { 00:19:47.236 "rw_ios_per_sec": 0, 00:19:47.236 "rw_mbytes_per_sec": 0, 00:19:47.236 "r_mbytes_per_sec": 0, 00:19:47.236 "w_mbytes_per_sec": 0 00:19:47.236 }, 00:19:47.236 "claimed": false, 00:19:47.236 "zoned": false, 00:19:47.236 "supported_io_types": { 00:19:47.236 "read": true, 00:19:47.236 "write": true, 00:19:47.236 "unmap": true, 00:19:47.236 "flush": true, 00:19:47.236 "reset": true, 00:19:47.236 "nvme_admin": false, 00:19:47.236 "nvme_io": false, 00:19:47.236 "nvme_io_md": false, 00:19:47.236 "write_zeroes": true, 00:19:47.236 "zcopy": false, 00:19:47.236 "get_zone_info": false, 00:19:47.236 "zone_management": false, 00:19:47.236 "zone_append": false, 00:19:47.236 "compare": false, 00:19:47.236 "compare_and_write": false, 00:19:47.236 "abort": false, 00:19:47.236 "seek_hole": false, 00:19:47.236 "seek_data": false, 00:19:47.236 "copy": false, 00:19:47.236 "nvme_iov_md": false 00:19:47.236 }, 00:19:47.236 "memory_domains": [ 00:19:47.236 { 00:19:47.236 "dma_device_id": "system", 00:19:47.236 "dma_device_type": 1 00:19:47.236 }, 00:19:47.236 { 00:19:47.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:47.236 "dma_device_type": 2 00:19:47.236 }, 00:19:47.236 { 00:19:47.236 "dma_device_id": "system", 00:19:47.236 "dma_device_type": 1 00:19:47.236 }, 00:19:47.236 { 00:19:47.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:47.236 "dma_device_type": 2 00:19:47.236 }, 00:19:47.236 { 00:19:47.236 "dma_device_id": "system", 00:19:47.236 "dma_device_type": 1 00:19:47.236 }, 00:19:47.236 { 00:19:47.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:47.236 "dma_device_type": 2 00:19:47.236 } 00:19:47.236 ], 00:19:47.236 "driver_specific": { 00:19:47.236 "raid": { 00:19:47.236 "uuid": "9494f477-28ae-4eb9-88e2-b4541c013b90", 00:19:47.236 "strip_size_kb": 64, 00:19:47.236 "state": "online", 00:19:47.236 "raid_level": "concat", 
00:19:47.236 "superblock": true, 00:19:47.236 "num_base_bdevs": 3, 00:19:47.236 "num_base_bdevs_discovered": 3, 00:19:47.236 "num_base_bdevs_operational": 3, 00:19:47.236 "base_bdevs_list": [ 00:19:47.236 { 00:19:47.236 "name": "pt1", 00:19:47.236 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:47.236 "is_configured": true, 00:19:47.236 "data_offset": 2048, 00:19:47.236 "data_size": 63488 00:19:47.236 }, 00:19:47.236 { 00:19:47.236 "name": "pt2", 00:19:47.236 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:47.236 "is_configured": true, 00:19:47.236 "data_offset": 2048, 00:19:47.236 "data_size": 63488 00:19:47.236 }, 00:19:47.236 { 00:19:47.236 "name": "pt3", 00:19:47.236 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:47.236 "is_configured": true, 00:19:47.236 "data_offset": 2048, 00:19:47.236 "data_size": 63488 00:19:47.236 } 00:19:47.236 ] 00:19:47.236 } 00:19:47.236 } 00:19:47.236 }' 00:19:47.236 11:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:47.236 11:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:19:47.236 pt2 00:19:47.236 pt3' 00:19:47.236 11:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:47.236 11:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:47.236 11:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:47.495 11:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:47.495 "name": "pt1", 00:19:47.495 "aliases": [ 00:19:47.495 "00000000-0000-0000-0000-000000000001" 00:19:47.495 ], 00:19:47.495 "product_name": "passthru", 00:19:47.495 "block_size": 512, 00:19:47.495 "num_blocks": 65536, 00:19:47.495 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:19:47.495 "assigned_rate_limits": { 00:19:47.495 "rw_ios_per_sec": 0, 00:19:47.495 "rw_mbytes_per_sec": 0, 00:19:47.495 "r_mbytes_per_sec": 0, 00:19:47.495 "w_mbytes_per_sec": 0 00:19:47.495 }, 00:19:47.495 "claimed": true, 00:19:47.495 "claim_type": "exclusive_write", 00:19:47.495 "zoned": false, 00:19:47.495 "supported_io_types": { 00:19:47.495 "read": true, 00:19:47.495 "write": true, 00:19:47.495 "unmap": true, 00:19:47.495 "flush": true, 00:19:47.495 "reset": true, 00:19:47.495 "nvme_admin": false, 00:19:47.495 "nvme_io": false, 00:19:47.495 "nvme_io_md": false, 00:19:47.495 "write_zeroes": true, 00:19:47.495 "zcopy": true, 00:19:47.495 "get_zone_info": false, 00:19:47.495 "zone_management": false, 00:19:47.495 "zone_append": false, 00:19:47.495 "compare": false, 00:19:47.495 "compare_and_write": false, 00:19:47.495 "abort": true, 00:19:47.495 "seek_hole": false, 00:19:47.495 "seek_data": false, 00:19:47.495 "copy": true, 00:19:47.495 "nvme_iov_md": false 00:19:47.495 }, 00:19:47.495 "memory_domains": [ 00:19:47.495 { 00:19:47.495 "dma_device_id": "system", 00:19:47.495 "dma_device_type": 1 00:19:47.495 }, 00:19:47.495 { 00:19:47.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:47.495 "dma_device_type": 2 00:19:47.495 } 00:19:47.495 ], 00:19:47.495 "driver_specific": { 00:19:47.495 "passthru": { 00:19:47.495 "name": "pt1", 00:19:47.495 "base_bdev_name": "malloc1" 00:19:47.495 } 00:19:47.495 } 00:19:47.495 }' 00:19:47.495 11:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:47.753 11:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:47.753 11:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:47.753 11:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:47.753 11:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:47.753 11:01:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:47.753 11:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:47.753 11:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:47.753 11:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:47.753 11:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:48.021 11:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:48.021 11:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:48.021 11:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:48.021 11:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:48.021 11:01:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:48.281 11:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:48.281 "name": "pt2", 00:19:48.281 "aliases": [ 00:19:48.281 "00000000-0000-0000-0000-000000000002" 00:19:48.281 ], 00:19:48.281 "product_name": "passthru", 00:19:48.281 "block_size": 512, 00:19:48.281 "num_blocks": 65536, 00:19:48.281 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:48.281 "assigned_rate_limits": { 00:19:48.281 "rw_ios_per_sec": 0, 00:19:48.281 "rw_mbytes_per_sec": 0, 00:19:48.281 "r_mbytes_per_sec": 0, 00:19:48.281 "w_mbytes_per_sec": 0 00:19:48.281 }, 00:19:48.281 "claimed": true, 00:19:48.281 "claim_type": "exclusive_write", 00:19:48.281 "zoned": false, 00:19:48.281 "supported_io_types": { 00:19:48.281 "read": true, 00:19:48.281 "write": true, 00:19:48.281 "unmap": true, 00:19:48.281 "flush": true, 00:19:48.281 "reset": true, 00:19:48.281 "nvme_admin": false, 00:19:48.281 
"nvme_io": false, 00:19:48.281 "nvme_io_md": false, 00:19:48.281 "write_zeroes": true, 00:19:48.281 "zcopy": true, 00:19:48.281 "get_zone_info": false, 00:19:48.281 "zone_management": false, 00:19:48.281 "zone_append": false, 00:19:48.281 "compare": false, 00:19:48.281 "compare_and_write": false, 00:19:48.281 "abort": true, 00:19:48.281 "seek_hole": false, 00:19:48.281 "seek_data": false, 00:19:48.281 "copy": true, 00:19:48.281 "nvme_iov_md": false 00:19:48.281 }, 00:19:48.281 "memory_domains": [ 00:19:48.281 { 00:19:48.281 "dma_device_id": "system", 00:19:48.281 "dma_device_type": 1 00:19:48.281 }, 00:19:48.281 { 00:19:48.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:48.281 "dma_device_type": 2 00:19:48.282 } 00:19:48.282 ], 00:19:48.282 "driver_specific": { 00:19:48.282 "passthru": { 00:19:48.282 "name": "pt2", 00:19:48.282 "base_bdev_name": "malloc2" 00:19:48.282 } 00:19:48.282 } 00:19:48.282 }' 00:19:48.282 11:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:48.282 11:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:48.282 11:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:48.282 11:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:48.282 11:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:48.282 11:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:48.282 11:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:48.282 11:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:48.540 11:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:48.540 11:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:48.540 11:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:19:48.540 11:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:48.540 11:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:48.540 11:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:19:48.540 11:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:48.799 11:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:48.799 "name": "pt3", 00:19:48.799 "aliases": [ 00:19:48.799 "00000000-0000-0000-0000-000000000003" 00:19:48.799 ], 00:19:48.799 "product_name": "passthru", 00:19:48.799 "block_size": 512, 00:19:48.799 "num_blocks": 65536, 00:19:48.799 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:48.799 "assigned_rate_limits": { 00:19:48.799 "rw_ios_per_sec": 0, 00:19:48.799 "rw_mbytes_per_sec": 0, 00:19:48.799 "r_mbytes_per_sec": 0, 00:19:48.799 "w_mbytes_per_sec": 0 00:19:48.799 }, 00:19:48.799 "claimed": true, 00:19:48.799 "claim_type": "exclusive_write", 00:19:48.799 "zoned": false, 00:19:48.799 "supported_io_types": { 00:19:48.799 "read": true, 00:19:48.799 "write": true, 00:19:48.799 "unmap": true, 00:19:48.799 "flush": true, 00:19:48.799 "reset": true, 00:19:48.799 "nvme_admin": false, 00:19:48.799 "nvme_io": false, 00:19:48.799 "nvme_io_md": false, 00:19:48.799 "write_zeroes": true, 00:19:48.799 "zcopy": true, 00:19:48.799 "get_zone_info": false, 00:19:48.799 "zone_management": false, 00:19:48.799 "zone_append": false, 00:19:48.799 "compare": false, 00:19:48.799 "compare_and_write": false, 00:19:48.799 "abort": true, 00:19:48.799 "seek_hole": false, 00:19:48.799 "seek_data": false, 00:19:48.799 "copy": true, 00:19:48.799 "nvme_iov_md": false 00:19:48.799 }, 00:19:48.799 "memory_domains": [ 00:19:48.799 { 00:19:48.799 "dma_device_id": "system", 00:19:48.799 
"dma_device_type": 1 00:19:48.799 }, 00:19:48.799 { 00:19:48.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:48.799 "dma_device_type": 2 00:19:48.799 } 00:19:48.799 ], 00:19:48.799 "driver_specific": { 00:19:48.799 "passthru": { 00:19:48.799 "name": "pt3", 00:19:48.799 "base_bdev_name": "malloc3" 00:19:48.799 } 00:19:48.799 } 00:19:48.799 }' 00:19:48.799 11:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:48.799 11:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:48.799 11:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:48.799 11:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:48.799 11:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:48.799 11:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:48.799 11:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:49.058 11:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:49.058 11:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:49.058 11:01:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:49.058 11:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:49.058 11:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:49.058 11:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:49.058 11:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:19:49.316 [2024-07-25 11:01:56.281675] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:49.316 11:01:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 9494f477-28ae-4eb9-88e2-b4541c013b90 '!=' 9494f477-28ae-4eb9-88e2-b4541c013b90 ']' 00:19:49.316 11:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy concat 00:19:49.316 11:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:49.316 11:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:49.316 11:01:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 3609506 00:19:49.316 11:01:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 3609506 ']' 00:19:49.316 11:01:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 3609506 00:19:49.316 11:01:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:19:49.316 11:01:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:49.316 11:01:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3609506 00:19:49.316 11:01:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:49.316 11:01:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:49.317 11:01:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3609506' 00:19:49.317 killing process with pid 3609506 00:19:49.317 11:01:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 3609506 00:19:49.317 [2024-07-25 11:01:56.353789] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:49.317 [2024-07-25 11:01:56.353898] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:49.317 11:01:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 3609506 00:19:49.317 [2024-07-25 11:01:56.353974] bdev_raid.c: 
464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:49.317 [2024-07-25 11:01:56.353994] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:49.577 [2024-07-25 11:01:56.681057] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:51.478 11:01:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:19:51.478 00:19:51.478 real 0m15.455s 00:19:51.478 user 0m25.991s 00:19:51.478 sys 0m2.629s 00:19:51.478 11:01:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:51.478 11:01:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.478 ************************************ 00:19:51.478 END TEST raid_superblock_test 00:19:51.478 ************************************ 00:19:51.478 11:01:58 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:19:51.478 11:01:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:51.478 11:01:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:51.478 11:01:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:51.478 ************************************ 00:19:51.478 START TEST raid_read_error_test 00:19:51.478 ************************************ 00:19:51.478 11:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:19:51.478 11:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:19:51.478 11:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:19:51.478 11:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:19:51.478 11:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:19:51.478 11:01:58 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:51.478 11:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:19:51.478 11:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:19:51.478 11:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:51.478 11:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:19:51.478 11:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:19:51.478 11:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:51.478 11:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:19:51.478 11:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:19:51.478 11:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:19:51.478 11:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:51.478 11:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:19:51.478 11:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:19:51.478 11:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:19:51.478 11:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:19:51.478 11:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:19:51.478 11:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:19:51.478 11:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:19:51.478 11:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:19:51.478 11:01:58 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:19:51.478 11:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:19:51.478 11:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.UDgQLkiM1e 00:19:51.478 11:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=3612424 00:19:51.478 11:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 3612424 /var/tmp/spdk-raid.sock 00:19:51.478 11:01:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:51.478 11:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 3612424 ']' 00:19:51.478 11:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:51.478 11:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:51.478 11:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:51.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:51.478 11:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:51.478 11:01:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.478 [2024-07-25 11:01:58.539797] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:19:51.478 [2024-07-25 11:01:58.539918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3612424 ] 00:19:51.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.762 EAL: Requested device 0000:3d:01.0 cannot be used 00:19:51.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.762 EAL: Requested device 0000:3d:01.1 cannot be used 00:19:51.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.762 EAL: Requested device 0000:3d:01.2 cannot be used 00:19:51.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.762 EAL: Requested device 0000:3d:01.3 cannot be used 00:19:51.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.762 EAL: Requested device 0000:3d:01.4 cannot be used 00:19:51.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.762 EAL: Requested device 0000:3d:01.5 cannot be used 00:19:51.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.762 EAL: Requested device 0000:3d:01.6 cannot be used 00:19:51.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.762 EAL: Requested device 0000:3d:01.7 cannot be used 00:19:51.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.762 EAL: Requested device 0000:3d:02.0 cannot be used 00:19:51.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.762 EAL: Requested device 0000:3d:02.1 cannot be used 00:19:51.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.762 EAL: Requested device 0000:3d:02.2 cannot be used 00:19:51.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.762 EAL: Requested device 0000:3d:02.3 cannot be used 
00:19:51.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.762 EAL: Requested device 0000:3d:02.4 cannot be used 00:19:51.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.762 EAL: Requested device 0000:3d:02.5 cannot be used 00:19:51.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.762 EAL: Requested device 0000:3d:02.6 cannot be used 00:19:51.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.762 EAL: Requested device 0000:3d:02.7 cannot be used 00:19:51.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.762 EAL: Requested device 0000:3f:01.0 cannot be used 00:19:51.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.762 EAL: Requested device 0000:3f:01.1 cannot be used 00:19:51.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.762 EAL: Requested device 0000:3f:01.2 cannot be used 00:19:51.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.762 EAL: Requested device 0000:3f:01.3 cannot be used 00:19:51.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.762 EAL: Requested device 0000:3f:01.4 cannot be used 00:19:51.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.762 EAL: Requested device 0000:3f:01.5 cannot be used 00:19:51.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.762 EAL: Requested device 0000:3f:01.6 cannot be used 00:19:51.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.762 EAL: Requested device 0000:3f:01.7 cannot be used 00:19:51.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.762 EAL: Requested device 0000:3f:02.0 cannot be used 00:19:51.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.762 EAL: Requested device 0000:3f:02.1 cannot be used 00:19:51.762 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.762 EAL: Requested device 0000:3f:02.2 cannot be used 00:19:51.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.762 EAL: Requested device 0000:3f:02.3 cannot be used 00:19:51.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.762 EAL: Requested device 0000:3f:02.4 cannot be used 00:19:51.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.762 EAL: Requested device 0000:3f:02.5 cannot be used 00:19:51.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.762 EAL: Requested device 0000:3f:02.6 cannot be used 00:19:51.762 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:19:51.762 EAL: Requested device 0000:3f:02.7 cannot be used 00:19:51.762 [2024-07-25 11:01:58.764813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.020 [2024-07-25 11:01:59.048380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.278 [2024-07-25 11:01:59.396147] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:52.278 [2024-07-25 11:01:59.396188] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:52.537 11:01:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:52.537 11:01:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:19:52.537 11:01:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:19:52.537 11:01:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:52.795 BaseBdev1_malloc 00:19:52.795 11:01:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:19:53.052 true 00:19:53.052 11:02:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:53.310 [2024-07-25 11:02:00.298014] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:53.310 [2024-07-25 11:02:00.298076] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:53.310 [2024-07-25 11:02:00.298104] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:19:53.310 [2024-07-25 11:02:00.298126] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:53.310 [2024-07-25 11:02:00.300934] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:53.310 [2024-07-25 11:02:00.300975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:53.310 BaseBdev1 00:19:53.310 11:02:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:19:53.310 11:02:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:53.569 BaseBdev2_malloc 00:19:53.569 11:02:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:19:53.827 true 00:19:53.827 11:02:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:54.085 [2024-07-25 11:02:01.020160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:19:54.085 [2024-07-25 11:02:01.020220] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:54.085 [2024-07-25 11:02:01.020246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:19:54.085 [2024-07-25 11:02:01.020266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:54.085 [2024-07-25 11:02:01.022995] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:54.086 [2024-07-25 11:02:01.023032] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:54.086 BaseBdev2 00:19:54.086 11:02:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:19:54.086 11:02:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:54.344 BaseBdev3_malloc 00:19:54.344 11:02:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:19:54.605 true 00:19:54.605 11:02:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:54.864 [2024-07-25 11:02:01.723981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:54.864 [2024-07-25 11:02:01.724040] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:54.864 [2024-07-25 11:02:01.724066] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:19:54.864 [2024-07-25 11:02:01.724084] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:54.864 [2024-07-25 
11:02:01.726843] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:54.864 [2024-07-25 11:02:01.726880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:54.864 BaseBdev3 00:19:54.864 11:02:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:19:54.864 [2024-07-25 11:02:01.948632] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:54.864 [2024-07-25 11:02:01.951005] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:54.864 [2024-07-25 11:02:01.951097] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:54.864 [2024-07-25 11:02:01.951373] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:54.864 [2024-07-25 11:02:01.951391] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:54.864 [2024-07-25 11:02:01.951731] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:19:54.864 [2024-07-25 11:02:01.951969] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:54.864 [2024-07-25 11:02:01.951990] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:54.864 [2024-07-25 11:02:01.952248] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:54.864 11:02:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:54.864 11:02:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:54.864 11:02:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:19:54.864 11:02:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:54.864 11:02:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:54.864 11:02:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:54.864 11:02:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:54.864 11:02:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:54.864 11:02:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:54.864 11:02:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:54.864 11:02:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:54.864 11:02:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.122 11:02:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:55.122 "name": "raid_bdev1", 00:19:55.122 "uuid": "3e8b2aeb-84b6-4e92-8e93-763216ddf001", 00:19:55.122 "strip_size_kb": 64, 00:19:55.122 "state": "online", 00:19:55.122 "raid_level": "concat", 00:19:55.122 "superblock": true, 00:19:55.122 "num_base_bdevs": 3, 00:19:55.122 "num_base_bdevs_discovered": 3, 00:19:55.122 "num_base_bdevs_operational": 3, 00:19:55.122 "base_bdevs_list": [ 00:19:55.122 { 00:19:55.122 "name": "BaseBdev1", 00:19:55.122 "uuid": "e1c476d0-c12f-5755-8347-047a2f8688db", 00:19:55.122 "is_configured": true, 00:19:55.122 "data_offset": 2048, 00:19:55.122 "data_size": 63488 00:19:55.122 }, 00:19:55.122 { 00:19:55.122 "name": "BaseBdev2", 00:19:55.122 "uuid": "b063fefd-7abd-509f-ac5d-4ef73cdf4cbe", 00:19:55.122 "is_configured": true, 00:19:55.122 "data_offset": 2048, 
00:19:55.122 "data_size": 63488 00:19:55.122 }, 00:19:55.122 { 00:19:55.122 "name": "BaseBdev3", 00:19:55.122 "uuid": "2c616943-0b26-5ac0-9741-a8b9ae568949", 00:19:55.122 "is_configured": true, 00:19:55.122 "data_offset": 2048, 00:19:55.122 "data_size": 63488 00:19:55.122 } 00:19:55.122 ] 00:19:55.122 }' 00:19:55.122 11:02:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:55.122 11:02:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:55.693 11:02:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:19:55.693 11:02:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:55.952 [2024-07-25 11:02:02.881095] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:19:56.886 11:02:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:19:57.144 11:02:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:19:57.144 11:02:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:19:57.144 11:02:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:19:57.144 11:02:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:57.144 11:02:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:57.144 11:02:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:57.144 11:02:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:57.144 11:02:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:57.144 11:02:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:57.144 11:02:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:57.144 11:02:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:57.144 11:02:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:57.144 11:02:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:57.144 11:02:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.144 11:02:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:57.144 11:02:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:57.144 "name": "raid_bdev1", 00:19:57.144 "uuid": "3e8b2aeb-84b6-4e92-8e93-763216ddf001", 00:19:57.144 "strip_size_kb": 64, 00:19:57.144 "state": "online", 00:19:57.144 "raid_level": "concat", 00:19:57.144 "superblock": true, 00:19:57.144 "num_base_bdevs": 3, 00:19:57.144 "num_base_bdevs_discovered": 3, 00:19:57.144 "num_base_bdevs_operational": 3, 00:19:57.144 "base_bdevs_list": [ 00:19:57.144 { 00:19:57.144 "name": "BaseBdev1", 00:19:57.144 "uuid": "e1c476d0-c12f-5755-8347-047a2f8688db", 00:19:57.144 "is_configured": true, 00:19:57.144 "data_offset": 2048, 00:19:57.144 "data_size": 63488 00:19:57.144 }, 00:19:57.144 { 00:19:57.144 "name": "BaseBdev2", 00:19:57.144 "uuid": "b063fefd-7abd-509f-ac5d-4ef73cdf4cbe", 00:19:57.144 "is_configured": true, 00:19:57.144 "data_offset": 2048, 00:19:57.144 "data_size": 63488 00:19:57.144 }, 00:19:57.144 { 00:19:57.144 "name": "BaseBdev3", 00:19:57.144 "uuid": "2c616943-0b26-5ac0-9741-a8b9ae568949", 
00:19:57.144 "is_configured": true, 00:19:57.144 "data_offset": 2048, 00:19:57.144 "data_size": 63488 00:19:57.144 } 00:19:57.144 ] 00:19:57.144 }' 00:19:57.144 11:02:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:57.144 11:02:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.710 11:02:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:57.969 [2024-07-25 11:02:05.020949] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:57.969 [2024-07-25 11:02:05.020991] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:57.969 [2024-07-25 11:02:05.024278] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:57.969 [2024-07-25 11:02:05.024328] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:57.969 [2024-07-25 11:02:05.024375] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:57.969 [2024-07-25 11:02:05.024393] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:57.969 0 00:19:57.969 11:02:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 3612424 00:19:57.969 11:02:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 3612424 ']' 00:19:57.969 11:02:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 3612424 00:19:57.969 11:02:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:19:57.969 11:02:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:57.969 11:02:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3612424 
00:19:58.228 11:02:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:58.228 11:02:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:58.228 11:02:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3612424' 00:19:58.228 killing process with pid 3612424 00:19:58.228 11:02:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 3612424 00:19:58.228 [2024-07-25 11:02:05.093023] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:58.228 11:02:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 3612424 00:19:58.228 [2024-07-25 11:02:05.321534] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:00.129 11:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:20:00.129 11:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:20:00.129 11:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.UDgQLkiM1e 00:20:00.129 11:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.47 00:20:00.129 11:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:20:00.129 11:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:00.129 11:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:00.129 11:02:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.47 != \0\.\0\0 ]] 00:20:00.129 00:20:00.129 real 0m8.670s 00:20:00.129 user 0m12.257s 00:20:00.129 sys 0m1.320s 00:20:00.129 11:02:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:00.129 11:02:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.129 ************************************ 
00:20:00.129 END TEST raid_read_error_test 00:20:00.129 ************************************ 00:20:00.129 11:02:07 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:20:00.129 11:02:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:20:00.129 11:02:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:00.129 11:02:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:00.129 ************************************ 00:20:00.129 START TEST raid_write_error_test 00:20:00.130 ************************************ 00:20:00.130 11:02:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:20:00.130 11:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:20:00.130 11:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:20:00.130 11:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:20:00.130 11:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:20:00.130 11:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:00.130 11:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:20:00.130 11:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:20:00.130 11:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:00.130 11:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:20:00.130 11:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:20:00.130 11:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:00.130 11:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:20:00.130 11:02:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:20:00.130 11:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:20:00.130 11:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:00.130 11:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:20:00.130 11:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:20:00.130 11:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:20:00.130 11:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:20:00.130 11:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:20:00.130 11:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:20:00.130 11:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:20:00.130 11:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:20:00.130 11:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:20:00.130 11:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:20:00.130 11:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.Bcp6K1eH8X 00:20:00.130 11:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=3614082 00:20:00.130 11:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 3614082 /var/tmp/spdk-raid.sock 00:20:00.130 11:02:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:00.130 11:02:07 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 3614082 ']' 00:20:00.130 11:02:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:00.130 11:02:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:00.130 11:02:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:00.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:00.130 11:02:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:00.130 11:02:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:00.389 [2024-07-25 11:02:07.292741] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:00.389 [2024-07-25 11:02:07.292858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3614082 ] 00:20:00.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.389 EAL: Requested device 0000:3d:01.0 cannot be used 00:20:00.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.389 EAL: Requested device 0000:3d:01.1 cannot be used 00:20:00.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.389 EAL: Requested device 0000:3d:01.2 cannot be used 00:20:00.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.389 EAL: Requested device 0000:3d:01.3 cannot be used 00:20:00.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.389 EAL: Requested device 0000:3d:01.4 cannot be used 00:20:00.389 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:20:00.389 EAL: Requested device 0000:3d:01.5 cannot be used 00:20:00.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.389 EAL: Requested device 0000:3d:01.6 cannot be used 00:20:00.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.389 EAL: Requested device 0000:3d:01.7 cannot be used 00:20:00.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.389 EAL: Requested device 0000:3d:02.0 cannot be used 00:20:00.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.389 EAL: Requested device 0000:3d:02.1 cannot be used 00:20:00.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.389 EAL: Requested device 0000:3d:02.2 cannot be used 00:20:00.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.389 EAL: Requested device 0000:3d:02.3 cannot be used 00:20:00.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.389 EAL: Requested device 0000:3d:02.4 cannot be used 00:20:00.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.389 EAL: Requested device 0000:3d:02.5 cannot be used 00:20:00.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.389 EAL: Requested device 0000:3d:02.6 cannot be used 00:20:00.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.389 EAL: Requested device 0000:3d:02.7 cannot be used 00:20:00.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.389 EAL: Requested device 0000:3f:01.0 cannot be used 00:20:00.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.389 EAL: Requested device 0000:3f:01.1 cannot be used 00:20:00.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.389 EAL: Requested device 0000:3f:01.2 cannot be used 00:20:00.389 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:20:00.389 EAL: Requested device 0000:3f:01.3 cannot be used 00:20:00.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.389 EAL: Requested device 0000:3f:01.4 cannot be used 00:20:00.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.389 EAL: Requested device 0000:3f:01.5 cannot be used 00:20:00.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.389 EAL: Requested device 0000:3f:01.6 cannot be used 00:20:00.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.389 EAL: Requested device 0000:3f:01.7 cannot be used 00:20:00.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.389 EAL: Requested device 0000:3f:02.0 cannot be used 00:20:00.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.389 EAL: Requested device 0000:3f:02.1 cannot be used 00:20:00.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.389 EAL: Requested device 0000:3f:02.2 cannot be used 00:20:00.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.389 EAL: Requested device 0000:3f:02.3 cannot be used 00:20:00.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.389 EAL: Requested device 0000:3f:02.4 cannot be used 00:20:00.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.389 EAL: Requested device 0000:3f:02.5 cannot be used 00:20:00.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.389 EAL: Requested device 0000:3f:02.6 cannot be used 00:20:00.389 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:00.389 EAL: Requested device 0000:3f:02.7 cannot be used 00:20:00.648 [2024-07-25 11:02:07.515815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.908 [2024-07-25 11:02:07.794308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.189 
[2024-07-25 11:02:08.140021] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:01.189 [2024-07-25 11:02:08.140069] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:01.447 11:02:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:01.447 11:02:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:20:01.447 11:02:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:20:01.447 11:02:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:01.705 BaseBdev1_malloc 00:20:01.705 11:02:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:20:01.705 true 00:20:01.705 11:02:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:01.963 [2024-07-25 11:02:09.029665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:01.963 [2024-07-25 11:02:09.029732] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:01.963 [2024-07-25 11:02:09.029761] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:20:01.963 [2024-07-25 11:02:09.029783] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:01.963 [2024-07-25 11:02:09.032625] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:01.963 [2024-07-25 11:02:09.032664] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 
00:20:01.963 BaseBdev1 00:20:01.963 11:02:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:20:01.963 11:02:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:02.529 BaseBdev2_malloc 00:20:02.529 11:02:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:20:02.788 true 00:20:02.788 11:02:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:03.046 [2024-07-25 11:02:10.048808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:03.046 [2024-07-25 11:02:10.048881] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:03.046 [2024-07-25 11:02:10.048910] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:20:03.046 [2024-07-25 11:02:10.048932] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:03.046 [2024-07-25 11:02:10.051787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:03.046 [2024-07-25 11:02:10.051828] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:03.046 BaseBdev2 00:20:03.046 11:02:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:20:03.046 11:02:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:03.304 BaseBdev3_malloc 00:20:03.304 11:02:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:20:03.565 true 00:20:03.565 11:02:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:03.824 [2024-07-25 11:02:10.726755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:03.824 [2024-07-25 11:02:10.726818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:03.824 [2024-07-25 11:02:10.726844] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:20:03.824 [2024-07-25 11:02:10.726863] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:03.824 [2024-07-25 11:02:10.729638] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:03.824 [2024-07-25 11:02:10.729673] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:03.824 BaseBdev3 00:20:03.824 11:02:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:20:04.084 [2024-07-25 11:02:10.967453] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:04.084 [2024-07-25 11:02:10.969851] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:04.084 [2024-07-25 11:02:10.969942] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:04.084 [2024-07-25 11:02:10.970210] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:04.084 [2024-07-25 
11:02:10.970228] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:04.084 [2024-07-25 11:02:10.970591] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:20:04.084 [2024-07-25 11:02:10.970854] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:04.084 [2024-07-25 11:02:10.970876] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:04.084 [2024-07-25 11:02:10.971119] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:04.084 11:02:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:20:04.084 11:02:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:04.084 11:02:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:04.084 11:02:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:04.084 11:02:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:04.084 11:02:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:04.084 11:02:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:04.084 11:02:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:04.084 11:02:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:04.084 11:02:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:04.084 11:02:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.084 11:02:10 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.343 11:02:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:04.343 "name": "raid_bdev1", 00:20:04.343 "uuid": "693711ee-0d35-4ccb-b1a2-e0f620f54dab", 00:20:04.343 "strip_size_kb": 64, 00:20:04.343 "state": "online", 00:20:04.343 "raid_level": "concat", 00:20:04.343 "superblock": true, 00:20:04.343 "num_base_bdevs": 3, 00:20:04.343 "num_base_bdevs_discovered": 3, 00:20:04.343 "num_base_bdevs_operational": 3, 00:20:04.343 "base_bdevs_list": [ 00:20:04.343 { 00:20:04.343 "name": "BaseBdev1", 00:20:04.343 "uuid": "340ae8b8-0d93-5ab5-8d70-88d82e6497e8", 00:20:04.343 "is_configured": true, 00:20:04.343 "data_offset": 2048, 00:20:04.343 "data_size": 63488 00:20:04.343 }, 00:20:04.343 { 00:20:04.343 "name": "BaseBdev2", 00:20:04.343 "uuid": "7f68bd2d-7b71-5119-80a9-ae6fb2fee5e7", 00:20:04.343 "is_configured": true, 00:20:04.343 "data_offset": 2048, 00:20:04.343 "data_size": 63488 00:20:04.343 }, 00:20:04.343 { 00:20:04.343 "name": "BaseBdev3", 00:20:04.343 "uuid": "753730a5-5315-5202-8f50-c3092c0c9fe3", 00:20:04.343 "is_configured": true, 00:20:04.343 "data_offset": 2048, 00:20:04.343 "data_size": 63488 00:20:04.343 } 00:20:04.343 ] 00:20:04.343 }' 00:20:04.343 11:02:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:04.343 11:02:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:04.910 11:02:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:20:04.910 11:02:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:04.910 [2024-07-25 11:02:11.879927] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:20:05.847 11:02:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:20:06.106 11:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:20:06.106 11:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]] 00:20:06.106 11:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:20:06.106 11:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:20:06.106 11:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:06.106 11:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:06.106 11:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:20:06.106 11:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:20:06.106 11:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:06.106 11:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:06.106 11:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:06.106 11:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:06.106 11:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:06.106 11:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:06.106 11:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.374 11:02:13 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:06.374 "name": "raid_bdev1", 00:20:06.374 "uuid": "693711ee-0d35-4ccb-b1a2-e0f620f54dab", 00:20:06.374 "strip_size_kb": 64, 00:20:06.374 "state": "online", 00:20:06.374 "raid_level": "concat", 00:20:06.374 "superblock": true, 00:20:06.374 "num_base_bdevs": 3, 00:20:06.374 "num_base_bdevs_discovered": 3, 00:20:06.374 "num_base_bdevs_operational": 3, 00:20:06.374 "base_bdevs_list": [ 00:20:06.374 { 00:20:06.374 "name": "BaseBdev1", 00:20:06.374 "uuid": "340ae8b8-0d93-5ab5-8d70-88d82e6497e8", 00:20:06.374 "is_configured": true, 00:20:06.374 "data_offset": 2048, 00:20:06.374 "data_size": 63488 00:20:06.374 }, 00:20:06.374 { 00:20:06.374 "name": "BaseBdev2", 00:20:06.374 "uuid": "7f68bd2d-7b71-5119-80a9-ae6fb2fee5e7", 00:20:06.374 "is_configured": true, 00:20:06.374 "data_offset": 2048, 00:20:06.374 "data_size": 63488 00:20:06.374 }, 00:20:06.374 { 00:20:06.374 "name": "BaseBdev3", 00:20:06.374 "uuid": "753730a5-5315-5202-8f50-c3092c0c9fe3", 00:20:06.374 "is_configured": true, 00:20:06.374 "data_offset": 2048, 00:20:06.374 "data_size": 63488 00:20:06.374 } 00:20:06.374 ] 00:20:06.374 }' 00:20:06.374 11:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:06.374 11:02:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:06.947 11:02:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:06.947 [2024-07-25 11:02:13.996635] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:06.947 [2024-07-25 11:02:13.996673] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:06.947 [2024-07-25 11:02:13.999969] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:06.947 [2024-07-25 11:02:14.000024] bdev_raid.c: 343:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:20:06.947 [2024-07-25 11:02:14.000073] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:06.947 [2024-07-25 11:02:14.000089] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:06.947 0 00:20:06.947 11:02:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 3614082 00:20:06.947 11:02:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 3614082 ']' 00:20:06.947 11:02:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 3614082 00:20:06.947 11:02:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:20:06.947 11:02:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:06.947 11:02:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3614082 00:20:07.206 11:02:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:07.206 11:02:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:07.206 11:02:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3614082' 00:20:07.206 killing process with pid 3614082 00:20:07.206 11:02:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 3614082 00:20:07.206 [2024-07-25 11:02:14.075845] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:07.206 11:02:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 3614082 00:20:07.206 [2024-07-25 11:02:14.291979] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:09.107 11:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.Bcp6K1eH8X 00:20:09.107 11:02:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:20:09.107 11:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:20:09.107 11:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.47 00:20:09.107 11:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:20:09.107 11:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:09.107 11:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:20:09.107 11:02:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.47 != \0\.\0\0 ]] 00:20:09.107 00:20:09.107 real 0m8.885s 00:20:09.107 user 0m12.623s 00:20:09.107 sys 0m1.384s 00:20:09.107 11:02:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:09.107 11:02:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.107 ************************************ 00:20:09.107 END TEST raid_write_error_test 00:20:09.107 ************************************ 00:20:09.107 11:02:16 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:20:09.107 11:02:16 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:20:09.107 11:02:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:20:09.107 11:02:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:09.107 11:02:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:09.107 ************************************ 00:20:09.107 START TEST raid_state_function_test 00:20:09.107 ************************************ 00:20:09.107 11:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:20:09.107 11:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 
00:20:09.107 11:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:20:09.107 11:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:20:09.107 11:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:20:09.107 11:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:20:09.107 11:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:09.107 11:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:20:09.107 11:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:09.107 11:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:09.107 11:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:20:09.107 11:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:09.107 11:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:09.107 11:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:20:09.107 11:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:09.107 11:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:09.107 11:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:09.107 11:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:20:09.107 11:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:20:09.107 11:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:20:09.107 11:02:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:20:09.107 11:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:20:09.107 11:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:20:09.107 11:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:20:09.107 11:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:20:09.107 11:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:20:09.107 11:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=3615526 00:20:09.107 11:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3615526' 00:20:09.107 Process raid pid: 3615526 00:20:09.107 11:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:09.107 11:02:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 3615526 /var/tmp/spdk-raid.sock 00:20:09.107 11:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 3615526 ']' 00:20:09.107 11:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:09.107 11:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:09.107 11:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:09.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:20:09.107 11:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:09.107 11:02:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.367 [2024-07-25 11:02:16.250991] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:09.367 [2024-07-25 11:02:16.251104] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:09.367 EAL: Requested device 0000:3d:01.0 cannot be used 00:20:09.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:09.367 EAL: Requested device 0000:3d:01.1 cannot be used 00:20:09.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:09.367 EAL: Requested device 0000:3d:01.2 cannot be used 00:20:09.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:09.367 EAL: Requested device 0000:3d:01.3 cannot be used 00:20:09.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:09.367 EAL: Requested device 0000:3d:01.4 cannot be used 00:20:09.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:09.367 EAL: Requested device 0000:3d:01.5 cannot be used 00:20:09.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:09.367 EAL: Requested device 0000:3d:01.6 cannot be used 00:20:09.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:09.367 EAL: Requested device 0000:3d:01.7 cannot be used 00:20:09.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:09.367 EAL: Requested device 0000:3d:02.0 cannot be used 00:20:09.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:09.367 EAL: Requested device 
0000:3d:02.1 cannot be used 00:20:09.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:09.367 EAL: Requested device 0000:3d:02.2 cannot be used 00:20:09.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:09.367 EAL: Requested device 0000:3d:02.3 cannot be used 00:20:09.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:09.367 EAL: Requested device 0000:3d:02.4 cannot be used 00:20:09.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:09.367 EAL: Requested device 0000:3d:02.5 cannot be used 00:20:09.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:09.367 EAL: Requested device 0000:3d:02.6 cannot be used 00:20:09.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:09.367 EAL: Requested device 0000:3d:02.7 cannot be used 00:20:09.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:09.367 EAL: Requested device 0000:3f:01.0 cannot be used 00:20:09.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:09.367 EAL: Requested device 0000:3f:01.1 cannot be used 00:20:09.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:09.367 EAL: Requested device 0000:3f:01.2 cannot be used 00:20:09.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:09.367 EAL: Requested device 0000:3f:01.3 cannot be used 00:20:09.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:09.367 EAL: Requested device 0000:3f:01.4 cannot be used 00:20:09.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:09.367 EAL: Requested device 0000:3f:01.5 cannot be used 00:20:09.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:09.367 EAL: Requested device 0000:3f:01.6 cannot be used 00:20:09.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:09.367 EAL: Requested device 0000:3f:01.7 cannot be 
used 00:20:09.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:09.367 EAL: Requested device 0000:3f:02.0 cannot be used 00:20:09.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:09.367 EAL: Requested device 0000:3f:02.1 cannot be used 00:20:09.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:09.367 EAL: Requested device 0000:3f:02.2 cannot be used 00:20:09.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:09.367 EAL: Requested device 0000:3f:02.3 cannot be used 00:20:09.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:09.367 EAL: Requested device 0000:3f:02.4 cannot be used 00:20:09.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:09.367 EAL: Requested device 0000:3f:02.5 cannot be used 00:20:09.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:09.367 EAL: Requested device 0000:3f:02.6 cannot be used 00:20:09.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:09.367 EAL: Requested device 0000:3f:02.7 cannot be used 00:20:09.367 [2024-07-25 11:02:16.479368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.934 [2024-07-25 11:02:16.760689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.192 [2024-07-25 11:02:17.090384] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:10.192 [2024-07-25 11:02:17.090421] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:10.192 11:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:10.192 11:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:20:10.192 11:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 
'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:10.450 [2024-07-25 11:02:17.483574] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:10.450 [2024-07-25 11:02:17.483634] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:10.450 [2024-07-25 11:02:17.483649] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:10.450 [2024-07-25 11:02:17.483666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:10.450 [2024-07-25 11:02:17.483677] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:10.450 [2024-07-25 11:02:17.483693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:10.450 11:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:10.450 11:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:10.450 11:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:10.450 11:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:10.450 11:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:10.450 11:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:10.450 11:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:10.450 11:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:10.450 11:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:10.450 11:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 
00:20:10.450 11:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.450 11:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:10.710 11:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:10.710 "name": "Existed_Raid", 00:20:10.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.710 "strip_size_kb": 0, 00:20:10.710 "state": "configuring", 00:20:10.710 "raid_level": "raid1", 00:20:10.710 "superblock": false, 00:20:10.710 "num_base_bdevs": 3, 00:20:10.710 "num_base_bdevs_discovered": 0, 00:20:10.710 "num_base_bdevs_operational": 3, 00:20:10.710 "base_bdevs_list": [ 00:20:10.710 { 00:20:10.710 "name": "BaseBdev1", 00:20:10.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.710 "is_configured": false, 00:20:10.710 "data_offset": 0, 00:20:10.710 "data_size": 0 00:20:10.710 }, 00:20:10.710 { 00:20:10.710 "name": "BaseBdev2", 00:20:10.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.710 "is_configured": false, 00:20:10.710 "data_offset": 0, 00:20:10.710 "data_size": 0 00:20:10.710 }, 00:20:10.710 { 00:20:10.710 "name": "BaseBdev3", 00:20:10.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.710 "is_configured": false, 00:20:10.710 "data_offset": 0, 00:20:10.710 "data_size": 0 00:20:10.710 } 00:20:10.710 ] 00:20:10.710 }' 00:20:10.710 11:02:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:10.710 11:02:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.278 11:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:11.536 [2024-07-25 11:02:18.522260] 
bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:11.536 [2024-07-25 11:02:18.522308] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:11.536 11:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:11.794 [2024-07-25 11:02:18.746895] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:11.794 [2024-07-25 11:02:18.746945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:11.794 [2024-07-25 11:02:18.746959] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:11.794 [2024-07-25 11:02:18.746979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:11.794 [2024-07-25 11:02:18.746990] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:11.794 [2024-07-25 11:02:18.747007] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:11.794 11:02:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:12.053 [2024-07-25 11:02:19.018572] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:12.053 BaseBdev1 00:20:12.053 11:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:20:12.053 11:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:20:12.053 11:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:12.053 
11:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:12.053 11:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:12.053 11:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:12.053 11:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:12.311 11:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:12.570 [ 00:20:12.570 { 00:20:12.570 "name": "BaseBdev1", 00:20:12.570 "aliases": [ 00:20:12.570 "18df6eb4-c701-4fac-8b18-429a4bb100fc" 00:20:12.570 ], 00:20:12.570 "product_name": "Malloc disk", 00:20:12.570 "block_size": 512, 00:20:12.570 "num_blocks": 65536, 00:20:12.570 "uuid": "18df6eb4-c701-4fac-8b18-429a4bb100fc", 00:20:12.570 "assigned_rate_limits": { 00:20:12.570 "rw_ios_per_sec": 0, 00:20:12.570 "rw_mbytes_per_sec": 0, 00:20:12.570 "r_mbytes_per_sec": 0, 00:20:12.570 "w_mbytes_per_sec": 0 00:20:12.570 }, 00:20:12.570 "claimed": true, 00:20:12.570 "claim_type": "exclusive_write", 00:20:12.570 "zoned": false, 00:20:12.570 "supported_io_types": { 00:20:12.570 "read": true, 00:20:12.570 "write": true, 00:20:12.570 "unmap": true, 00:20:12.570 "flush": true, 00:20:12.570 "reset": true, 00:20:12.570 "nvme_admin": false, 00:20:12.570 "nvme_io": false, 00:20:12.570 "nvme_io_md": false, 00:20:12.570 "write_zeroes": true, 00:20:12.570 "zcopy": true, 00:20:12.570 "get_zone_info": false, 00:20:12.570 "zone_management": false, 00:20:12.570 "zone_append": false, 00:20:12.570 "compare": false, 00:20:12.570 "compare_and_write": false, 00:20:12.570 "abort": true, 00:20:12.570 "seek_hole": false, 00:20:12.570 "seek_data": false, 00:20:12.570 
"copy": true, 00:20:12.570 "nvme_iov_md": false 00:20:12.570 }, 00:20:12.571 "memory_domains": [ 00:20:12.571 { 00:20:12.571 "dma_device_id": "system", 00:20:12.571 "dma_device_type": 1 00:20:12.571 }, 00:20:12.571 { 00:20:12.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:12.571 "dma_device_type": 2 00:20:12.571 } 00:20:12.571 ], 00:20:12.571 "driver_specific": {} 00:20:12.571 } 00:20:12.571 ] 00:20:12.571 11:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:12.571 11:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:12.571 11:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:12.571 11:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:12.571 11:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:12.571 11:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:12.571 11:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:12.571 11:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:12.571 11:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:12.571 11:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:12.571 11:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:12.571 11:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.571 11:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:20:12.830 11:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:12.830 "name": "Existed_Raid", 00:20:12.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.830 "strip_size_kb": 0, 00:20:12.830 "state": "configuring", 00:20:12.830 "raid_level": "raid1", 00:20:12.830 "superblock": false, 00:20:12.830 "num_base_bdevs": 3, 00:20:12.830 "num_base_bdevs_discovered": 1, 00:20:12.830 "num_base_bdevs_operational": 3, 00:20:12.830 "base_bdevs_list": [ 00:20:12.830 { 00:20:12.830 "name": "BaseBdev1", 00:20:12.830 "uuid": "18df6eb4-c701-4fac-8b18-429a4bb100fc", 00:20:12.830 "is_configured": true, 00:20:12.830 "data_offset": 0, 00:20:12.830 "data_size": 65536 00:20:12.830 }, 00:20:12.830 { 00:20:12.830 "name": "BaseBdev2", 00:20:12.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.830 "is_configured": false, 00:20:12.830 "data_offset": 0, 00:20:12.830 "data_size": 0 00:20:12.830 }, 00:20:12.830 { 00:20:12.830 "name": "BaseBdev3", 00:20:12.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.830 "is_configured": false, 00:20:12.830 "data_offset": 0, 00:20:12.830 "data_size": 0 00:20:12.830 } 00:20:12.830 ] 00:20:12.830 }' 00:20:12.830 11:02:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:12.830 11:02:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.399 11:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:13.399 [2024-07-25 11:02:20.514682] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:13.399 [2024-07-25 11:02:20.514743] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:13.658 11:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:13.658 [2024-07-25 11:02:20.743374] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:13.658 [2024-07-25 11:02:20.745709] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:13.658 [2024-07-25 11:02:20.745756] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:13.658 [2024-07-25 11:02:20.745771] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:13.658 [2024-07-25 11:02:20.745788] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:13.658 11:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:20:13.658 11:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:13.658 11:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:13.658 11:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:13.658 11:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:13.658 11:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:13.658 11:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:13.658 11:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:13.658 11:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:13.658 11:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:13.658 11:02:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:13.658 11:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:13.658 11:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.658 11:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:13.917 11:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:13.917 "name": "Existed_Raid", 00:20:13.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.917 "strip_size_kb": 0, 00:20:13.917 "state": "configuring", 00:20:13.917 "raid_level": "raid1", 00:20:13.917 "superblock": false, 00:20:13.917 "num_base_bdevs": 3, 00:20:13.917 "num_base_bdevs_discovered": 1, 00:20:13.917 "num_base_bdevs_operational": 3, 00:20:13.917 "base_bdevs_list": [ 00:20:13.917 { 00:20:13.917 "name": "BaseBdev1", 00:20:13.917 "uuid": "18df6eb4-c701-4fac-8b18-429a4bb100fc", 00:20:13.917 "is_configured": true, 00:20:13.917 "data_offset": 0, 00:20:13.917 "data_size": 65536 00:20:13.917 }, 00:20:13.917 { 00:20:13.917 "name": "BaseBdev2", 00:20:13.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.917 "is_configured": false, 00:20:13.917 "data_offset": 0, 00:20:13.917 "data_size": 0 00:20:13.917 }, 00:20:13.917 { 00:20:13.917 "name": "BaseBdev3", 00:20:13.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.917 "is_configured": false, 00:20:13.917 "data_offset": 0, 00:20:13.917 "data_size": 0 00:20:13.917 } 00:20:13.917 ] 00:20:13.917 }' 00:20:13.917 11:02:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:13.917 11:02:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.551 11:02:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:14.811 [2024-07-25 11:02:21.819498] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:14.811 BaseBdev2 00:20:14.811 11:02:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:20:14.811 11:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:20:14.811 11:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:14.811 11:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:14.811 11:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:14.811 11:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:14.811 11:02:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:15.071 11:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:15.330 [ 00:20:15.330 { 00:20:15.330 "name": "BaseBdev2", 00:20:15.330 "aliases": [ 00:20:15.330 "68af54d9-add2-4f21-b489-042fc91e1166" 00:20:15.330 ], 00:20:15.330 "product_name": "Malloc disk", 00:20:15.330 "block_size": 512, 00:20:15.330 "num_blocks": 65536, 00:20:15.330 "uuid": "68af54d9-add2-4f21-b489-042fc91e1166", 00:20:15.330 "assigned_rate_limits": { 00:20:15.330 "rw_ios_per_sec": 0, 00:20:15.330 "rw_mbytes_per_sec": 0, 00:20:15.330 "r_mbytes_per_sec": 0, 00:20:15.330 "w_mbytes_per_sec": 0 00:20:15.330 }, 00:20:15.330 "claimed": true, 00:20:15.330 "claim_type": 
"exclusive_write", 00:20:15.330 "zoned": false, 00:20:15.330 "supported_io_types": { 00:20:15.330 "read": true, 00:20:15.330 "write": true, 00:20:15.330 "unmap": true, 00:20:15.330 "flush": true, 00:20:15.330 "reset": true, 00:20:15.330 "nvme_admin": false, 00:20:15.330 "nvme_io": false, 00:20:15.330 "nvme_io_md": false, 00:20:15.330 "write_zeroes": true, 00:20:15.330 "zcopy": true, 00:20:15.330 "get_zone_info": false, 00:20:15.330 "zone_management": false, 00:20:15.330 "zone_append": false, 00:20:15.330 "compare": false, 00:20:15.330 "compare_and_write": false, 00:20:15.330 "abort": true, 00:20:15.330 "seek_hole": false, 00:20:15.330 "seek_data": false, 00:20:15.330 "copy": true, 00:20:15.330 "nvme_iov_md": false 00:20:15.330 }, 00:20:15.330 "memory_domains": [ 00:20:15.330 { 00:20:15.330 "dma_device_id": "system", 00:20:15.330 "dma_device_type": 1 00:20:15.330 }, 00:20:15.330 { 00:20:15.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:15.330 "dma_device_type": 2 00:20:15.330 } 00:20:15.330 ], 00:20:15.330 "driver_specific": {} 00:20:15.330 } 00:20:15.330 ] 00:20:15.330 11:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:15.330 11:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:15.330 11:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:15.330 11:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:15.330 11:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:15.330 11:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:15.330 11:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:15.330 11:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 
00:20:15.330 11:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:15.330 11:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:15.330 11:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:15.330 11:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:15.330 11:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:15.330 11:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.330 11:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:15.589 11:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:15.589 "name": "Existed_Raid", 00:20:15.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.589 "strip_size_kb": 0, 00:20:15.589 "state": "configuring", 00:20:15.589 "raid_level": "raid1", 00:20:15.589 "superblock": false, 00:20:15.589 "num_base_bdevs": 3, 00:20:15.589 "num_base_bdevs_discovered": 2, 00:20:15.589 "num_base_bdevs_operational": 3, 00:20:15.589 "base_bdevs_list": [ 00:20:15.589 { 00:20:15.589 "name": "BaseBdev1", 00:20:15.589 "uuid": "18df6eb4-c701-4fac-8b18-429a4bb100fc", 00:20:15.589 "is_configured": true, 00:20:15.589 "data_offset": 0, 00:20:15.589 "data_size": 65536 00:20:15.589 }, 00:20:15.589 { 00:20:15.589 "name": "BaseBdev2", 00:20:15.589 "uuid": "68af54d9-add2-4f21-b489-042fc91e1166", 00:20:15.589 "is_configured": true, 00:20:15.589 "data_offset": 0, 00:20:15.589 "data_size": 65536 00:20:15.589 }, 00:20:15.589 { 00:20:15.589 "name": "BaseBdev3", 00:20:15.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.589 "is_configured": false, 00:20:15.589 
"data_offset": 0, 00:20:15.589 "data_size": 0 00:20:15.589 } 00:20:15.589 ] 00:20:15.589 }' 00:20:15.589 11:02:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:15.589 11:02:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.157 11:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:16.417 [2024-07-25 11:02:23.341758] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:16.417 [2024-07-25 11:02:23.341812] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:16.417 [2024-07-25 11:02:23.341831] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:16.417 [2024-07-25 11:02:23.342182] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:20:16.417 [2024-07-25 11:02:23.342452] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:16.417 [2024-07-25 11:02:23.342468] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:16.417 [2024-07-25 11:02:23.342809] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:16.417 BaseBdev3 00:20:16.417 11:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:20:16.417 11:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:20:16.417 11:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:16.417 11:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:16.417 11:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 
00:20:16.417 11:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:16.418 11:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:16.677 11:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:16.937 [ 00:20:16.937 { 00:20:16.937 "name": "BaseBdev3", 00:20:16.937 "aliases": [ 00:20:16.937 "13863649-07dd-4b7a-ba09-35df0d341a54" 00:20:16.937 ], 00:20:16.937 "product_name": "Malloc disk", 00:20:16.937 "block_size": 512, 00:20:16.937 "num_blocks": 65536, 00:20:16.937 "uuid": "13863649-07dd-4b7a-ba09-35df0d341a54", 00:20:16.937 "assigned_rate_limits": { 00:20:16.937 "rw_ios_per_sec": 0, 00:20:16.937 "rw_mbytes_per_sec": 0, 00:20:16.937 "r_mbytes_per_sec": 0, 00:20:16.937 "w_mbytes_per_sec": 0 00:20:16.937 }, 00:20:16.937 "claimed": true, 00:20:16.937 "claim_type": "exclusive_write", 00:20:16.937 "zoned": false, 00:20:16.937 "supported_io_types": { 00:20:16.937 "read": true, 00:20:16.937 "write": true, 00:20:16.937 "unmap": true, 00:20:16.937 "flush": true, 00:20:16.937 "reset": true, 00:20:16.937 "nvme_admin": false, 00:20:16.937 "nvme_io": false, 00:20:16.937 "nvme_io_md": false, 00:20:16.937 "write_zeroes": true, 00:20:16.937 "zcopy": true, 00:20:16.937 "get_zone_info": false, 00:20:16.937 "zone_management": false, 00:20:16.937 "zone_append": false, 00:20:16.937 "compare": false, 00:20:16.937 "compare_and_write": false, 00:20:16.937 "abort": true, 00:20:16.937 "seek_hole": false, 00:20:16.937 "seek_data": false, 00:20:16.937 "copy": true, 00:20:16.937 "nvme_iov_md": false 00:20:16.937 }, 00:20:16.937 "memory_domains": [ 00:20:16.937 { 00:20:16.937 "dma_device_id": "system", 00:20:16.937 "dma_device_type": 1 00:20:16.937 
}, 00:20:16.937 { 00:20:16.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:16.937 "dma_device_type": 2 00:20:16.937 } 00:20:16.937 ], 00:20:16.937 "driver_specific": {} 00:20:16.937 } 00:20:16.937 ] 00:20:16.937 11:02:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:16.937 11:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:16.937 11:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:16.937 11:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:16.937 11:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:16.937 11:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:16.937 11:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:16.937 11:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:16.937 11:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:16.937 11:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:16.937 11:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:16.937 11:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:16.937 11:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:16.937 11:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.937 11:02:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:20:16.937 11:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:16.937 "name": "Existed_Raid", 00:20:16.937 "uuid": "feea80da-de4b-4e57-bae7-d3cb20368b7d", 00:20:16.937 "strip_size_kb": 0, 00:20:16.937 "state": "online", 00:20:16.937 "raid_level": "raid1", 00:20:16.937 "superblock": false, 00:20:16.937 "num_base_bdevs": 3, 00:20:16.937 "num_base_bdevs_discovered": 3, 00:20:16.937 "num_base_bdevs_operational": 3, 00:20:16.937 "base_bdevs_list": [ 00:20:16.937 { 00:20:16.937 "name": "BaseBdev1", 00:20:16.937 "uuid": "18df6eb4-c701-4fac-8b18-429a4bb100fc", 00:20:16.937 "is_configured": true, 00:20:16.937 "data_offset": 0, 00:20:16.937 "data_size": 65536 00:20:16.937 }, 00:20:16.937 { 00:20:16.937 "name": "BaseBdev2", 00:20:16.937 "uuid": "68af54d9-add2-4f21-b489-042fc91e1166", 00:20:16.937 "is_configured": true, 00:20:16.937 "data_offset": 0, 00:20:16.937 "data_size": 65536 00:20:16.937 }, 00:20:16.937 { 00:20:16.937 "name": "BaseBdev3", 00:20:16.937 "uuid": "13863649-07dd-4b7a-ba09-35df0d341a54", 00:20:16.937 "is_configured": true, 00:20:16.937 "data_offset": 0, 00:20:16.937 "data_size": 65536 00:20:16.937 } 00:20:16.937 ] 00:20:16.937 }' 00:20:16.937 11:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:16.937 11:02:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.505 11:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:20:17.505 11:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:17.505 11:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:17.505 11:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:17.505 11:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:17.505 
11:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:17.505 11:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:17.505 11:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:17.765 [2024-07-25 11:02:24.814187] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:17.765 11:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:17.765 "name": "Existed_Raid", 00:20:17.765 "aliases": [ 00:20:17.765 "feea80da-de4b-4e57-bae7-d3cb20368b7d" 00:20:17.765 ], 00:20:17.765 "product_name": "Raid Volume", 00:20:17.765 "block_size": 512, 00:20:17.765 "num_blocks": 65536, 00:20:17.765 "uuid": "feea80da-de4b-4e57-bae7-d3cb20368b7d", 00:20:17.765 "assigned_rate_limits": { 00:20:17.765 "rw_ios_per_sec": 0, 00:20:17.765 "rw_mbytes_per_sec": 0, 00:20:17.765 "r_mbytes_per_sec": 0, 00:20:17.765 "w_mbytes_per_sec": 0 00:20:17.765 }, 00:20:17.765 "claimed": false, 00:20:17.765 "zoned": false, 00:20:17.765 "supported_io_types": { 00:20:17.765 "read": true, 00:20:17.765 "write": true, 00:20:17.765 "unmap": false, 00:20:17.765 "flush": false, 00:20:17.765 "reset": true, 00:20:17.765 "nvme_admin": false, 00:20:17.765 "nvme_io": false, 00:20:17.765 "nvme_io_md": false, 00:20:17.765 "write_zeroes": true, 00:20:17.765 "zcopy": false, 00:20:17.765 "get_zone_info": false, 00:20:17.765 "zone_management": false, 00:20:17.765 "zone_append": false, 00:20:17.765 "compare": false, 00:20:17.765 "compare_and_write": false, 00:20:17.765 "abort": false, 00:20:17.765 "seek_hole": false, 00:20:17.765 "seek_data": false, 00:20:17.765 "copy": false, 00:20:17.765 "nvme_iov_md": false 00:20:17.765 }, 00:20:17.765 "memory_domains": [ 00:20:17.765 { 00:20:17.765 "dma_device_id": "system", 00:20:17.765 "dma_device_type": 1 
00:20:17.765 }, 00:20:17.765 { 00:20:17.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:17.765 "dma_device_type": 2 00:20:17.765 }, 00:20:17.765 { 00:20:17.765 "dma_device_id": "system", 00:20:17.765 "dma_device_type": 1 00:20:17.765 }, 00:20:17.765 { 00:20:17.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:17.765 "dma_device_type": 2 00:20:17.765 }, 00:20:17.765 { 00:20:17.765 "dma_device_id": "system", 00:20:17.765 "dma_device_type": 1 00:20:17.765 }, 00:20:17.765 { 00:20:17.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:17.765 "dma_device_type": 2 00:20:17.765 } 00:20:17.765 ], 00:20:17.765 "driver_specific": { 00:20:17.765 "raid": { 00:20:17.765 "uuid": "feea80da-de4b-4e57-bae7-d3cb20368b7d", 00:20:17.765 "strip_size_kb": 0, 00:20:17.765 "state": "online", 00:20:17.765 "raid_level": "raid1", 00:20:17.765 "superblock": false, 00:20:17.765 "num_base_bdevs": 3, 00:20:17.765 "num_base_bdevs_discovered": 3, 00:20:17.765 "num_base_bdevs_operational": 3, 00:20:17.765 "base_bdevs_list": [ 00:20:17.765 { 00:20:17.765 "name": "BaseBdev1", 00:20:17.765 "uuid": "18df6eb4-c701-4fac-8b18-429a4bb100fc", 00:20:17.765 "is_configured": true, 00:20:17.765 "data_offset": 0, 00:20:17.765 "data_size": 65536 00:20:17.765 }, 00:20:17.765 { 00:20:17.765 "name": "BaseBdev2", 00:20:17.765 "uuid": "68af54d9-add2-4f21-b489-042fc91e1166", 00:20:17.765 "is_configured": true, 00:20:17.765 "data_offset": 0, 00:20:17.765 "data_size": 65536 00:20:17.765 }, 00:20:17.765 { 00:20:17.765 "name": "BaseBdev3", 00:20:17.765 "uuid": "13863649-07dd-4b7a-ba09-35df0d341a54", 00:20:17.765 "is_configured": true, 00:20:17.766 "data_offset": 0, 00:20:17.766 "data_size": 65536 00:20:17.766 } 00:20:17.766 ] 00:20:17.766 } 00:20:17.766 } 00:20:17.766 }' 00:20:17.766 11:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:17.766 11:02:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:20:17.766 BaseBdev2 00:20:17.766 BaseBdev3' 00:20:17.766 11:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:17.766 11:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:17.766 11:02:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:18.025 11:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:18.025 "name": "BaseBdev1", 00:20:18.025 "aliases": [ 00:20:18.025 "18df6eb4-c701-4fac-8b18-429a4bb100fc" 00:20:18.025 ], 00:20:18.025 "product_name": "Malloc disk", 00:20:18.025 "block_size": 512, 00:20:18.025 "num_blocks": 65536, 00:20:18.025 "uuid": "18df6eb4-c701-4fac-8b18-429a4bb100fc", 00:20:18.025 "assigned_rate_limits": { 00:20:18.025 "rw_ios_per_sec": 0, 00:20:18.025 "rw_mbytes_per_sec": 0, 00:20:18.025 "r_mbytes_per_sec": 0, 00:20:18.025 "w_mbytes_per_sec": 0 00:20:18.025 }, 00:20:18.025 "claimed": true, 00:20:18.025 "claim_type": "exclusive_write", 00:20:18.025 "zoned": false, 00:20:18.025 "supported_io_types": { 00:20:18.025 "read": true, 00:20:18.025 "write": true, 00:20:18.025 "unmap": true, 00:20:18.025 "flush": true, 00:20:18.025 "reset": true, 00:20:18.025 "nvme_admin": false, 00:20:18.025 "nvme_io": false, 00:20:18.025 "nvme_io_md": false, 00:20:18.025 "write_zeroes": true, 00:20:18.025 "zcopy": true, 00:20:18.025 "get_zone_info": false, 00:20:18.025 "zone_management": false, 00:20:18.025 "zone_append": false, 00:20:18.025 "compare": false, 00:20:18.025 "compare_and_write": false, 00:20:18.025 "abort": true, 00:20:18.025 "seek_hole": false, 00:20:18.025 "seek_data": false, 00:20:18.025 "copy": true, 00:20:18.025 "nvme_iov_md": false 00:20:18.025 }, 00:20:18.025 "memory_domains": [ 00:20:18.025 { 00:20:18.025 "dma_device_id": 
"system", 00:20:18.025 "dma_device_type": 1 00:20:18.025 }, 00:20:18.025 { 00:20:18.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:18.025 "dma_device_type": 2 00:20:18.025 } 00:20:18.025 ], 00:20:18.025 "driver_specific": {} 00:20:18.025 }' 00:20:18.025 11:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:18.284 11:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:18.284 11:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:18.284 11:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:18.284 11:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:18.284 11:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:18.284 11:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:18.284 11:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:18.284 11:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:18.284 11:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:18.284 11:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:18.543 11:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:18.543 11:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:18.543 11:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:18.543 11:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:18.802 11:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
base_bdev_info='{ 00:20:18.802 "name": "BaseBdev2", 00:20:18.802 "aliases": [ 00:20:18.802 "68af54d9-add2-4f21-b489-042fc91e1166" 00:20:18.802 ], 00:20:18.802 "product_name": "Malloc disk", 00:20:18.802 "block_size": 512, 00:20:18.802 "num_blocks": 65536, 00:20:18.802 "uuid": "68af54d9-add2-4f21-b489-042fc91e1166", 00:20:18.802 "assigned_rate_limits": { 00:20:18.802 "rw_ios_per_sec": 0, 00:20:18.802 "rw_mbytes_per_sec": 0, 00:20:18.802 "r_mbytes_per_sec": 0, 00:20:18.802 "w_mbytes_per_sec": 0 00:20:18.802 }, 00:20:18.802 "claimed": true, 00:20:18.802 "claim_type": "exclusive_write", 00:20:18.802 "zoned": false, 00:20:18.802 "supported_io_types": { 00:20:18.802 "read": true, 00:20:18.802 "write": true, 00:20:18.802 "unmap": true, 00:20:18.802 "flush": true, 00:20:18.802 "reset": true, 00:20:18.802 "nvme_admin": false, 00:20:18.802 "nvme_io": false, 00:20:18.802 "nvme_io_md": false, 00:20:18.802 "write_zeroes": true, 00:20:18.802 "zcopy": true, 00:20:18.802 "get_zone_info": false, 00:20:18.802 "zone_management": false, 00:20:18.802 "zone_append": false, 00:20:18.802 "compare": false, 00:20:18.802 "compare_and_write": false, 00:20:18.802 "abort": true, 00:20:18.802 "seek_hole": false, 00:20:18.802 "seek_data": false, 00:20:18.802 "copy": true, 00:20:18.802 "nvme_iov_md": false 00:20:18.802 }, 00:20:18.802 "memory_domains": [ 00:20:18.802 { 00:20:18.802 "dma_device_id": "system", 00:20:18.802 "dma_device_type": 1 00:20:18.802 }, 00:20:18.802 { 00:20:18.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:18.802 "dma_device_type": 2 00:20:18.802 } 00:20:18.802 ], 00:20:18.802 "driver_specific": {} 00:20:18.802 }' 00:20:18.802 11:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:18.802 11:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:18.802 11:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:18.802 11:02:25 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:18.802 11:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:18.802 11:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:18.802 11:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:18.802 11:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:19.060 11:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:19.060 11:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:19.060 11:02:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:19.060 11:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:19.060 11:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:19.060 11:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:19.060 11:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:19.319 11:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:19.319 "name": "BaseBdev3", 00:20:19.319 "aliases": [ 00:20:19.319 "13863649-07dd-4b7a-ba09-35df0d341a54" 00:20:19.319 ], 00:20:19.319 "product_name": "Malloc disk", 00:20:19.319 "block_size": 512, 00:20:19.319 "num_blocks": 65536, 00:20:19.319 "uuid": "13863649-07dd-4b7a-ba09-35df0d341a54", 00:20:19.319 "assigned_rate_limits": { 00:20:19.319 "rw_ios_per_sec": 0, 00:20:19.319 "rw_mbytes_per_sec": 0, 00:20:19.319 "r_mbytes_per_sec": 0, 00:20:19.319 "w_mbytes_per_sec": 0 00:20:19.319 }, 00:20:19.319 "claimed": true, 00:20:19.319 "claim_type": "exclusive_write", 00:20:19.319 "zoned": false, 
00:20:19.319 "supported_io_types": { 00:20:19.319 "read": true, 00:20:19.319 "write": true, 00:20:19.319 "unmap": true, 00:20:19.319 "flush": true, 00:20:19.319 "reset": true, 00:20:19.319 "nvme_admin": false, 00:20:19.319 "nvme_io": false, 00:20:19.319 "nvme_io_md": false, 00:20:19.319 "write_zeroes": true, 00:20:19.319 "zcopy": true, 00:20:19.319 "get_zone_info": false, 00:20:19.319 "zone_management": false, 00:20:19.319 "zone_append": false, 00:20:19.319 "compare": false, 00:20:19.319 "compare_and_write": false, 00:20:19.319 "abort": true, 00:20:19.319 "seek_hole": false, 00:20:19.319 "seek_data": false, 00:20:19.319 "copy": true, 00:20:19.319 "nvme_iov_md": false 00:20:19.319 }, 00:20:19.319 "memory_domains": [ 00:20:19.319 { 00:20:19.319 "dma_device_id": "system", 00:20:19.319 "dma_device_type": 1 00:20:19.319 }, 00:20:19.319 { 00:20:19.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:19.319 "dma_device_type": 2 00:20:19.319 } 00:20:19.319 ], 00:20:19.319 "driver_specific": {} 00:20:19.319 }' 00:20:19.319 11:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:19.319 11:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:19.319 11:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:19.319 11:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:19.319 11:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:19.319 11:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:19.319 11:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:19.578 11:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:19.578 11:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:19.578 11:02:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:19.578 11:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:19.578 11:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:19.578 11:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:19.838 [2024-07-25 11:02:26.799281] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:19.838 11:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:20:19.838 11:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:20:19.838 11:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:19.838 11:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:20:19.838 11:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:20:19.838 11:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:19.838 11:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:19.838 11:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:19.838 11:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:19.838 11:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:19.838 11:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:19.838 11:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:19.838 11:02:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:19.838 11:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:19.838 11:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:19.838 11:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.838 11:02:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:20.097 11:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:20.097 "name": "Existed_Raid", 00:20:20.097 "uuid": "feea80da-de4b-4e57-bae7-d3cb20368b7d", 00:20:20.097 "strip_size_kb": 0, 00:20:20.097 "state": "online", 00:20:20.097 "raid_level": "raid1", 00:20:20.097 "superblock": false, 00:20:20.097 "num_base_bdevs": 3, 00:20:20.097 "num_base_bdevs_discovered": 2, 00:20:20.097 "num_base_bdevs_operational": 2, 00:20:20.097 "base_bdevs_list": [ 00:20:20.097 { 00:20:20.097 "name": null, 00:20:20.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:20.097 "is_configured": false, 00:20:20.097 "data_offset": 0, 00:20:20.097 "data_size": 65536 00:20:20.097 }, 00:20:20.097 { 00:20:20.097 "name": "BaseBdev2", 00:20:20.097 "uuid": "68af54d9-add2-4f21-b489-042fc91e1166", 00:20:20.097 "is_configured": true, 00:20:20.097 "data_offset": 0, 00:20:20.097 "data_size": 65536 00:20:20.097 }, 00:20:20.097 { 00:20:20.097 "name": "BaseBdev3", 00:20:20.097 "uuid": "13863649-07dd-4b7a-ba09-35df0d341a54", 00:20:20.097 "is_configured": true, 00:20:20.097 "data_offset": 0, 00:20:20.097 "data_size": 65536 00:20:20.097 } 00:20:20.097 ] 00:20:20.097 }' 00:20:20.097 11:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:20.097 11:02:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:20:20.665 11:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:20:20.665 11:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:20.665 11:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.665 11:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:20.924 11:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:20.924 11:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:20.924 11:02:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:21.183 [2024-07-25 11:02:28.101919] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:21.183 11:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:21.183 11:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:21.183 11:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:21.183 11:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:21.442 11:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:21.442 11:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:21.442 11:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev3 00:20:21.701 [2024-07-25 11:02:28.689110] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:21.701 [2024-07-25 11:02:28.689227] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:21.960 [2024-07-25 11:02:28.827802] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:21.960 [2024-07-25 11:02:28.827856] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:21.960 [2024-07-25 11:02:28.827875] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:21.960 11:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:21.960 11:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:21.960 11:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:21.960 11:02:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:20:21.960 11:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:20:21.960 11:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:20:21.960 11:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:20:21.960 11:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:20:21.960 11:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:21.960 11:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:22.219 BaseBdev2 00:20:22.478 
11:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:20:22.478 11:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:20:22.478 11:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:22.478 11:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:22.478 11:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:22.478 11:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:22.478 11:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:22.478 11:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:22.736 [ 00:20:22.736 { 00:20:22.736 "name": "BaseBdev2", 00:20:22.736 "aliases": [ 00:20:22.736 "bb6ebb04-1b1d-4bc6-8513-233481088487" 00:20:22.736 ], 00:20:22.736 "product_name": "Malloc disk", 00:20:22.736 "block_size": 512, 00:20:22.736 "num_blocks": 65536, 00:20:22.737 "uuid": "bb6ebb04-1b1d-4bc6-8513-233481088487", 00:20:22.737 "assigned_rate_limits": { 00:20:22.737 "rw_ios_per_sec": 0, 00:20:22.737 "rw_mbytes_per_sec": 0, 00:20:22.737 "r_mbytes_per_sec": 0, 00:20:22.737 "w_mbytes_per_sec": 0 00:20:22.737 }, 00:20:22.737 "claimed": false, 00:20:22.737 "zoned": false, 00:20:22.737 "supported_io_types": { 00:20:22.737 "read": true, 00:20:22.737 "write": true, 00:20:22.737 "unmap": true, 00:20:22.737 "flush": true, 00:20:22.737 "reset": true, 00:20:22.737 "nvme_admin": false, 00:20:22.737 "nvme_io": false, 00:20:22.737 "nvme_io_md": false, 00:20:22.737 "write_zeroes": true, 00:20:22.737 
"zcopy": true, 00:20:22.737 "get_zone_info": false, 00:20:22.737 "zone_management": false, 00:20:22.737 "zone_append": false, 00:20:22.737 "compare": false, 00:20:22.737 "compare_and_write": false, 00:20:22.737 "abort": true, 00:20:22.737 "seek_hole": false, 00:20:22.737 "seek_data": false, 00:20:22.737 "copy": true, 00:20:22.737 "nvme_iov_md": false 00:20:22.737 }, 00:20:22.737 "memory_domains": [ 00:20:22.737 { 00:20:22.737 "dma_device_id": "system", 00:20:22.737 "dma_device_type": 1 00:20:22.737 }, 00:20:22.737 { 00:20:22.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:22.737 "dma_device_type": 2 00:20:22.737 } 00:20:22.737 ], 00:20:22.737 "driver_specific": {} 00:20:22.737 } 00:20:22.737 ] 00:20:22.737 11:02:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:22.737 11:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:22.737 11:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:22.737 11:02:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:22.996 BaseBdev3 00:20:22.996 11:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:20:22.996 11:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:20:22.996 11:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:22.996 11:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:22.996 11:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:22.996 11:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:22.996 11:02:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:23.255 11:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:23.514 [ 00:20:23.514 { 00:20:23.514 "name": "BaseBdev3", 00:20:23.514 "aliases": [ 00:20:23.514 "147f05ac-53fc-49cb-997b-f0bce0be8c80" 00:20:23.514 ], 00:20:23.514 "product_name": "Malloc disk", 00:20:23.514 "block_size": 512, 00:20:23.514 "num_blocks": 65536, 00:20:23.514 "uuid": "147f05ac-53fc-49cb-997b-f0bce0be8c80", 00:20:23.514 "assigned_rate_limits": { 00:20:23.514 "rw_ios_per_sec": 0, 00:20:23.514 "rw_mbytes_per_sec": 0, 00:20:23.514 "r_mbytes_per_sec": 0, 00:20:23.514 "w_mbytes_per_sec": 0 00:20:23.514 }, 00:20:23.514 "claimed": false, 00:20:23.514 "zoned": false, 00:20:23.514 "supported_io_types": { 00:20:23.514 "read": true, 00:20:23.514 "write": true, 00:20:23.514 "unmap": true, 00:20:23.514 "flush": true, 00:20:23.514 "reset": true, 00:20:23.514 "nvme_admin": false, 00:20:23.514 "nvme_io": false, 00:20:23.514 "nvme_io_md": false, 00:20:23.514 "write_zeroes": true, 00:20:23.514 "zcopy": true, 00:20:23.514 "get_zone_info": false, 00:20:23.514 "zone_management": false, 00:20:23.514 "zone_append": false, 00:20:23.514 "compare": false, 00:20:23.514 "compare_and_write": false, 00:20:23.514 "abort": true, 00:20:23.514 "seek_hole": false, 00:20:23.514 "seek_data": false, 00:20:23.514 "copy": true, 00:20:23.514 "nvme_iov_md": false 00:20:23.514 }, 00:20:23.514 "memory_domains": [ 00:20:23.514 { 00:20:23.514 "dma_device_id": "system", 00:20:23.514 "dma_device_type": 1 00:20:23.514 }, 00:20:23.514 { 00:20:23.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:23.514 "dma_device_type": 2 00:20:23.514 } 00:20:23.514 ], 00:20:23.514 "driver_specific": {} 00:20:23.514 } 00:20:23.514 ] 00:20:23.514 
11:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:23.514 11:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:23.514 11:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:23.514 11:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:23.773 [2024-07-25 11:02:30.743085] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:23.773 [2024-07-25 11:02:30.743148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:23.773 [2024-07-25 11:02:30.743179] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:23.774 [2024-07-25 11:02:30.745492] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:23.774 11:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:23.774 11:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:23.774 11:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:23.774 11:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:23.774 11:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:23.774 11:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:23.774 11:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:23.774 11:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:20:23.774 11:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:23.774 11:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:23.774 11:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:23.774 11:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:24.033 11:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:24.033 "name": "Existed_Raid", 00:20:24.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.033 "strip_size_kb": 0, 00:20:24.033 "state": "configuring", 00:20:24.033 "raid_level": "raid1", 00:20:24.033 "superblock": false, 00:20:24.033 "num_base_bdevs": 3, 00:20:24.033 "num_base_bdevs_discovered": 2, 00:20:24.033 "num_base_bdevs_operational": 3, 00:20:24.033 "base_bdevs_list": [ 00:20:24.033 { 00:20:24.033 "name": "BaseBdev1", 00:20:24.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.033 "is_configured": false, 00:20:24.033 "data_offset": 0, 00:20:24.033 "data_size": 0 00:20:24.033 }, 00:20:24.033 { 00:20:24.033 "name": "BaseBdev2", 00:20:24.033 "uuid": "bb6ebb04-1b1d-4bc6-8513-233481088487", 00:20:24.033 "is_configured": true, 00:20:24.033 "data_offset": 0, 00:20:24.033 "data_size": 65536 00:20:24.033 }, 00:20:24.033 { 00:20:24.033 "name": "BaseBdev3", 00:20:24.033 "uuid": "147f05ac-53fc-49cb-997b-f0bce0be8c80", 00:20:24.033 "is_configured": true, 00:20:24.033 "data_offset": 0, 00:20:24.033 "data_size": 65536 00:20:24.033 } 00:20:24.033 ] 00:20:24.033 }' 00:20:24.033 11:02:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:24.033 11:02:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:24.602 11:02:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:24.861 [2024-07-25 11:02:31.761851] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:24.861 11:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:24.861 11:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:24.861 11:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:24.861 11:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:24.861 11:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:24.861 11:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:24.861 11:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:24.861 11:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:24.861 11:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:24.861 11:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:24.861 11:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:24.861 11:02:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:25.120 11:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:25.120 "name": "Existed_Raid", 00:20:25.120 "uuid": "00000000-0000-0000-0000-000000000000", 
00:20:25.120 "strip_size_kb": 0, 00:20:25.120 "state": "configuring", 00:20:25.120 "raid_level": "raid1", 00:20:25.120 "superblock": false, 00:20:25.120 "num_base_bdevs": 3, 00:20:25.120 "num_base_bdevs_discovered": 1, 00:20:25.120 "num_base_bdevs_operational": 3, 00:20:25.120 "base_bdevs_list": [ 00:20:25.120 { 00:20:25.120 "name": "BaseBdev1", 00:20:25.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.120 "is_configured": false, 00:20:25.120 "data_offset": 0, 00:20:25.120 "data_size": 0 00:20:25.120 }, 00:20:25.120 { 00:20:25.120 "name": null, 00:20:25.120 "uuid": "bb6ebb04-1b1d-4bc6-8513-233481088487", 00:20:25.120 "is_configured": false, 00:20:25.120 "data_offset": 0, 00:20:25.120 "data_size": 65536 00:20:25.120 }, 00:20:25.120 { 00:20:25.120 "name": "BaseBdev3", 00:20:25.120 "uuid": "147f05ac-53fc-49cb-997b-f0bce0be8c80", 00:20:25.120 "is_configured": true, 00:20:25.120 "data_offset": 0, 00:20:25.120 "data_size": 65536 00:20:25.120 } 00:20:25.120 ] 00:20:25.120 }' 00:20:25.120 11:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:25.120 11:02:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.690 11:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:25.690 11:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.948 11:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:20:25.948 11:02:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:26.207 [2024-07-25 11:02:33.083392] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:26.207 
BaseBdev1 00:20:26.207 11:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:20:26.207 11:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:20:26.207 11:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:26.207 11:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:26.207 11:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:26.207 11:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:26.207 11:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:26.464 11:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:26.464 [ 00:20:26.464 { 00:20:26.464 "name": "BaseBdev1", 00:20:26.464 "aliases": [ 00:20:26.464 "5c7b15f3-42bf-4b07-a7b9-fe48768b8a0f" 00:20:26.464 ], 00:20:26.464 "product_name": "Malloc disk", 00:20:26.464 "block_size": 512, 00:20:26.464 "num_blocks": 65536, 00:20:26.464 "uuid": "5c7b15f3-42bf-4b07-a7b9-fe48768b8a0f", 00:20:26.464 "assigned_rate_limits": { 00:20:26.464 "rw_ios_per_sec": 0, 00:20:26.464 "rw_mbytes_per_sec": 0, 00:20:26.464 "r_mbytes_per_sec": 0, 00:20:26.464 "w_mbytes_per_sec": 0 00:20:26.464 }, 00:20:26.464 "claimed": true, 00:20:26.464 "claim_type": "exclusive_write", 00:20:26.464 "zoned": false, 00:20:26.464 "supported_io_types": { 00:20:26.464 "read": true, 00:20:26.464 "write": true, 00:20:26.464 "unmap": true, 00:20:26.464 "flush": true, 00:20:26.464 "reset": true, 00:20:26.464 "nvme_admin": false, 00:20:26.464 "nvme_io": false, 00:20:26.464 
"nvme_io_md": false, 00:20:26.464 "write_zeroes": true, 00:20:26.464 "zcopy": true, 00:20:26.464 "get_zone_info": false, 00:20:26.464 "zone_management": false, 00:20:26.464 "zone_append": false, 00:20:26.464 "compare": false, 00:20:26.464 "compare_and_write": false, 00:20:26.464 "abort": true, 00:20:26.464 "seek_hole": false, 00:20:26.464 "seek_data": false, 00:20:26.464 "copy": true, 00:20:26.464 "nvme_iov_md": false 00:20:26.464 }, 00:20:26.464 "memory_domains": [ 00:20:26.464 { 00:20:26.464 "dma_device_id": "system", 00:20:26.464 "dma_device_type": 1 00:20:26.464 }, 00:20:26.464 { 00:20:26.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:26.464 "dma_device_type": 2 00:20:26.464 } 00:20:26.464 ], 00:20:26.464 "driver_specific": {} 00:20:26.464 } 00:20:26.464 ] 00:20:26.464 11:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:26.464 11:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:26.464 11:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:26.464 11:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:26.464 11:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:26.464 11:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:26.464 11:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:26.464 11:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:26.464 11:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:26.464 11:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:26.464 11:02:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:20:26.464 11:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:26.464 11:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:26.722 11:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:26.722 "name": "Existed_Raid", 00:20:26.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.722 "strip_size_kb": 0, 00:20:26.722 "state": "configuring", 00:20:26.722 "raid_level": "raid1", 00:20:26.722 "superblock": false, 00:20:26.722 "num_base_bdevs": 3, 00:20:26.722 "num_base_bdevs_discovered": 2, 00:20:26.722 "num_base_bdevs_operational": 3, 00:20:26.722 "base_bdevs_list": [ 00:20:26.722 { 00:20:26.722 "name": "BaseBdev1", 00:20:26.722 "uuid": "5c7b15f3-42bf-4b07-a7b9-fe48768b8a0f", 00:20:26.722 "is_configured": true, 00:20:26.722 "data_offset": 0, 00:20:26.722 "data_size": 65536 00:20:26.722 }, 00:20:26.722 { 00:20:26.722 "name": null, 00:20:26.722 "uuid": "bb6ebb04-1b1d-4bc6-8513-233481088487", 00:20:26.722 "is_configured": false, 00:20:26.722 "data_offset": 0, 00:20:26.722 "data_size": 65536 00:20:26.722 }, 00:20:26.722 { 00:20:26.722 "name": "BaseBdev3", 00:20:26.722 "uuid": "147f05ac-53fc-49cb-997b-f0bce0be8c80", 00:20:26.722 "is_configured": true, 00:20:26.722 "data_offset": 0, 00:20:26.722 "data_size": 65536 00:20:26.722 } 00:20:26.722 ] 00:20:26.722 }' 00:20:26.722 11:02:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:26.722 11:02:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:27.305 11:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.305 11:02:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:27.576 11:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:20:27.576 11:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:20:27.835 [2024-07-25 11:02:34.824203] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:27.835 11:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:27.835 11:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:27.835 11:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:27.835 11:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:27.835 11:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:27.835 11:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:27.835 11:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:27.835 11:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:27.835 11:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:27.835 11:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:27.835 11:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:27.835 11:02:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:20:28.094 11:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:28.094 "name": "Existed_Raid", 00:20:28.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.094 "strip_size_kb": 0, 00:20:28.094 "state": "configuring", 00:20:28.094 "raid_level": "raid1", 00:20:28.094 "superblock": false, 00:20:28.094 "num_base_bdevs": 3, 00:20:28.094 "num_base_bdevs_discovered": 1, 00:20:28.094 "num_base_bdevs_operational": 3, 00:20:28.094 "base_bdevs_list": [ 00:20:28.094 { 00:20:28.094 "name": "BaseBdev1", 00:20:28.094 "uuid": "5c7b15f3-42bf-4b07-a7b9-fe48768b8a0f", 00:20:28.094 "is_configured": true, 00:20:28.094 "data_offset": 0, 00:20:28.094 "data_size": 65536 00:20:28.094 }, 00:20:28.094 { 00:20:28.094 "name": null, 00:20:28.094 "uuid": "bb6ebb04-1b1d-4bc6-8513-233481088487", 00:20:28.094 "is_configured": false, 00:20:28.094 "data_offset": 0, 00:20:28.094 "data_size": 65536 00:20:28.094 }, 00:20:28.094 { 00:20:28.094 "name": null, 00:20:28.094 "uuid": "147f05ac-53fc-49cb-997b-f0bce0be8c80", 00:20:28.094 "is_configured": false, 00:20:28.094 "data_offset": 0, 00:20:28.094 "data_size": 65536 00:20:28.094 } 00:20:28.094 ] 00:20:28.094 }' 00:20:28.094 11:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:28.094 11:02:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:28.662 11:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:28.662 11:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:28.922 11:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:20:28.922 11:02:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:29.181 [2024-07-25 11:02:36.075700] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:29.181 11:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:29.181 11:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:29.181 11:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:29.181 11:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:29.181 11:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:29.181 11:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:29.181 11:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:29.181 11:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:29.181 11:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:29.181 11:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:29.181 11:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.181 11:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:29.440 11:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:29.440 "name": "Existed_Raid", 00:20:29.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.440 "strip_size_kb": 0, 
00:20:29.440 "state": "configuring", 00:20:29.440 "raid_level": "raid1", 00:20:29.440 "superblock": false, 00:20:29.440 "num_base_bdevs": 3, 00:20:29.440 "num_base_bdevs_discovered": 2, 00:20:29.440 "num_base_bdevs_operational": 3, 00:20:29.440 "base_bdevs_list": [ 00:20:29.440 { 00:20:29.440 "name": "BaseBdev1", 00:20:29.440 "uuid": "5c7b15f3-42bf-4b07-a7b9-fe48768b8a0f", 00:20:29.441 "is_configured": true, 00:20:29.441 "data_offset": 0, 00:20:29.441 "data_size": 65536 00:20:29.441 }, 00:20:29.441 { 00:20:29.441 "name": null, 00:20:29.441 "uuid": "bb6ebb04-1b1d-4bc6-8513-233481088487", 00:20:29.441 "is_configured": false, 00:20:29.441 "data_offset": 0, 00:20:29.441 "data_size": 65536 00:20:29.441 }, 00:20:29.441 { 00:20:29.441 "name": "BaseBdev3", 00:20:29.441 "uuid": "147f05ac-53fc-49cb-997b-f0bce0be8c80", 00:20:29.441 "is_configured": true, 00:20:29.441 "data_offset": 0, 00:20:29.441 "data_size": 65536 00:20:29.441 } 00:20:29.441 ] 00:20:29.441 }' 00:20:29.441 11:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:29.441 11:02:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:30.009 11:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.009 11:02:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:30.009 11:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:20:30.009 11:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:30.267 [2024-07-25 11:02:37.331110] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:30.526 11:02:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:30.526 11:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:30.526 11:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:30.526 11:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:30.526 11:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:30.526 11:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:30.526 11:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:30.526 11:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:30.526 11:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:30.526 11:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:30.526 11:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.526 11:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:30.785 11:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:30.785 "name": "Existed_Raid", 00:20:30.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.785 "strip_size_kb": 0, 00:20:30.785 "state": "configuring", 00:20:30.785 "raid_level": "raid1", 00:20:30.785 "superblock": false, 00:20:30.785 "num_base_bdevs": 3, 00:20:30.785 "num_base_bdevs_discovered": 1, 00:20:30.785 "num_base_bdevs_operational": 3, 00:20:30.785 "base_bdevs_list": [ 00:20:30.785 { 00:20:30.785 "name": null, 00:20:30.785 "uuid": 
"5c7b15f3-42bf-4b07-a7b9-fe48768b8a0f", 00:20:30.785 "is_configured": false, 00:20:30.785 "data_offset": 0, 00:20:30.785 "data_size": 65536 00:20:30.785 }, 00:20:30.785 { 00:20:30.785 "name": null, 00:20:30.785 "uuid": "bb6ebb04-1b1d-4bc6-8513-233481088487", 00:20:30.785 "is_configured": false, 00:20:30.785 "data_offset": 0, 00:20:30.785 "data_size": 65536 00:20:30.785 }, 00:20:30.785 { 00:20:30.785 "name": "BaseBdev3", 00:20:30.785 "uuid": "147f05ac-53fc-49cb-997b-f0bce0be8c80", 00:20:30.785 "is_configured": true, 00:20:30.785 "data_offset": 0, 00:20:30.785 "data_size": 65536 00:20:30.785 } 00:20:30.785 ] 00:20:30.785 }' 00:20:30.785 11:02:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:30.785 11:02:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:31.353 11:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.353 11:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:31.612 11:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:20:31.612 11:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:31.612 [2024-07-25 11:02:38.705571] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:31.612 11:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:31.612 11:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:31.612 11:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:20:31.612 11:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:31.612 11:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:31.612 11:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:31.612 11:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:31.612 11:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:31.612 11:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:31.612 11:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:31.612 11:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.612 11:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:31.871 11:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:31.871 "name": "Existed_Raid", 00:20:31.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.871 "strip_size_kb": 0, 00:20:31.871 "state": "configuring", 00:20:31.871 "raid_level": "raid1", 00:20:31.871 "superblock": false, 00:20:31.871 "num_base_bdevs": 3, 00:20:31.871 "num_base_bdevs_discovered": 2, 00:20:31.871 "num_base_bdevs_operational": 3, 00:20:31.871 "base_bdevs_list": [ 00:20:31.871 { 00:20:31.871 "name": null, 00:20:31.871 "uuid": "5c7b15f3-42bf-4b07-a7b9-fe48768b8a0f", 00:20:31.871 "is_configured": false, 00:20:31.871 "data_offset": 0, 00:20:31.871 "data_size": 65536 00:20:31.871 }, 00:20:31.871 { 00:20:31.871 "name": "BaseBdev2", 00:20:31.871 "uuid": "bb6ebb04-1b1d-4bc6-8513-233481088487", 00:20:31.871 "is_configured": true, 
00:20:31.871 "data_offset": 0, 00:20:31.871 "data_size": 65536 00:20:31.871 }, 00:20:31.871 { 00:20:31.871 "name": "BaseBdev3", 00:20:31.871 "uuid": "147f05ac-53fc-49cb-997b-f0bce0be8c80", 00:20:31.871 "is_configured": true, 00:20:31.871 "data_offset": 0, 00:20:31.871 "data_size": 65536 00:20:31.871 } 00:20:31.871 ] 00:20:31.871 }' 00:20:31.871 11:02:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:31.871 11:02:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.440 11:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:32.440 11:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.700 11:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:20:32.700 11:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.700 11:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:32.958 11:02:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 5c7b15f3-42bf-4b07-a7b9-fe48768b8a0f 00:20:33.218 [2024-07-25 11:02:40.251615] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:33.218 [2024-07-25 11:02:40.251670] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:33.218 [2024-07-25 11:02:40.251683] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:33.218 [2024-07-25 11:02:40.252015] bdev_raid.c: 
263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:20:33.218 [2024-07-25 11:02:40.252242] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:33.218 [2024-07-25 11:02:40.252262] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:20:33.218 [2024-07-25 11:02:40.252562] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:33.218 NewBaseBdev 00:20:33.218 11:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:20:33.218 11:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:20:33.218 11:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:33.218 11:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:20:33.218 11:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:33.218 11:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:33.218 11:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:33.477 11:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:33.737 [ 00:20:33.737 { 00:20:33.737 "name": "NewBaseBdev", 00:20:33.737 "aliases": [ 00:20:33.737 "5c7b15f3-42bf-4b07-a7b9-fe48768b8a0f" 00:20:33.737 ], 00:20:33.737 "product_name": "Malloc disk", 00:20:33.737 "block_size": 512, 00:20:33.737 "num_blocks": 65536, 00:20:33.737 "uuid": "5c7b15f3-42bf-4b07-a7b9-fe48768b8a0f", 00:20:33.737 "assigned_rate_limits": { 00:20:33.737 "rw_ios_per_sec": 
0, 00:20:33.737 "rw_mbytes_per_sec": 0, 00:20:33.737 "r_mbytes_per_sec": 0, 00:20:33.737 "w_mbytes_per_sec": 0 00:20:33.737 }, 00:20:33.737 "claimed": true, 00:20:33.737 "claim_type": "exclusive_write", 00:20:33.737 "zoned": false, 00:20:33.737 "supported_io_types": { 00:20:33.737 "read": true, 00:20:33.737 "write": true, 00:20:33.737 "unmap": true, 00:20:33.737 "flush": true, 00:20:33.737 "reset": true, 00:20:33.737 "nvme_admin": false, 00:20:33.737 "nvme_io": false, 00:20:33.737 "nvme_io_md": false, 00:20:33.737 "write_zeroes": true, 00:20:33.737 "zcopy": true, 00:20:33.737 "get_zone_info": false, 00:20:33.737 "zone_management": false, 00:20:33.737 "zone_append": false, 00:20:33.737 "compare": false, 00:20:33.737 "compare_and_write": false, 00:20:33.737 "abort": true, 00:20:33.737 "seek_hole": false, 00:20:33.737 "seek_data": false, 00:20:33.737 "copy": true, 00:20:33.737 "nvme_iov_md": false 00:20:33.737 }, 00:20:33.737 "memory_domains": [ 00:20:33.737 { 00:20:33.737 "dma_device_id": "system", 00:20:33.737 "dma_device_type": 1 00:20:33.737 }, 00:20:33.737 { 00:20:33.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:33.737 "dma_device_type": 2 00:20:33.737 } 00:20:33.737 ], 00:20:33.737 "driver_specific": {} 00:20:33.737 } 00:20:33.737 ] 00:20:33.737 11:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:20:33.737 11:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:33.737 11:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:33.737 11:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:33.737 11:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:33.737 11:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:33.737 11:02:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:33.737 11:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:33.737 11:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:33.737 11:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:33.737 11:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:33.737 11:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:33.737 11:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:33.997 11:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:33.997 "name": "Existed_Raid", 00:20:33.997 "uuid": "dac5f718-dfbd-480e-b283-a2f983fa27a8", 00:20:33.997 "strip_size_kb": 0, 00:20:33.997 "state": "online", 00:20:33.997 "raid_level": "raid1", 00:20:33.997 "superblock": false, 00:20:33.997 "num_base_bdevs": 3, 00:20:33.997 "num_base_bdevs_discovered": 3, 00:20:33.997 "num_base_bdevs_operational": 3, 00:20:33.997 "base_bdevs_list": [ 00:20:33.997 { 00:20:33.997 "name": "NewBaseBdev", 00:20:33.997 "uuid": "5c7b15f3-42bf-4b07-a7b9-fe48768b8a0f", 00:20:33.997 "is_configured": true, 00:20:33.997 "data_offset": 0, 00:20:33.997 "data_size": 65536 00:20:33.997 }, 00:20:33.997 { 00:20:33.997 "name": "BaseBdev2", 00:20:33.997 "uuid": "bb6ebb04-1b1d-4bc6-8513-233481088487", 00:20:33.997 "is_configured": true, 00:20:33.997 "data_offset": 0, 00:20:33.997 "data_size": 65536 00:20:33.997 }, 00:20:33.997 { 00:20:33.997 "name": "BaseBdev3", 00:20:33.997 "uuid": "147f05ac-53fc-49cb-997b-f0bce0be8c80", 00:20:33.997 "is_configured": true, 00:20:33.997 "data_offset": 0, 
00:20:33.997 "data_size": 65536 00:20:33.997 } 00:20:33.997 ] 00:20:33.997 }' 00:20:33.997 11:02:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:33.997 11:02:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.565 11:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:20:34.565 11:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:34.565 11:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:34.565 11:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:34.565 11:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:34.565 11:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:34.565 11:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:34.565 11:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:34.825 [2024-07-25 11:02:41.712051] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:34.825 11:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:34.825 "name": "Existed_Raid", 00:20:34.825 "aliases": [ 00:20:34.825 "dac5f718-dfbd-480e-b283-a2f983fa27a8" 00:20:34.825 ], 00:20:34.825 "product_name": "Raid Volume", 00:20:34.825 "block_size": 512, 00:20:34.825 "num_blocks": 65536, 00:20:34.825 "uuid": "dac5f718-dfbd-480e-b283-a2f983fa27a8", 00:20:34.825 "assigned_rate_limits": { 00:20:34.825 "rw_ios_per_sec": 0, 00:20:34.825 "rw_mbytes_per_sec": 0, 00:20:34.825 "r_mbytes_per_sec": 0, 00:20:34.825 "w_mbytes_per_sec": 0 00:20:34.825 }, 00:20:34.825 
"claimed": false, 00:20:34.825 "zoned": false, 00:20:34.825 "supported_io_types": { 00:20:34.825 "read": true, 00:20:34.825 "write": true, 00:20:34.825 "unmap": false, 00:20:34.825 "flush": false, 00:20:34.825 "reset": true, 00:20:34.825 "nvme_admin": false, 00:20:34.825 "nvme_io": false, 00:20:34.825 "nvme_io_md": false, 00:20:34.825 "write_zeroes": true, 00:20:34.825 "zcopy": false, 00:20:34.825 "get_zone_info": false, 00:20:34.825 "zone_management": false, 00:20:34.825 "zone_append": false, 00:20:34.825 "compare": false, 00:20:34.825 "compare_and_write": false, 00:20:34.825 "abort": false, 00:20:34.825 "seek_hole": false, 00:20:34.825 "seek_data": false, 00:20:34.825 "copy": false, 00:20:34.825 "nvme_iov_md": false 00:20:34.825 }, 00:20:34.825 "memory_domains": [ 00:20:34.825 { 00:20:34.825 "dma_device_id": "system", 00:20:34.825 "dma_device_type": 1 00:20:34.825 }, 00:20:34.825 { 00:20:34.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:34.825 "dma_device_type": 2 00:20:34.825 }, 00:20:34.825 { 00:20:34.825 "dma_device_id": "system", 00:20:34.825 "dma_device_type": 1 00:20:34.825 }, 00:20:34.825 { 00:20:34.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:34.825 "dma_device_type": 2 00:20:34.825 }, 00:20:34.825 { 00:20:34.825 "dma_device_id": "system", 00:20:34.825 "dma_device_type": 1 00:20:34.825 }, 00:20:34.825 { 00:20:34.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:34.825 "dma_device_type": 2 00:20:34.825 } 00:20:34.825 ], 00:20:34.825 "driver_specific": { 00:20:34.825 "raid": { 00:20:34.825 "uuid": "dac5f718-dfbd-480e-b283-a2f983fa27a8", 00:20:34.825 "strip_size_kb": 0, 00:20:34.825 "state": "online", 00:20:34.825 "raid_level": "raid1", 00:20:34.825 "superblock": false, 00:20:34.825 "num_base_bdevs": 3, 00:20:34.825 "num_base_bdevs_discovered": 3, 00:20:34.825 "num_base_bdevs_operational": 3, 00:20:34.825 "base_bdevs_list": [ 00:20:34.825 { 00:20:34.825 "name": "NewBaseBdev", 00:20:34.825 "uuid": "5c7b15f3-42bf-4b07-a7b9-fe48768b8a0f", 
00:20:34.825 "is_configured": true, 00:20:34.825 "data_offset": 0, 00:20:34.825 "data_size": 65536 00:20:34.825 }, 00:20:34.825 { 00:20:34.825 "name": "BaseBdev2", 00:20:34.825 "uuid": "bb6ebb04-1b1d-4bc6-8513-233481088487", 00:20:34.825 "is_configured": true, 00:20:34.825 "data_offset": 0, 00:20:34.825 "data_size": 65536 00:20:34.825 }, 00:20:34.825 { 00:20:34.825 "name": "BaseBdev3", 00:20:34.825 "uuid": "147f05ac-53fc-49cb-997b-f0bce0be8c80", 00:20:34.825 "is_configured": true, 00:20:34.825 "data_offset": 0, 00:20:34.825 "data_size": 65536 00:20:34.826 } 00:20:34.826 ] 00:20:34.826 } 00:20:34.826 } 00:20:34.826 }' 00:20:34.826 11:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:34.826 11:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:20:34.826 BaseBdev2 00:20:34.826 BaseBdev3' 00:20:34.826 11:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:34.826 11:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:20:34.826 11:02:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:35.085 11:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:35.085 "name": "NewBaseBdev", 00:20:35.085 "aliases": [ 00:20:35.085 "5c7b15f3-42bf-4b07-a7b9-fe48768b8a0f" 00:20:35.085 ], 00:20:35.085 "product_name": "Malloc disk", 00:20:35.085 "block_size": 512, 00:20:35.085 "num_blocks": 65536, 00:20:35.085 "uuid": "5c7b15f3-42bf-4b07-a7b9-fe48768b8a0f", 00:20:35.085 "assigned_rate_limits": { 00:20:35.085 "rw_ios_per_sec": 0, 00:20:35.085 "rw_mbytes_per_sec": 0, 00:20:35.085 "r_mbytes_per_sec": 0, 00:20:35.085 "w_mbytes_per_sec": 0 00:20:35.085 }, 00:20:35.085 
"claimed": true, 00:20:35.085 "claim_type": "exclusive_write", 00:20:35.085 "zoned": false, 00:20:35.085 "supported_io_types": { 00:20:35.085 "read": true, 00:20:35.085 "write": true, 00:20:35.085 "unmap": true, 00:20:35.085 "flush": true, 00:20:35.085 "reset": true, 00:20:35.085 "nvme_admin": false, 00:20:35.085 "nvme_io": false, 00:20:35.085 "nvme_io_md": false, 00:20:35.085 "write_zeroes": true, 00:20:35.085 "zcopy": true, 00:20:35.085 "get_zone_info": false, 00:20:35.085 "zone_management": false, 00:20:35.085 "zone_append": false, 00:20:35.085 "compare": false, 00:20:35.085 "compare_and_write": false, 00:20:35.085 "abort": true, 00:20:35.085 "seek_hole": false, 00:20:35.085 "seek_data": false, 00:20:35.085 "copy": true, 00:20:35.085 "nvme_iov_md": false 00:20:35.085 }, 00:20:35.085 "memory_domains": [ 00:20:35.085 { 00:20:35.085 "dma_device_id": "system", 00:20:35.085 "dma_device_type": 1 00:20:35.085 }, 00:20:35.085 { 00:20:35.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:35.085 "dma_device_type": 2 00:20:35.085 } 00:20:35.085 ], 00:20:35.085 "driver_specific": {} 00:20:35.085 }' 00:20:35.085 11:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:35.085 11:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:35.085 11:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:35.085 11:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:35.085 11:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:35.085 11:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:35.085 11:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:35.345 11:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:35.345 11:02:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:35.345 11:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:35.345 11:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:35.345 11:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:35.345 11:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:35.345 11:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:35.345 11:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:35.606 11:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:35.606 "name": "BaseBdev2", 00:20:35.606 "aliases": [ 00:20:35.606 "bb6ebb04-1b1d-4bc6-8513-233481088487" 00:20:35.606 ], 00:20:35.606 "product_name": "Malloc disk", 00:20:35.606 "block_size": 512, 00:20:35.606 "num_blocks": 65536, 00:20:35.606 "uuid": "bb6ebb04-1b1d-4bc6-8513-233481088487", 00:20:35.606 "assigned_rate_limits": { 00:20:35.606 "rw_ios_per_sec": 0, 00:20:35.606 "rw_mbytes_per_sec": 0, 00:20:35.606 "r_mbytes_per_sec": 0, 00:20:35.606 "w_mbytes_per_sec": 0 00:20:35.606 }, 00:20:35.606 "claimed": true, 00:20:35.606 "claim_type": "exclusive_write", 00:20:35.606 "zoned": false, 00:20:35.606 "supported_io_types": { 00:20:35.606 "read": true, 00:20:35.606 "write": true, 00:20:35.606 "unmap": true, 00:20:35.606 "flush": true, 00:20:35.606 "reset": true, 00:20:35.606 "nvme_admin": false, 00:20:35.606 "nvme_io": false, 00:20:35.606 "nvme_io_md": false, 00:20:35.606 "write_zeroes": true, 00:20:35.606 "zcopy": true, 00:20:35.606 "get_zone_info": false, 00:20:35.606 "zone_management": false, 00:20:35.606 "zone_append": false, 00:20:35.606 "compare": false, 00:20:35.606 "compare_and_write": false, 
00:20:35.606 "abort": true, 00:20:35.606 "seek_hole": false, 00:20:35.606 "seek_data": false, 00:20:35.606 "copy": true, 00:20:35.606 "nvme_iov_md": false 00:20:35.606 }, 00:20:35.606 "memory_domains": [ 00:20:35.606 { 00:20:35.606 "dma_device_id": "system", 00:20:35.606 "dma_device_type": 1 00:20:35.606 }, 00:20:35.606 { 00:20:35.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:35.606 "dma_device_type": 2 00:20:35.606 } 00:20:35.606 ], 00:20:35.606 "driver_specific": {} 00:20:35.606 }' 00:20:35.606 11:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:35.606 11:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:35.606 11:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:35.606 11:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:35.606 11:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:35.866 11:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:35.866 11:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:35.866 11:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:35.866 11:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:35.866 11:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:35.866 11:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:35.866 11:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:35.867 11:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:35.867 11:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:35.867 11:02:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:36.126 11:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:36.126 "name": "BaseBdev3", 00:20:36.126 "aliases": [ 00:20:36.126 "147f05ac-53fc-49cb-997b-f0bce0be8c80" 00:20:36.126 ], 00:20:36.126 "product_name": "Malloc disk", 00:20:36.126 "block_size": 512, 00:20:36.126 "num_blocks": 65536, 00:20:36.126 "uuid": "147f05ac-53fc-49cb-997b-f0bce0be8c80", 00:20:36.126 "assigned_rate_limits": { 00:20:36.126 "rw_ios_per_sec": 0, 00:20:36.126 "rw_mbytes_per_sec": 0, 00:20:36.126 "r_mbytes_per_sec": 0, 00:20:36.126 "w_mbytes_per_sec": 0 00:20:36.126 }, 00:20:36.126 "claimed": true, 00:20:36.126 "claim_type": "exclusive_write", 00:20:36.126 "zoned": false, 00:20:36.126 "supported_io_types": { 00:20:36.126 "read": true, 00:20:36.126 "write": true, 00:20:36.126 "unmap": true, 00:20:36.126 "flush": true, 00:20:36.126 "reset": true, 00:20:36.126 "nvme_admin": false, 00:20:36.126 "nvme_io": false, 00:20:36.126 "nvme_io_md": false, 00:20:36.126 "write_zeroes": true, 00:20:36.126 "zcopy": true, 00:20:36.126 "get_zone_info": false, 00:20:36.126 "zone_management": false, 00:20:36.126 "zone_append": false, 00:20:36.126 "compare": false, 00:20:36.126 "compare_and_write": false, 00:20:36.126 "abort": true, 00:20:36.126 "seek_hole": false, 00:20:36.126 "seek_data": false, 00:20:36.126 "copy": true, 00:20:36.126 "nvme_iov_md": false 00:20:36.126 }, 00:20:36.126 "memory_domains": [ 00:20:36.126 { 00:20:36.126 "dma_device_id": "system", 00:20:36.126 "dma_device_type": 1 00:20:36.126 }, 00:20:36.126 { 00:20:36.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:36.126 "dma_device_type": 2 00:20:36.126 } 00:20:36.126 ], 00:20:36.126 "driver_specific": {} 00:20:36.126 }' 00:20:36.126 11:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:36.126 11:02:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:36.126 11:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:36.126 11:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:36.385 11:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:36.385 11:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:36.385 11:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:36.385 11:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:36.385 11:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:36.385 11:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:36.385 11:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:36.385 11:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:36.385 11:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:36.644 [2024-07-25 11:02:43.668928] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:36.644 [2024-07-25 11:02:43.668968] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:36.644 [2024-07-25 11:02:43.669060] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:36.644 [2024-07-25 11:02:43.669410] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:36.644 [2024-07-25 11:02:43.669428] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 
00:20:36.644 11:02:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 3615526 00:20:36.645 11:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 3615526 ']' 00:20:36.645 11:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 3615526 00:20:36.645 11:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:20:36.645 11:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:36.645 11:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3615526 00:20:36.645 11:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:36.645 11:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:36.645 11:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3615526' 00:20:36.645 killing process with pid 3615526 00:20:36.645 11:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 3615526 00:20:36.645 [2024-07-25 11:02:43.741472] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:36.645 11:02:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 3615526 00:20:37.213 [2024-07-25 11:02:44.059345] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:39.120 11:02:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:20:39.120 00:20:39.120 real 0m29.652s 00:20:39.120 user 0m51.734s 00:20:39.120 sys 0m5.217s 00:20:39.120 11:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:39.120 11:02:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.120 ************************************ 00:20:39.120 END TEST 
raid_state_function_test 00:20:39.120 ************************************ 00:20:39.120 11:02:45 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:20:39.120 11:02:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:20:39.120 11:02:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:39.120 11:02:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:39.120 ************************************ 00:20:39.120 START TEST raid_state_function_test_sb 00:20:39.120 ************************************ 00:20:39.120 11:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:20:39.120 11:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:20:39.120 11:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:20:39.120 11:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:20:39.120 11:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:20:39.120 11:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:20:39.120 11:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:39.120 11:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:20:39.120 11:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:39.120 11:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:39.121 11:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:20:39.121 11:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:39.121 11:02:45 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:39.121 11:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:20:39.121 11:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:39.121 11:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:39.121 11:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:39.121 11:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:20:39.121 11:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:20:39.121 11:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:20:39.121 11:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:20:39.121 11:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:20:39.121 11:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:20:39.121 11:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:20:39.121 11:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:20:39.121 11:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:20:39.121 11:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=3621137 00:20:39.121 11:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3621137' 00:20:39.121 Process raid pid: 3621137 00:20:39.121 11:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 
-L bdev_raid 00:20:39.121 11:02:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 3621137 /var/tmp/spdk-raid.sock 00:20:39.121 11:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 3621137 ']' 00:20:39.121 11:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:39.121 11:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:39.121 11:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:39.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:39.121 11:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:39.121 11:02:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:39.121 [2024-07-25 11:02:45.992204] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:20:39.121 [2024-07-25 11:02:45.992327] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:39.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:39.121 EAL: Requested device 0000:3d:01.0 cannot be used 00:20:39.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:39.121 EAL: Requested device 0000:3d:01.1 cannot be used 00:20:39.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:39.121 EAL: Requested device 0000:3d:01.2 cannot be used 00:20:39.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:39.121 EAL: Requested device 0000:3d:01.3 cannot be used 00:20:39.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:39.121 EAL: Requested device 0000:3d:01.4 cannot be used 00:20:39.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:39.121 EAL: Requested device 0000:3d:01.5 cannot be used 00:20:39.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:39.121 EAL: Requested device 0000:3d:01.6 cannot be used 00:20:39.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:39.121 EAL: Requested device 0000:3d:01.7 cannot be used 00:20:39.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:39.121 EAL: Requested device 0000:3d:02.0 cannot be used 00:20:39.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:39.121 EAL: Requested device 0000:3d:02.1 cannot be used 00:20:39.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:39.121 EAL: Requested device 0000:3d:02.2 cannot be used 00:20:39.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:39.121 EAL: Requested device 0000:3d:02.3 cannot be used 00:20:39.121 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:39.121 EAL: Requested device 0000:3d:02.4 cannot be used 00:20:39.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:39.121 EAL: Requested device 0000:3d:02.5 cannot be used 00:20:39.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:39.121 EAL: Requested device 0000:3d:02.6 cannot be used 00:20:39.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:39.121 EAL: Requested device 0000:3d:02.7 cannot be used 00:20:39.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:39.121 EAL: Requested device 0000:3f:01.0 cannot be used 00:20:39.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:39.121 EAL: Requested device 0000:3f:01.1 cannot be used 00:20:39.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:39.121 EAL: Requested device 0000:3f:01.2 cannot be used 00:20:39.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:39.121 EAL: Requested device 0000:3f:01.3 cannot be used 00:20:39.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:39.121 EAL: Requested device 0000:3f:01.4 cannot be used 00:20:39.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:39.121 EAL: Requested device 0000:3f:01.5 cannot be used 00:20:39.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:39.121 EAL: Requested device 0000:3f:01.6 cannot be used 00:20:39.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:39.121 EAL: Requested device 0000:3f:01.7 cannot be used 00:20:39.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:39.121 EAL: Requested device 0000:3f:02.0 cannot be used 00:20:39.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:39.121 EAL: Requested device 0000:3f:02.1 cannot be used 00:20:39.121 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:39.121 EAL: Requested device 0000:3f:02.2 cannot be used 00:20:39.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:39.121 EAL: Requested device 0000:3f:02.3 cannot be used 00:20:39.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:39.121 EAL: Requested device 0000:3f:02.4 cannot be used 00:20:39.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:39.121 EAL: Requested device 0000:3f:02.5 cannot be used 00:20:39.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:39.121 EAL: Requested device 0000:3f:02.6 cannot be used 00:20:39.121 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:20:39.121 EAL: Requested device 0000:3f:02.7 cannot be used 00:20:39.121 [2024-07-25 11:02:46.219368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.690 [2024-07-25 11:02:46.505037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.950 [2024-07-25 11:02:46.855383] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:39.950 [2024-07-25 11:02:46.855419] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:39.950 11:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:39.950 11:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:20:39.950 11:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:40.210 [2024-07-25 11:02:47.251683] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:40.210 [2024-07-25 11:02:47.251745] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 
doesn't exist now 00:20:40.210 [2024-07-25 11:02:47.251760] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:40.210 [2024-07-25 11:02:47.251776] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:40.210 [2024-07-25 11:02:47.251787] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:40.210 [2024-07-25 11:02:47.251803] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:40.210 11:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:40.210 11:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:40.210 11:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:40.210 11:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:40.210 11:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:40.210 11:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:40.210 11:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:40.210 11:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:40.210 11:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:40.210 11:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:40.210 11:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.210 11:02:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:40.469 11:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:40.469 "name": "Existed_Raid", 00:20:40.469 "uuid": "d1401698-f6c1-49c0-8399-afb07a550f31", 00:20:40.469 "strip_size_kb": 0, 00:20:40.469 "state": "configuring", 00:20:40.469 "raid_level": "raid1", 00:20:40.469 "superblock": true, 00:20:40.469 "num_base_bdevs": 3, 00:20:40.469 "num_base_bdevs_discovered": 0, 00:20:40.469 "num_base_bdevs_operational": 3, 00:20:40.469 "base_bdevs_list": [ 00:20:40.469 { 00:20:40.469 "name": "BaseBdev1", 00:20:40.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.469 "is_configured": false, 00:20:40.469 "data_offset": 0, 00:20:40.469 "data_size": 0 00:20:40.469 }, 00:20:40.469 { 00:20:40.469 "name": "BaseBdev2", 00:20:40.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.469 "is_configured": false, 00:20:40.469 "data_offset": 0, 00:20:40.469 "data_size": 0 00:20:40.469 }, 00:20:40.469 { 00:20:40.469 "name": "BaseBdev3", 00:20:40.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.469 "is_configured": false, 00:20:40.469 "data_offset": 0, 00:20:40.469 "data_size": 0 00:20:40.469 } 00:20:40.469 ] 00:20:40.469 }' 00:20:40.469 11:02:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:40.469 11:02:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:41.102 11:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:41.360 [2024-07-25 11:02:48.274280] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:41.360 [2024-07-25 11:02:48.274323] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:41.360 11:02:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:41.618 [2024-07-25 11:02:48.498934] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:41.618 [2024-07-25 11:02:48.498980] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:41.618 [2024-07-25 11:02:48.498993] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:41.618 [2024-07-25 11:02:48.499012] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:41.618 [2024-07-25 11:02:48.499024] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:41.618 [2024-07-25 11:02:48.499039] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:41.618 11:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:41.875 [2024-07-25 11:02:48.786003] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:41.875 BaseBdev1 00:20:41.875 11:02:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:20:41.875 11:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:20:41.875 11:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:41.875 11:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:41.875 11:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:41.875 11:02:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:41.875 11:02:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:42.134 11:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:42.134 [ 00:20:42.134 { 00:20:42.134 "name": "BaseBdev1", 00:20:42.134 "aliases": [ 00:20:42.134 "c99838e6-61b9-4497-bd9f-cfdd56bb7599" 00:20:42.134 ], 00:20:42.134 "product_name": "Malloc disk", 00:20:42.134 "block_size": 512, 00:20:42.134 "num_blocks": 65536, 00:20:42.134 "uuid": "c99838e6-61b9-4497-bd9f-cfdd56bb7599", 00:20:42.134 "assigned_rate_limits": { 00:20:42.134 "rw_ios_per_sec": 0, 00:20:42.134 "rw_mbytes_per_sec": 0, 00:20:42.134 "r_mbytes_per_sec": 0, 00:20:42.134 "w_mbytes_per_sec": 0 00:20:42.134 }, 00:20:42.134 "claimed": true, 00:20:42.134 "claim_type": "exclusive_write", 00:20:42.134 "zoned": false, 00:20:42.134 "supported_io_types": { 00:20:42.134 "read": true, 00:20:42.134 "write": true, 00:20:42.134 "unmap": true, 00:20:42.134 "flush": true, 00:20:42.134 "reset": true, 00:20:42.134 "nvme_admin": false, 00:20:42.134 "nvme_io": false, 00:20:42.134 "nvme_io_md": false, 00:20:42.134 "write_zeroes": true, 00:20:42.134 "zcopy": true, 00:20:42.134 "get_zone_info": false, 00:20:42.134 "zone_management": false, 00:20:42.134 "zone_append": false, 00:20:42.134 "compare": false, 00:20:42.134 "compare_and_write": false, 00:20:42.134 "abort": true, 00:20:42.134 "seek_hole": false, 00:20:42.134 "seek_data": false, 00:20:42.134 "copy": true, 00:20:42.134 "nvme_iov_md": false 00:20:42.134 }, 00:20:42.134 "memory_domains": [ 00:20:42.134 { 00:20:42.134 "dma_device_id": "system", 00:20:42.134 "dma_device_type": 1 00:20:42.134 }, 
00:20:42.134 { 00:20:42.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:42.134 "dma_device_type": 2 00:20:42.134 } 00:20:42.134 ], 00:20:42.134 "driver_specific": {} 00:20:42.134 } 00:20:42.134 ] 00:20:42.393 11:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:42.393 11:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:42.393 11:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:42.393 11:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:42.393 11:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:42.393 11:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:42.393 11:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:42.393 11:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:42.393 11:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:42.393 11:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:42.393 11:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:42.393 11:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.393 11:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:42.393 11:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:42.393 "name": "Existed_Raid", 00:20:42.393 "uuid": 
"19518353-9b9b-48fa-bc54-73d9bc6e6975", 00:20:42.393 "strip_size_kb": 0, 00:20:42.393 "state": "configuring", 00:20:42.393 "raid_level": "raid1", 00:20:42.393 "superblock": true, 00:20:42.393 "num_base_bdevs": 3, 00:20:42.393 "num_base_bdevs_discovered": 1, 00:20:42.393 "num_base_bdevs_operational": 3, 00:20:42.393 "base_bdevs_list": [ 00:20:42.393 { 00:20:42.393 "name": "BaseBdev1", 00:20:42.393 "uuid": "c99838e6-61b9-4497-bd9f-cfdd56bb7599", 00:20:42.393 "is_configured": true, 00:20:42.393 "data_offset": 2048, 00:20:42.393 "data_size": 63488 00:20:42.393 }, 00:20:42.393 { 00:20:42.393 "name": "BaseBdev2", 00:20:42.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.393 "is_configured": false, 00:20:42.393 "data_offset": 0, 00:20:42.393 "data_size": 0 00:20:42.393 }, 00:20:42.393 { 00:20:42.393 "name": "BaseBdev3", 00:20:42.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:42.393 "is_configured": false, 00:20:42.393 "data_offset": 0, 00:20:42.393 "data_size": 0 00:20:42.393 } 00:20:42.393 ] 00:20:42.393 }' 00:20:42.393 11:02:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:42.393 11:02:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.960 11:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:43.218 [2024-07-25 11:02:50.250017] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:43.218 [2024-07-25 11:02:50.250073] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:43.218 11:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 
00:20:43.478 [2024-07-25 11:02:50.478727] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:43.478 [2024-07-25 11:02:50.481037] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:43.478 [2024-07-25 11:02:50.481082] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:43.478 [2024-07-25 11:02:50.481096] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:43.478 [2024-07-25 11:02:50.481117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:43.478 11:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:20:43.478 11:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:43.478 11:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:43.478 11:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:43.478 11:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:43.478 11:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:43.478 11:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:43.478 11:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:43.478 11:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:43.478 11:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:43.478 11:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:43.478 11:02:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:43.478 11:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:43.478 11:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:43.737 11:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:43.737 "name": "Existed_Raid", 00:20:43.737 "uuid": "a4ec98d3-ac2f-4216-8dfa-6a263bcc4418", 00:20:43.737 "strip_size_kb": 0, 00:20:43.737 "state": "configuring", 00:20:43.737 "raid_level": "raid1", 00:20:43.737 "superblock": true, 00:20:43.737 "num_base_bdevs": 3, 00:20:43.737 "num_base_bdevs_discovered": 1, 00:20:43.737 "num_base_bdevs_operational": 3, 00:20:43.737 "base_bdevs_list": [ 00:20:43.737 { 00:20:43.737 "name": "BaseBdev1", 00:20:43.737 "uuid": "c99838e6-61b9-4497-bd9f-cfdd56bb7599", 00:20:43.737 "is_configured": true, 00:20:43.737 "data_offset": 2048, 00:20:43.737 "data_size": 63488 00:20:43.737 }, 00:20:43.737 { 00:20:43.737 "name": "BaseBdev2", 00:20:43.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.737 "is_configured": false, 00:20:43.737 "data_offset": 0, 00:20:43.737 "data_size": 0 00:20:43.737 }, 00:20:43.737 { 00:20:43.737 "name": "BaseBdev3", 00:20:43.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.737 "is_configured": false, 00:20:43.737 "data_offset": 0, 00:20:43.737 "data_size": 0 00:20:43.737 } 00:20:43.737 ] 00:20:43.737 }' 00:20:43.737 11:02:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:43.737 11:02:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:44.304 11:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:44.562 [2024-07-25 11:02:51.543075] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:44.562 BaseBdev2 00:20:44.562 11:02:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:20:44.562 11:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:20:44.562 11:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:44.562 11:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:44.562 11:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:44.562 11:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:44.562 11:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:44.820 11:02:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:45.078 [ 00:20:45.078 { 00:20:45.078 "name": "BaseBdev2", 00:20:45.078 "aliases": [ 00:20:45.078 "82343eec-320b-45e6-b564-b85dfb87f069" 00:20:45.078 ], 00:20:45.078 "product_name": "Malloc disk", 00:20:45.078 "block_size": 512, 00:20:45.078 "num_blocks": 65536, 00:20:45.078 "uuid": "82343eec-320b-45e6-b564-b85dfb87f069", 00:20:45.078 "assigned_rate_limits": { 00:20:45.078 "rw_ios_per_sec": 0, 00:20:45.078 "rw_mbytes_per_sec": 0, 00:20:45.078 "r_mbytes_per_sec": 0, 00:20:45.078 "w_mbytes_per_sec": 0 00:20:45.078 }, 00:20:45.078 "claimed": true, 00:20:45.078 "claim_type": "exclusive_write", 00:20:45.078 "zoned": false, 00:20:45.078 "supported_io_types": { 
00:20:45.078 "read": true, 00:20:45.078 "write": true, 00:20:45.078 "unmap": true, 00:20:45.078 "flush": true, 00:20:45.078 "reset": true, 00:20:45.078 "nvme_admin": false, 00:20:45.078 "nvme_io": false, 00:20:45.078 "nvme_io_md": false, 00:20:45.078 "write_zeroes": true, 00:20:45.078 "zcopy": true, 00:20:45.078 "get_zone_info": false, 00:20:45.078 "zone_management": false, 00:20:45.078 "zone_append": false, 00:20:45.078 "compare": false, 00:20:45.078 "compare_and_write": false, 00:20:45.078 "abort": true, 00:20:45.078 "seek_hole": false, 00:20:45.078 "seek_data": false, 00:20:45.078 "copy": true, 00:20:45.078 "nvme_iov_md": false 00:20:45.078 }, 00:20:45.078 "memory_domains": [ 00:20:45.078 { 00:20:45.078 "dma_device_id": "system", 00:20:45.078 "dma_device_type": 1 00:20:45.078 }, 00:20:45.078 { 00:20:45.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:45.078 "dma_device_type": 2 00:20:45.078 } 00:20:45.078 ], 00:20:45.078 "driver_specific": {} 00:20:45.078 } 00:20:45.078 ] 00:20:45.078 11:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:45.078 11:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:45.078 11:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:45.078 11:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:45.078 11:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:45.078 11:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:45.078 11:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:45.078 11:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:45.078 11:02:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:45.078 11:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:45.079 11:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:45.079 11:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:45.079 11:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:45.079 11:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:45.079 11:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:45.337 11:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:45.337 "name": "Existed_Raid", 00:20:45.337 "uuid": "a4ec98d3-ac2f-4216-8dfa-6a263bcc4418", 00:20:45.337 "strip_size_kb": 0, 00:20:45.337 "state": "configuring", 00:20:45.337 "raid_level": "raid1", 00:20:45.337 "superblock": true, 00:20:45.337 "num_base_bdevs": 3, 00:20:45.337 "num_base_bdevs_discovered": 2, 00:20:45.337 "num_base_bdevs_operational": 3, 00:20:45.337 "base_bdevs_list": [ 00:20:45.337 { 00:20:45.337 "name": "BaseBdev1", 00:20:45.337 "uuid": "c99838e6-61b9-4497-bd9f-cfdd56bb7599", 00:20:45.337 "is_configured": true, 00:20:45.337 "data_offset": 2048, 00:20:45.337 "data_size": 63488 00:20:45.337 }, 00:20:45.337 { 00:20:45.337 "name": "BaseBdev2", 00:20:45.337 "uuid": "82343eec-320b-45e6-b564-b85dfb87f069", 00:20:45.337 "is_configured": true, 00:20:45.337 "data_offset": 2048, 00:20:45.337 "data_size": 63488 00:20:45.337 }, 00:20:45.337 { 00:20:45.337 "name": "BaseBdev3", 00:20:45.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.337 "is_configured": false, 00:20:45.337 "data_offset": 0, 00:20:45.337 
"data_size": 0 00:20:45.337 } 00:20:45.337 ] 00:20:45.337 }' 00:20:45.337 11:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:45.337 11:02:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.903 11:02:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:46.161 [2024-07-25 11:02:53.084168] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:46.161 [2024-07-25 11:02:53.084437] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:46.161 [2024-07-25 11:02:53.084465] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:46.161 [2024-07-25 11:02:53.084781] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:20:46.161 [2024-07-25 11:02:53.085017] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:46.161 [2024-07-25 11:02:53.085033] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:46.161 [2024-07-25 11:02:53.085227] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:46.161 BaseBdev3 00:20:46.161 11:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:20:46.161 11:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:20:46.161 11:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:46.161 11:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:46.161 11:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:46.161 
11:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:46.161 11:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:46.418 11:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:46.675 [ 00:20:46.675 { 00:20:46.675 "name": "BaseBdev3", 00:20:46.675 "aliases": [ 00:20:46.675 "faf4bd55-83e2-49d8-9789-a62c4c3bdd0f" 00:20:46.675 ], 00:20:46.675 "product_name": "Malloc disk", 00:20:46.675 "block_size": 512, 00:20:46.675 "num_blocks": 65536, 00:20:46.675 "uuid": "faf4bd55-83e2-49d8-9789-a62c4c3bdd0f", 00:20:46.675 "assigned_rate_limits": { 00:20:46.675 "rw_ios_per_sec": 0, 00:20:46.675 "rw_mbytes_per_sec": 0, 00:20:46.675 "r_mbytes_per_sec": 0, 00:20:46.675 "w_mbytes_per_sec": 0 00:20:46.675 }, 00:20:46.675 "claimed": true, 00:20:46.675 "claim_type": "exclusive_write", 00:20:46.675 "zoned": false, 00:20:46.675 "supported_io_types": { 00:20:46.675 "read": true, 00:20:46.675 "write": true, 00:20:46.675 "unmap": true, 00:20:46.675 "flush": true, 00:20:46.675 "reset": true, 00:20:46.675 "nvme_admin": false, 00:20:46.675 "nvme_io": false, 00:20:46.675 "nvme_io_md": false, 00:20:46.675 "write_zeroes": true, 00:20:46.675 "zcopy": true, 00:20:46.675 "get_zone_info": false, 00:20:46.675 "zone_management": false, 00:20:46.675 "zone_append": false, 00:20:46.675 "compare": false, 00:20:46.675 "compare_and_write": false, 00:20:46.675 "abort": true, 00:20:46.675 "seek_hole": false, 00:20:46.675 "seek_data": false, 00:20:46.675 "copy": true, 00:20:46.675 "nvme_iov_md": false 00:20:46.675 }, 00:20:46.675 "memory_domains": [ 00:20:46.675 { 00:20:46.675 "dma_device_id": "system", 00:20:46.675 "dma_device_type": 1 00:20:46.675 }, 
00:20:46.675 { 00:20:46.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:46.675 "dma_device_type": 2 00:20:46.675 } 00:20:46.675 ], 00:20:46.675 "driver_specific": {} 00:20:46.675 } 00:20:46.675 ] 00:20:46.675 11:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:46.675 11:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:46.675 11:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:46.675 11:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:46.675 11:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:46.675 11:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:46.675 11:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:46.675 11:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:46.675 11:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:46.675 11:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:46.675 11:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:46.675 11:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:46.675 11:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:46.675 11:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:46.675 11:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.675 11:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:46.675 "name": "Existed_Raid", 00:20:46.675 "uuid": "a4ec98d3-ac2f-4216-8dfa-6a263bcc4418", 00:20:46.675 "strip_size_kb": 0, 00:20:46.675 "state": "online", 00:20:46.675 "raid_level": "raid1", 00:20:46.675 "superblock": true, 00:20:46.675 "num_base_bdevs": 3, 00:20:46.675 "num_base_bdevs_discovered": 3, 00:20:46.675 "num_base_bdevs_operational": 3, 00:20:46.675 "base_bdevs_list": [ 00:20:46.675 { 00:20:46.675 "name": "BaseBdev1", 00:20:46.675 "uuid": "c99838e6-61b9-4497-bd9f-cfdd56bb7599", 00:20:46.675 "is_configured": true, 00:20:46.675 "data_offset": 2048, 00:20:46.675 "data_size": 63488 00:20:46.675 }, 00:20:46.675 { 00:20:46.675 "name": "BaseBdev2", 00:20:46.675 "uuid": "82343eec-320b-45e6-b564-b85dfb87f069", 00:20:46.675 "is_configured": true, 00:20:46.675 "data_offset": 2048, 00:20:46.675 "data_size": 63488 00:20:46.675 }, 00:20:46.675 { 00:20:46.675 "name": "BaseBdev3", 00:20:46.675 "uuid": "faf4bd55-83e2-49d8-9789-a62c4c3bdd0f", 00:20:46.675 "is_configured": true, 00:20:46.675 "data_offset": 2048, 00:20:46.675 "data_size": 63488 00:20:46.675 } 00:20:46.675 ] 00:20:46.675 }' 00:20:46.675 11:02:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:46.675 11:02:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:47.241 11:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:20:47.241 11:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:47.241 11:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:47.241 11:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:47.241 11:02:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:47.242 11:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:20:47.242 11:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:47.242 11:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:47.500 [2024-07-25 11:02:54.556563] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:47.500 11:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:47.500 "name": "Existed_Raid", 00:20:47.500 "aliases": [ 00:20:47.500 "a4ec98d3-ac2f-4216-8dfa-6a263bcc4418" 00:20:47.500 ], 00:20:47.500 "product_name": "Raid Volume", 00:20:47.500 "block_size": 512, 00:20:47.500 "num_blocks": 63488, 00:20:47.500 "uuid": "a4ec98d3-ac2f-4216-8dfa-6a263bcc4418", 00:20:47.500 "assigned_rate_limits": { 00:20:47.500 "rw_ios_per_sec": 0, 00:20:47.500 "rw_mbytes_per_sec": 0, 00:20:47.500 "r_mbytes_per_sec": 0, 00:20:47.500 "w_mbytes_per_sec": 0 00:20:47.500 }, 00:20:47.500 "claimed": false, 00:20:47.500 "zoned": false, 00:20:47.500 "supported_io_types": { 00:20:47.500 "read": true, 00:20:47.500 "write": true, 00:20:47.500 "unmap": false, 00:20:47.500 "flush": false, 00:20:47.500 "reset": true, 00:20:47.500 "nvme_admin": false, 00:20:47.500 "nvme_io": false, 00:20:47.500 "nvme_io_md": false, 00:20:47.500 "write_zeroes": true, 00:20:47.500 "zcopy": false, 00:20:47.500 "get_zone_info": false, 00:20:47.500 "zone_management": false, 00:20:47.500 "zone_append": false, 00:20:47.500 "compare": false, 00:20:47.500 "compare_and_write": false, 00:20:47.500 "abort": false, 00:20:47.500 "seek_hole": false, 00:20:47.500 "seek_data": false, 00:20:47.500 "copy": false, 00:20:47.500 "nvme_iov_md": false 00:20:47.500 }, 00:20:47.500 
"memory_domains": [ 00:20:47.500 { 00:20:47.500 "dma_device_id": "system", 00:20:47.500 "dma_device_type": 1 00:20:47.500 }, 00:20:47.500 { 00:20:47.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:47.500 "dma_device_type": 2 00:20:47.500 }, 00:20:47.500 { 00:20:47.500 "dma_device_id": "system", 00:20:47.500 "dma_device_type": 1 00:20:47.500 }, 00:20:47.500 { 00:20:47.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:47.500 "dma_device_type": 2 00:20:47.500 }, 00:20:47.500 { 00:20:47.500 "dma_device_id": "system", 00:20:47.500 "dma_device_type": 1 00:20:47.500 }, 00:20:47.500 { 00:20:47.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:47.500 "dma_device_type": 2 00:20:47.500 } 00:20:47.500 ], 00:20:47.500 "driver_specific": { 00:20:47.500 "raid": { 00:20:47.500 "uuid": "a4ec98d3-ac2f-4216-8dfa-6a263bcc4418", 00:20:47.500 "strip_size_kb": 0, 00:20:47.500 "state": "online", 00:20:47.500 "raid_level": "raid1", 00:20:47.500 "superblock": true, 00:20:47.500 "num_base_bdevs": 3, 00:20:47.500 "num_base_bdevs_discovered": 3, 00:20:47.500 "num_base_bdevs_operational": 3, 00:20:47.500 "base_bdevs_list": [ 00:20:47.500 { 00:20:47.500 "name": "BaseBdev1", 00:20:47.500 "uuid": "c99838e6-61b9-4497-bd9f-cfdd56bb7599", 00:20:47.500 "is_configured": true, 00:20:47.500 "data_offset": 2048, 00:20:47.500 "data_size": 63488 00:20:47.500 }, 00:20:47.500 { 00:20:47.500 "name": "BaseBdev2", 00:20:47.500 "uuid": "82343eec-320b-45e6-b564-b85dfb87f069", 00:20:47.500 "is_configured": true, 00:20:47.500 "data_offset": 2048, 00:20:47.500 "data_size": 63488 00:20:47.500 }, 00:20:47.500 { 00:20:47.500 "name": "BaseBdev3", 00:20:47.500 "uuid": "faf4bd55-83e2-49d8-9789-a62c4c3bdd0f", 00:20:47.500 "is_configured": true, 00:20:47.500 "data_offset": 2048, 00:20:47.500 "data_size": 63488 00:20:47.500 } 00:20:47.500 ] 00:20:47.500 } 00:20:47.500 } 00:20:47.500 }' 00:20:47.500 11:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:47.759 11:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:20:47.759 BaseBdev2 00:20:47.759 BaseBdev3' 00:20:47.759 11:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:47.759 11:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:47.759 11:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:47.759 11:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:47.759 "name": "BaseBdev1", 00:20:47.759 "aliases": [ 00:20:47.759 "c99838e6-61b9-4497-bd9f-cfdd56bb7599" 00:20:47.759 ], 00:20:47.759 "product_name": "Malloc disk", 00:20:47.759 "block_size": 512, 00:20:47.759 "num_blocks": 65536, 00:20:47.759 "uuid": "c99838e6-61b9-4497-bd9f-cfdd56bb7599", 00:20:47.759 "assigned_rate_limits": { 00:20:47.759 "rw_ios_per_sec": 0, 00:20:47.759 "rw_mbytes_per_sec": 0, 00:20:47.759 "r_mbytes_per_sec": 0, 00:20:47.759 "w_mbytes_per_sec": 0 00:20:47.759 }, 00:20:47.759 "claimed": true, 00:20:47.759 "claim_type": "exclusive_write", 00:20:47.759 "zoned": false, 00:20:47.759 "supported_io_types": { 00:20:47.759 "read": true, 00:20:47.759 "write": true, 00:20:47.759 "unmap": true, 00:20:47.759 "flush": true, 00:20:47.759 "reset": true, 00:20:47.759 "nvme_admin": false, 00:20:47.759 "nvme_io": false, 00:20:47.759 "nvme_io_md": false, 00:20:47.759 "write_zeroes": true, 00:20:47.759 "zcopy": true, 00:20:47.759 "get_zone_info": false, 00:20:47.759 "zone_management": false, 00:20:47.759 "zone_append": false, 00:20:47.759 "compare": false, 00:20:47.759 "compare_and_write": false, 00:20:47.759 "abort": true, 00:20:47.759 "seek_hole": false, 00:20:47.759 "seek_data": false, 
00:20:47.759 "copy": true, 00:20:47.759 "nvme_iov_md": false 00:20:47.759 }, 00:20:47.759 "memory_domains": [ 00:20:47.759 { 00:20:47.759 "dma_device_id": "system", 00:20:47.759 "dma_device_type": 1 00:20:47.759 }, 00:20:47.759 { 00:20:47.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:47.759 "dma_device_type": 2 00:20:47.759 } 00:20:47.759 ], 00:20:47.759 "driver_specific": {} 00:20:47.759 }' 00:20:47.759 11:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:48.018 11:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:48.018 11:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:48.018 11:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:48.018 11:02:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:48.018 11:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:48.018 11:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:48.018 11:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:48.018 11:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:48.018 11:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:48.276 11:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:48.276 11:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:48.277 11:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:48.277 11:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 
00:20:48.277 11:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:48.535 11:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:48.535 "name": "BaseBdev2", 00:20:48.535 "aliases": [ 00:20:48.535 "82343eec-320b-45e6-b564-b85dfb87f069" 00:20:48.535 ], 00:20:48.535 "product_name": "Malloc disk", 00:20:48.535 "block_size": 512, 00:20:48.535 "num_blocks": 65536, 00:20:48.535 "uuid": "82343eec-320b-45e6-b564-b85dfb87f069", 00:20:48.535 "assigned_rate_limits": { 00:20:48.535 "rw_ios_per_sec": 0, 00:20:48.535 "rw_mbytes_per_sec": 0, 00:20:48.535 "r_mbytes_per_sec": 0, 00:20:48.535 "w_mbytes_per_sec": 0 00:20:48.535 }, 00:20:48.535 "claimed": true, 00:20:48.535 "claim_type": "exclusive_write", 00:20:48.535 "zoned": false, 00:20:48.535 "supported_io_types": { 00:20:48.535 "read": true, 00:20:48.535 "write": true, 00:20:48.535 "unmap": true, 00:20:48.535 "flush": true, 00:20:48.535 "reset": true, 00:20:48.535 "nvme_admin": false, 00:20:48.535 "nvme_io": false, 00:20:48.535 "nvme_io_md": false, 00:20:48.535 "write_zeroes": true, 00:20:48.535 "zcopy": true, 00:20:48.535 "get_zone_info": false, 00:20:48.535 "zone_management": false, 00:20:48.535 "zone_append": false, 00:20:48.535 "compare": false, 00:20:48.535 "compare_and_write": false, 00:20:48.535 "abort": true, 00:20:48.535 "seek_hole": false, 00:20:48.535 "seek_data": false, 00:20:48.535 "copy": true, 00:20:48.535 "nvme_iov_md": false 00:20:48.535 }, 00:20:48.535 "memory_domains": [ 00:20:48.535 { 00:20:48.535 "dma_device_id": "system", 00:20:48.535 "dma_device_type": 1 00:20:48.535 }, 00:20:48.535 { 00:20:48.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:48.535 "dma_device_type": 2 00:20:48.535 } 00:20:48.535 ], 00:20:48.535 "driver_specific": {} 00:20:48.535 }' 00:20:48.535 11:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:48.535 11:02:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:48.535 11:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:48.535 11:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:48.535 11:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:48.535 11:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:48.535 11:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:48.535 11:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:48.794 11:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:48.794 11:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:48.794 11:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:48.794 11:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:48.794 11:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:48.794 11:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:48.794 11:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:49.053 11:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:49.053 "name": "BaseBdev3", 00:20:49.053 "aliases": [ 00:20:49.053 "faf4bd55-83e2-49d8-9789-a62c4c3bdd0f" 00:20:49.053 ], 00:20:49.053 "product_name": "Malloc disk", 00:20:49.053 "block_size": 512, 00:20:49.053 "num_blocks": 65536, 00:20:49.053 "uuid": "faf4bd55-83e2-49d8-9789-a62c4c3bdd0f", 00:20:49.053 "assigned_rate_limits": { 00:20:49.053 
"rw_ios_per_sec": 0, 00:20:49.053 "rw_mbytes_per_sec": 0, 00:20:49.053 "r_mbytes_per_sec": 0, 00:20:49.053 "w_mbytes_per_sec": 0 00:20:49.053 }, 00:20:49.053 "claimed": true, 00:20:49.053 "claim_type": "exclusive_write", 00:20:49.053 "zoned": false, 00:20:49.053 "supported_io_types": { 00:20:49.053 "read": true, 00:20:49.053 "write": true, 00:20:49.053 "unmap": true, 00:20:49.053 "flush": true, 00:20:49.053 "reset": true, 00:20:49.053 "nvme_admin": false, 00:20:49.053 "nvme_io": false, 00:20:49.053 "nvme_io_md": false, 00:20:49.053 "write_zeroes": true, 00:20:49.053 "zcopy": true, 00:20:49.053 "get_zone_info": false, 00:20:49.053 "zone_management": false, 00:20:49.053 "zone_append": false, 00:20:49.053 "compare": false, 00:20:49.053 "compare_and_write": false, 00:20:49.053 "abort": true, 00:20:49.053 "seek_hole": false, 00:20:49.053 "seek_data": false, 00:20:49.053 "copy": true, 00:20:49.053 "nvme_iov_md": false 00:20:49.053 }, 00:20:49.053 "memory_domains": [ 00:20:49.053 { 00:20:49.053 "dma_device_id": "system", 00:20:49.053 "dma_device_type": 1 00:20:49.053 }, 00:20:49.053 { 00:20:49.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:49.053 "dma_device_type": 2 00:20:49.053 } 00:20:49.053 ], 00:20:49.053 "driver_specific": {} 00:20:49.053 }' 00:20:49.053 11:02:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:49.053 11:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:49.053 11:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:49.053 11:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:49.053 11:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:49.053 11:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:49.053 11:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:20:49.312 11:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:49.312 11:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:49.312 11:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:49.312 11:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:49.312 11:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:49.312 11:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:49.570 [2024-07-25 11:02:56.501561] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:49.570 11:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:20:49.570 11:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:20:49.570 11:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:49.570 11:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:20:49.570 11:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:20:49.570 11:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:49.570 11:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:49.570 11:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:49.570 11:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:49.570 11:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local 
strip_size=0 00:20:49.570 11:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:49.570 11:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:49.570 11:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:49.570 11:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:49.570 11:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:49.570 11:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.570 11:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:49.828 11:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:49.828 "name": "Existed_Raid", 00:20:49.828 "uuid": "a4ec98d3-ac2f-4216-8dfa-6a263bcc4418", 00:20:49.828 "strip_size_kb": 0, 00:20:49.828 "state": "online", 00:20:49.828 "raid_level": "raid1", 00:20:49.828 "superblock": true, 00:20:49.828 "num_base_bdevs": 3, 00:20:49.828 "num_base_bdevs_discovered": 2, 00:20:49.828 "num_base_bdevs_operational": 2, 00:20:49.828 "base_bdevs_list": [ 00:20:49.828 { 00:20:49.828 "name": null, 00:20:49.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:49.828 "is_configured": false, 00:20:49.828 "data_offset": 2048, 00:20:49.828 "data_size": 63488 00:20:49.828 }, 00:20:49.828 { 00:20:49.828 "name": "BaseBdev2", 00:20:49.828 "uuid": "82343eec-320b-45e6-b564-b85dfb87f069", 00:20:49.828 "is_configured": true, 00:20:49.828 "data_offset": 2048, 00:20:49.828 "data_size": 63488 00:20:49.828 }, 00:20:49.828 { 00:20:49.828 "name": "BaseBdev3", 00:20:49.828 "uuid": "faf4bd55-83e2-49d8-9789-a62c4c3bdd0f", 00:20:49.828 
"is_configured": true, 00:20:49.828 "data_offset": 2048, 00:20:49.828 "data_size": 63488 00:20:49.828 } 00:20:49.828 ] 00:20:49.828 }' 00:20:49.828 11:02:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:49.828 11:02:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.394 11:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:20:50.394 11:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:50.394 11:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.394 11:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:50.653 11:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:50.653 11:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:50.653 11:02:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:51.219 [2024-07-25 11:02:58.056179] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:51.219 11:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:51.219 11:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:51.219 11:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.219 11:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:51.478 11:02:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:51.478 11:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:51.478 11:02:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:52.045 [2024-07-25 11:02:58.932912] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:52.045 [2024-07-25 11:02:58.933026] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:52.045 [2024-07-25 11:02:59.065201] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:52.045 [2024-07-25 11:02:59.065257] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:52.045 [2024-07-25 11:02:59.065276] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:52.045 11:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:52.045 11:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:52.045 11:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.045 11:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:20:52.304 11:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:20:52.304 11:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:20:52.304 11:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:20:52.304 11:02:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:20:52.304 11:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:52.304 11:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:52.563 BaseBdev2 00:20:52.563 11:02:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:20:52.563 11:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:20:52.563 11:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:52.563 11:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:52.563 11:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:52.563 11:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:52.563 11:02:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:53.129 11:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:53.387 [ 00:20:53.387 { 00:20:53.387 "name": "BaseBdev2", 00:20:53.387 "aliases": [ 00:20:53.387 "2338ec71-dbf5-4d90-8ee3-ddb61b90244e" 00:20:53.387 ], 00:20:53.387 "product_name": "Malloc disk", 00:20:53.387 "block_size": 512, 00:20:53.387 "num_blocks": 65536, 00:20:53.387 "uuid": "2338ec71-dbf5-4d90-8ee3-ddb61b90244e", 00:20:53.387 "assigned_rate_limits": { 00:20:53.387 "rw_ios_per_sec": 0, 00:20:53.387 "rw_mbytes_per_sec": 0, 
00:20:53.387 "r_mbytes_per_sec": 0, 00:20:53.387 "w_mbytes_per_sec": 0 00:20:53.387 }, 00:20:53.387 "claimed": false, 00:20:53.387 "zoned": false, 00:20:53.387 "supported_io_types": { 00:20:53.387 "read": true, 00:20:53.387 "write": true, 00:20:53.387 "unmap": true, 00:20:53.387 "flush": true, 00:20:53.387 "reset": true, 00:20:53.387 "nvme_admin": false, 00:20:53.387 "nvme_io": false, 00:20:53.387 "nvme_io_md": false, 00:20:53.387 "write_zeroes": true, 00:20:53.387 "zcopy": true, 00:20:53.387 "get_zone_info": false, 00:20:53.387 "zone_management": false, 00:20:53.387 "zone_append": false, 00:20:53.387 "compare": false, 00:20:53.387 "compare_and_write": false, 00:20:53.387 "abort": true, 00:20:53.387 "seek_hole": false, 00:20:53.387 "seek_data": false, 00:20:53.387 "copy": true, 00:20:53.387 "nvme_iov_md": false 00:20:53.387 }, 00:20:53.387 "memory_domains": [ 00:20:53.387 { 00:20:53.387 "dma_device_id": "system", 00:20:53.387 "dma_device_type": 1 00:20:53.387 }, 00:20:53.387 { 00:20:53.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:53.387 "dma_device_type": 2 00:20:53.387 } 00:20:53.387 ], 00:20:53.387 "driver_specific": {} 00:20:53.387 } 00:20:53.387 ] 00:20:53.387 11:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:53.387 11:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:53.387 11:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:53.387 11:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:53.985 BaseBdev3 00:20:53.985 11:03:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:20:53.985 11:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:20:53.985 11:03:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:53.985 11:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:53.985 11:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:53.985 11:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:53.985 11:03:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:54.551 11:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:54.551 [ 00:20:54.551 { 00:20:54.551 "name": "BaseBdev3", 00:20:54.551 "aliases": [ 00:20:54.551 "9e6587c0-19bb-421a-aa17-2c46b34a7287" 00:20:54.551 ], 00:20:54.551 "product_name": "Malloc disk", 00:20:54.551 "block_size": 512, 00:20:54.551 "num_blocks": 65536, 00:20:54.551 "uuid": "9e6587c0-19bb-421a-aa17-2c46b34a7287", 00:20:54.551 "assigned_rate_limits": { 00:20:54.551 "rw_ios_per_sec": 0, 00:20:54.551 "rw_mbytes_per_sec": 0, 00:20:54.551 "r_mbytes_per_sec": 0, 00:20:54.551 "w_mbytes_per_sec": 0 00:20:54.551 }, 00:20:54.551 "claimed": false, 00:20:54.551 "zoned": false, 00:20:54.551 "supported_io_types": { 00:20:54.551 "read": true, 00:20:54.551 "write": true, 00:20:54.551 "unmap": true, 00:20:54.551 "flush": true, 00:20:54.551 "reset": true, 00:20:54.551 "nvme_admin": false, 00:20:54.551 "nvme_io": false, 00:20:54.551 "nvme_io_md": false, 00:20:54.551 "write_zeroes": true, 00:20:54.551 "zcopy": true, 00:20:54.551 "get_zone_info": false, 00:20:54.551 "zone_management": false, 00:20:54.551 "zone_append": false, 00:20:54.551 "compare": false, 00:20:54.551 "compare_and_write": false, 00:20:54.551 "abort": true, 
00:20:54.551 "seek_hole": false, 00:20:54.551 "seek_data": false, 00:20:54.551 "copy": true, 00:20:54.551 "nvme_iov_md": false 00:20:54.551 }, 00:20:54.551 "memory_domains": [ 00:20:54.551 { 00:20:54.551 "dma_device_id": "system", 00:20:54.551 "dma_device_type": 1 00:20:54.551 }, 00:20:54.551 { 00:20:54.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:54.551 "dma_device_type": 2 00:20:54.551 } 00:20:54.551 ], 00:20:54.551 "driver_specific": {} 00:20:54.551 } 00:20:54.551 ] 00:20:54.551 11:03:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:54.551 11:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:54.551 11:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:54.551 11:03:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:55.118 [2024-07-25 11:03:02.102431] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:55.118 [2024-07-25 11:03:02.102482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:55.118 [2024-07-25 11:03:02.102513] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:55.118 [2024-07-25 11:03:02.104825] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:55.118 11:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:55.118 11:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:55.118 11:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:55.118 11:03:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:55.118 11:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:55.118 11:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:55.118 11:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:55.118 11:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:55.118 11:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:55.118 11:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:55.118 11:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.118 11:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:55.376 11:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:55.376 "name": "Existed_Raid", 00:20:55.376 "uuid": "1520dbfd-6a33-4c42-8856-d93b7cf1b569", 00:20:55.376 "strip_size_kb": 0, 00:20:55.376 "state": "configuring", 00:20:55.376 "raid_level": "raid1", 00:20:55.376 "superblock": true, 00:20:55.376 "num_base_bdevs": 3, 00:20:55.376 "num_base_bdevs_discovered": 2, 00:20:55.376 "num_base_bdevs_operational": 3, 00:20:55.376 "base_bdevs_list": [ 00:20:55.376 { 00:20:55.376 "name": "BaseBdev1", 00:20:55.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.376 "is_configured": false, 00:20:55.376 "data_offset": 0, 00:20:55.376 "data_size": 0 00:20:55.376 }, 00:20:55.376 { 00:20:55.376 "name": "BaseBdev2", 00:20:55.376 "uuid": "2338ec71-dbf5-4d90-8ee3-ddb61b90244e", 00:20:55.376 "is_configured": true, 00:20:55.376 
"data_offset": 2048, 00:20:55.376 "data_size": 63488 00:20:55.376 }, 00:20:55.376 { 00:20:55.376 "name": "BaseBdev3", 00:20:55.376 "uuid": "9e6587c0-19bb-421a-aa17-2c46b34a7287", 00:20:55.376 "is_configured": true, 00:20:55.376 "data_offset": 2048, 00:20:55.376 "data_size": 63488 00:20:55.376 } 00:20:55.376 ] 00:20:55.376 }' 00:20:55.376 11:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:55.376 11:03:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:55.942 11:03:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:56.201 [2024-07-25 11:03:03.133200] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:56.201 11:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:56.201 11:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:56.201 11:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:56.201 11:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:56.201 11:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:56.201 11:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:56.201 11:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:56.201 11:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:56.201 11:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:56.201 11:03:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:56.201 11:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.201 11:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:56.459 11:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:56.459 "name": "Existed_Raid", 00:20:56.459 "uuid": "1520dbfd-6a33-4c42-8856-d93b7cf1b569", 00:20:56.459 "strip_size_kb": 0, 00:20:56.459 "state": "configuring", 00:20:56.459 "raid_level": "raid1", 00:20:56.459 "superblock": true, 00:20:56.459 "num_base_bdevs": 3, 00:20:56.459 "num_base_bdevs_discovered": 1, 00:20:56.459 "num_base_bdevs_operational": 3, 00:20:56.459 "base_bdevs_list": [ 00:20:56.459 { 00:20:56.459 "name": "BaseBdev1", 00:20:56.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:56.459 "is_configured": false, 00:20:56.459 "data_offset": 0, 00:20:56.459 "data_size": 0 00:20:56.459 }, 00:20:56.459 { 00:20:56.459 "name": null, 00:20:56.459 "uuid": "2338ec71-dbf5-4d90-8ee3-ddb61b90244e", 00:20:56.459 "is_configured": false, 00:20:56.459 "data_offset": 2048, 00:20:56.459 "data_size": 63488 00:20:56.459 }, 00:20:56.459 { 00:20:56.459 "name": "BaseBdev3", 00:20:56.459 "uuid": "9e6587c0-19bb-421a-aa17-2c46b34a7287", 00:20:56.459 "is_configured": true, 00:20:56.459 "data_offset": 2048, 00:20:56.459 "data_size": 63488 00:20:56.459 } 00:20:56.459 ] 00:20:56.459 }' 00:20:56.459 11:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:56.459 11:03:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:57.025 11:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:57.025 11:03:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:57.283 11:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:20:57.283 11:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:57.542 [2024-07-25 11:03:04.451275] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:57.542 BaseBdev1 00:20:57.542 11:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:20:57.542 11:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:20:57.542 11:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:20:57.542 11:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:20:57.542 11:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:20:57.542 11:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:20:57.542 11:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:57.800 11:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:58.059 [ 00:20:58.059 { 00:20:58.059 "name": "BaseBdev1", 00:20:58.059 "aliases": [ 00:20:58.059 "f7612c89-41d9-445d-8aa2-d67d68019aec" 00:20:58.059 ], 00:20:58.059 "product_name": "Malloc disk", 00:20:58.059 
"block_size": 512, 00:20:58.059 "num_blocks": 65536, 00:20:58.059 "uuid": "f7612c89-41d9-445d-8aa2-d67d68019aec", 00:20:58.059 "assigned_rate_limits": { 00:20:58.059 "rw_ios_per_sec": 0, 00:20:58.059 "rw_mbytes_per_sec": 0, 00:20:58.059 "r_mbytes_per_sec": 0, 00:20:58.059 "w_mbytes_per_sec": 0 00:20:58.059 }, 00:20:58.059 "claimed": true, 00:20:58.059 "claim_type": "exclusive_write", 00:20:58.059 "zoned": false, 00:20:58.059 "supported_io_types": { 00:20:58.059 "read": true, 00:20:58.059 "write": true, 00:20:58.059 "unmap": true, 00:20:58.059 "flush": true, 00:20:58.059 "reset": true, 00:20:58.059 "nvme_admin": false, 00:20:58.059 "nvme_io": false, 00:20:58.059 "nvme_io_md": false, 00:20:58.059 "write_zeroes": true, 00:20:58.059 "zcopy": true, 00:20:58.059 "get_zone_info": false, 00:20:58.059 "zone_management": false, 00:20:58.059 "zone_append": false, 00:20:58.059 "compare": false, 00:20:58.059 "compare_and_write": false, 00:20:58.059 "abort": true, 00:20:58.059 "seek_hole": false, 00:20:58.059 "seek_data": false, 00:20:58.059 "copy": true, 00:20:58.059 "nvme_iov_md": false 00:20:58.059 }, 00:20:58.059 "memory_domains": [ 00:20:58.059 { 00:20:58.059 "dma_device_id": "system", 00:20:58.059 "dma_device_type": 1 00:20:58.059 }, 00:20:58.059 { 00:20:58.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:58.059 "dma_device_type": 2 00:20:58.059 } 00:20:58.059 ], 00:20:58.059 "driver_specific": {} 00:20:58.059 } 00:20:58.059 ] 00:20:58.059 11:03:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:20:58.059 11:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:58.059 11:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:58.059 11:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:58.059 11:03:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:58.059 11:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:58.059 11:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:58.059 11:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:58.059 11:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:58.059 11:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:58.059 11:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:58.059 11:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:58.059 11:03:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:58.059 11:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:58.059 "name": "Existed_Raid", 00:20:58.059 "uuid": "1520dbfd-6a33-4c42-8856-d93b7cf1b569", 00:20:58.059 "strip_size_kb": 0, 00:20:58.059 "state": "configuring", 00:20:58.059 "raid_level": "raid1", 00:20:58.059 "superblock": true, 00:20:58.059 "num_base_bdevs": 3, 00:20:58.059 "num_base_bdevs_discovered": 2, 00:20:58.059 "num_base_bdevs_operational": 3, 00:20:58.059 "base_bdevs_list": [ 00:20:58.059 { 00:20:58.059 "name": "BaseBdev1", 00:20:58.059 "uuid": "f7612c89-41d9-445d-8aa2-d67d68019aec", 00:20:58.059 "is_configured": true, 00:20:58.059 "data_offset": 2048, 00:20:58.059 "data_size": 63488 00:20:58.059 }, 00:20:58.059 { 00:20:58.059 "name": null, 00:20:58.060 "uuid": "2338ec71-dbf5-4d90-8ee3-ddb61b90244e", 00:20:58.060 "is_configured": false, 00:20:58.060 
"data_offset": 2048, 00:20:58.060 "data_size": 63488 00:20:58.060 }, 00:20:58.060 { 00:20:58.060 "name": "BaseBdev3", 00:20:58.060 "uuid": "9e6587c0-19bb-421a-aa17-2c46b34a7287", 00:20:58.060 "is_configured": true, 00:20:58.060 "data_offset": 2048, 00:20:58.060 "data_size": 63488 00:20:58.060 } 00:20:58.060 ] 00:20:58.060 }' 00:20:58.060 11:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:58.060 11:03:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:58.995 11:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:58.995 11:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:58.995 11:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:20:58.995 11:03:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:20:59.254 [2024-07-25 11:03:06.172039] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:59.254 11:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:59.254 11:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:59.254 11:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:59.254 11:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:59.254 11:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:59.254 11:03:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:59.254 11:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:59.254 11:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:59.254 11:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:59.254 11:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:59.254 11:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.254 11:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:59.512 11:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:59.512 "name": "Existed_Raid", 00:20:59.512 "uuid": "1520dbfd-6a33-4c42-8856-d93b7cf1b569", 00:20:59.512 "strip_size_kb": 0, 00:20:59.512 "state": "configuring", 00:20:59.512 "raid_level": "raid1", 00:20:59.512 "superblock": true, 00:20:59.512 "num_base_bdevs": 3, 00:20:59.512 "num_base_bdevs_discovered": 1, 00:20:59.512 "num_base_bdevs_operational": 3, 00:20:59.512 "base_bdevs_list": [ 00:20:59.512 { 00:20:59.512 "name": "BaseBdev1", 00:20:59.512 "uuid": "f7612c89-41d9-445d-8aa2-d67d68019aec", 00:20:59.512 "is_configured": true, 00:20:59.512 "data_offset": 2048, 00:20:59.512 "data_size": 63488 00:20:59.512 }, 00:20:59.512 { 00:20:59.512 "name": null, 00:20:59.512 "uuid": "2338ec71-dbf5-4d90-8ee3-ddb61b90244e", 00:20:59.512 "is_configured": false, 00:20:59.512 "data_offset": 2048, 00:20:59.512 "data_size": 63488 00:20:59.512 }, 00:20:59.512 { 00:20:59.512 "name": null, 00:20:59.512 "uuid": "9e6587c0-19bb-421a-aa17-2c46b34a7287", 00:20:59.512 "is_configured": false, 00:20:59.512 "data_offset": 2048, 00:20:59.512 "data_size": 
63488 00:20:59.512 } 00:20:59.512 ] 00:20:59.512 }' 00:20:59.512 11:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:59.512 11:03:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:00.078 11:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.078 11:03:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:00.337 11:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:21:00.337 11:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:00.337 [2024-07-25 11:03:07.431487] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:00.337 11:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:00.337 11:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:00.337 11:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:00.337 11:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:00.337 11:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:00.337 11:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:00.337 11:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:00.337 11:03:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:00.337 11:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:00.337 11:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:00.337 11:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.337 11:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:00.595 11:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:00.595 "name": "Existed_Raid", 00:21:00.595 "uuid": "1520dbfd-6a33-4c42-8856-d93b7cf1b569", 00:21:00.595 "strip_size_kb": 0, 00:21:00.595 "state": "configuring", 00:21:00.595 "raid_level": "raid1", 00:21:00.595 "superblock": true, 00:21:00.595 "num_base_bdevs": 3, 00:21:00.595 "num_base_bdevs_discovered": 2, 00:21:00.595 "num_base_bdevs_operational": 3, 00:21:00.595 "base_bdevs_list": [ 00:21:00.595 { 00:21:00.595 "name": "BaseBdev1", 00:21:00.595 "uuid": "f7612c89-41d9-445d-8aa2-d67d68019aec", 00:21:00.595 "is_configured": true, 00:21:00.595 "data_offset": 2048, 00:21:00.595 "data_size": 63488 00:21:00.595 }, 00:21:00.595 { 00:21:00.595 "name": null, 00:21:00.595 "uuid": "2338ec71-dbf5-4d90-8ee3-ddb61b90244e", 00:21:00.595 "is_configured": false, 00:21:00.595 "data_offset": 2048, 00:21:00.595 "data_size": 63488 00:21:00.595 }, 00:21:00.595 { 00:21:00.595 "name": "BaseBdev3", 00:21:00.595 "uuid": "9e6587c0-19bb-421a-aa17-2c46b34a7287", 00:21:00.595 "is_configured": true, 00:21:00.595 "data_offset": 2048, 00:21:00.595 "data_size": 63488 00:21:00.595 } 00:21:00.595 ] 00:21:00.595 }' 00:21:00.595 11:03:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:00.595 11:03:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:21:01.161 11:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:01.161 11:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:01.419 11:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:21:01.419 11:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:01.679 [2024-07-25 11:03:08.574570] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:01.679 11:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:01.679 11:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:01.679 11:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:01.679 11:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:01.679 11:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:01.679 11:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:01.679 11:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:01.679 11:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:01.679 11:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:01.679 11:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
00:21:01.679 11:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:01.679 11:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:01.937 11:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:01.937 "name": "Existed_Raid", 00:21:01.937 "uuid": "1520dbfd-6a33-4c42-8856-d93b7cf1b569", 00:21:01.937 "strip_size_kb": 0, 00:21:01.937 "state": "configuring", 00:21:01.937 "raid_level": "raid1", 00:21:01.937 "superblock": true, 00:21:01.937 "num_base_bdevs": 3, 00:21:01.937 "num_base_bdevs_discovered": 1, 00:21:01.937 "num_base_bdevs_operational": 3, 00:21:01.937 "base_bdevs_list": [ 00:21:01.937 { 00:21:01.937 "name": null, 00:21:01.937 "uuid": "f7612c89-41d9-445d-8aa2-d67d68019aec", 00:21:01.937 "is_configured": false, 00:21:01.938 "data_offset": 2048, 00:21:01.938 "data_size": 63488 00:21:01.938 }, 00:21:01.938 { 00:21:01.938 "name": null, 00:21:01.938 "uuid": "2338ec71-dbf5-4d90-8ee3-ddb61b90244e", 00:21:01.938 "is_configured": false, 00:21:01.938 "data_offset": 2048, 00:21:01.938 "data_size": 63488 00:21:01.938 }, 00:21:01.938 { 00:21:01.938 "name": "BaseBdev3", 00:21:01.938 "uuid": "9e6587c0-19bb-421a-aa17-2c46b34a7287", 00:21:01.938 "is_configured": true, 00:21:01.938 "data_offset": 2048, 00:21:01.938 "data_size": 63488 00:21:01.938 } 00:21:01.938 ] 00:21:01.938 }' 00:21:01.938 11:03:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:01.938 11:03:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:02.504 11:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.504 11:03:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:02.762 11:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:21:02.762 11:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:03.021 [2024-07-25 11:03:09.978684] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:03.021 11:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:21:03.021 11:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:03.021 11:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:03.021 11:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:03.021 11:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:03.021 11:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:03.021 11:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:03.021 11:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:03.021 11:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:03.021 11:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:03.021 11:03:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.021 11:03:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:03.280 11:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:03.280 "name": "Existed_Raid", 00:21:03.280 "uuid": "1520dbfd-6a33-4c42-8856-d93b7cf1b569", 00:21:03.280 "strip_size_kb": 0, 00:21:03.280 "state": "configuring", 00:21:03.280 "raid_level": "raid1", 00:21:03.280 "superblock": true, 00:21:03.280 "num_base_bdevs": 3, 00:21:03.280 "num_base_bdevs_discovered": 2, 00:21:03.280 "num_base_bdevs_operational": 3, 00:21:03.280 "base_bdevs_list": [ 00:21:03.280 { 00:21:03.280 "name": null, 00:21:03.280 "uuid": "f7612c89-41d9-445d-8aa2-d67d68019aec", 00:21:03.280 "is_configured": false, 00:21:03.280 "data_offset": 2048, 00:21:03.280 "data_size": 63488 00:21:03.280 }, 00:21:03.280 { 00:21:03.280 "name": "BaseBdev2", 00:21:03.280 "uuid": "2338ec71-dbf5-4d90-8ee3-ddb61b90244e", 00:21:03.280 "is_configured": true, 00:21:03.280 "data_offset": 2048, 00:21:03.280 "data_size": 63488 00:21:03.280 }, 00:21:03.280 { 00:21:03.280 "name": "BaseBdev3", 00:21:03.280 "uuid": "9e6587c0-19bb-421a-aa17-2c46b34a7287", 00:21:03.280 "is_configured": true, 00:21:03.280 "data_offset": 2048, 00:21:03.280 "data_size": 63488 00:21:03.280 } 00:21:03.280 ] 00:21:03.280 }' 00:21:03.280 11:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:03.280 11:03:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:03.845 11:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.845 11:03:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:04.104 11:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:21:04.104 
11:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.104 11:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:04.362 11:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u f7612c89-41d9-445d-8aa2-d67d68019aec 00:21:04.620 [2024-07-25 11:03:11.536632] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:04.621 [2024-07-25 11:03:11.536868] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:04.621 [2024-07-25 11:03:11.536887] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:04.621 [2024-07-25 11:03:11.537211] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:21:04.621 [2024-07-25 11:03:11.537432] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:04.621 [2024-07-25 11:03:11.537450] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:21:04.621 [2024-07-25 11:03:11.537615] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:04.621 NewBaseBdev 00:21:04.621 11:03:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:21:04.621 11:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:21:04.621 11:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:04.621 11:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:21:04.621 11:03:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:04.621 11:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:04.621 11:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:04.879 11:03:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:05.138 [ 00:21:05.138 { 00:21:05.138 "name": "NewBaseBdev", 00:21:05.138 "aliases": [ 00:21:05.138 "f7612c89-41d9-445d-8aa2-d67d68019aec" 00:21:05.138 ], 00:21:05.138 "product_name": "Malloc disk", 00:21:05.138 "block_size": 512, 00:21:05.138 "num_blocks": 65536, 00:21:05.138 "uuid": "f7612c89-41d9-445d-8aa2-d67d68019aec", 00:21:05.138 "assigned_rate_limits": { 00:21:05.138 "rw_ios_per_sec": 0, 00:21:05.138 "rw_mbytes_per_sec": 0, 00:21:05.138 "r_mbytes_per_sec": 0, 00:21:05.138 "w_mbytes_per_sec": 0 00:21:05.138 }, 00:21:05.138 "claimed": true, 00:21:05.138 "claim_type": "exclusive_write", 00:21:05.138 "zoned": false, 00:21:05.138 "supported_io_types": { 00:21:05.138 "read": true, 00:21:05.138 "write": true, 00:21:05.138 "unmap": true, 00:21:05.138 "flush": true, 00:21:05.138 "reset": true, 00:21:05.138 "nvme_admin": false, 00:21:05.138 "nvme_io": false, 00:21:05.138 "nvme_io_md": false, 00:21:05.138 "write_zeroes": true, 00:21:05.138 "zcopy": true, 00:21:05.138 "get_zone_info": false, 00:21:05.138 "zone_management": false, 00:21:05.138 "zone_append": false, 00:21:05.138 "compare": false, 00:21:05.138 "compare_and_write": false, 00:21:05.138 "abort": true, 00:21:05.138 "seek_hole": false, 00:21:05.138 "seek_data": false, 00:21:05.138 "copy": true, 00:21:05.138 "nvme_iov_md": false 00:21:05.138 }, 00:21:05.138 "memory_domains": [ 
00:21:05.138 { 00:21:05.138 "dma_device_id": "system", 00:21:05.138 "dma_device_type": 1 00:21:05.138 }, 00:21:05.138 { 00:21:05.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:05.138 "dma_device_type": 2 00:21:05.138 } 00:21:05.138 ], 00:21:05.138 "driver_specific": {} 00:21:05.138 } 00:21:05.138 ] 00:21:05.138 11:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:21:05.138 11:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:21:05.138 11:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:05.138 11:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:05.138 11:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:05.138 11:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:05.138 11:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:05.138 11:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:05.138 11:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:05.138 11:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:05.138 11:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:05.138 11:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.138 11:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:05.397 11:03:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:05.397 "name": "Existed_Raid", 00:21:05.397 "uuid": "1520dbfd-6a33-4c42-8856-d93b7cf1b569", 00:21:05.397 "strip_size_kb": 0, 00:21:05.397 "state": "online", 00:21:05.397 "raid_level": "raid1", 00:21:05.397 "superblock": true, 00:21:05.397 "num_base_bdevs": 3, 00:21:05.397 "num_base_bdevs_discovered": 3, 00:21:05.397 "num_base_bdevs_operational": 3, 00:21:05.397 "base_bdevs_list": [ 00:21:05.397 { 00:21:05.397 "name": "NewBaseBdev", 00:21:05.397 "uuid": "f7612c89-41d9-445d-8aa2-d67d68019aec", 00:21:05.397 "is_configured": true, 00:21:05.397 "data_offset": 2048, 00:21:05.397 "data_size": 63488 00:21:05.397 }, 00:21:05.397 { 00:21:05.397 "name": "BaseBdev2", 00:21:05.397 "uuid": "2338ec71-dbf5-4d90-8ee3-ddb61b90244e", 00:21:05.397 "is_configured": true, 00:21:05.397 "data_offset": 2048, 00:21:05.397 "data_size": 63488 00:21:05.397 }, 00:21:05.397 { 00:21:05.397 "name": "BaseBdev3", 00:21:05.397 "uuid": "9e6587c0-19bb-421a-aa17-2c46b34a7287", 00:21:05.397 "is_configured": true, 00:21:05.397 "data_offset": 2048, 00:21:05.397 "data_size": 63488 00:21:05.397 } 00:21:05.397 ] 00:21:05.397 }' 00:21:05.397 11:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:05.397 11:03:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.965 11:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:21:05.965 11:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:05.965 11:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:05.965 11:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:05.965 11:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:05.965 11:03:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:21:05.965 11:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:05.965 11:03:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:05.965 [2024-07-25 11:03:13.069239] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:06.223 11:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:06.224 "name": "Existed_Raid", 00:21:06.224 "aliases": [ 00:21:06.224 "1520dbfd-6a33-4c42-8856-d93b7cf1b569" 00:21:06.224 ], 00:21:06.224 "product_name": "Raid Volume", 00:21:06.224 "block_size": 512, 00:21:06.224 "num_blocks": 63488, 00:21:06.224 "uuid": "1520dbfd-6a33-4c42-8856-d93b7cf1b569", 00:21:06.224 "assigned_rate_limits": { 00:21:06.224 "rw_ios_per_sec": 0, 00:21:06.224 "rw_mbytes_per_sec": 0, 00:21:06.224 "r_mbytes_per_sec": 0, 00:21:06.224 "w_mbytes_per_sec": 0 00:21:06.224 }, 00:21:06.224 "claimed": false, 00:21:06.224 "zoned": false, 00:21:06.224 "supported_io_types": { 00:21:06.224 "read": true, 00:21:06.224 "write": true, 00:21:06.224 "unmap": false, 00:21:06.224 "flush": false, 00:21:06.224 "reset": true, 00:21:06.224 "nvme_admin": false, 00:21:06.224 "nvme_io": false, 00:21:06.224 "nvme_io_md": false, 00:21:06.224 "write_zeroes": true, 00:21:06.224 "zcopy": false, 00:21:06.224 "get_zone_info": false, 00:21:06.224 "zone_management": false, 00:21:06.224 "zone_append": false, 00:21:06.224 "compare": false, 00:21:06.224 "compare_and_write": false, 00:21:06.224 "abort": false, 00:21:06.224 "seek_hole": false, 00:21:06.224 "seek_data": false, 00:21:06.224 "copy": false, 00:21:06.224 "nvme_iov_md": false 00:21:06.224 }, 00:21:06.224 "memory_domains": [ 00:21:06.224 { 00:21:06.224 "dma_device_id": "system", 00:21:06.224 "dma_device_type": 1 
00:21:06.224 }, 00:21:06.224 { 00:21:06.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:06.224 "dma_device_type": 2 00:21:06.224 }, 00:21:06.224 { 00:21:06.224 "dma_device_id": "system", 00:21:06.224 "dma_device_type": 1 00:21:06.224 }, 00:21:06.224 { 00:21:06.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:06.224 "dma_device_type": 2 00:21:06.224 }, 00:21:06.224 { 00:21:06.224 "dma_device_id": "system", 00:21:06.224 "dma_device_type": 1 00:21:06.224 }, 00:21:06.224 { 00:21:06.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:06.224 "dma_device_type": 2 00:21:06.224 } 00:21:06.224 ], 00:21:06.224 "driver_specific": { 00:21:06.224 "raid": { 00:21:06.224 "uuid": "1520dbfd-6a33-4c42-8856-d93b7cf1b569", 00:21:06.224 "strip_size_kb": 0, 00:21:06.224 "state": "online", 00:21:06.224 "raid_level": "raid1", 00:21:06.224 "superblock": true, 00:21:06.224 "num_base_bdevs": 3, 00:21:06.224 "num_base_bdevs_discovered": 3, 00:21:06.224 "num_base_bdevs_operational": 3, 00:21:06.224 "base_bdevs_list": [ 00:21:06.224 { 00:21:06.224 "name": "NewBaseBdev", 00:21:06.224 "uuid": "f7612c89-41d9-445d-8aa2-d67d68019aec", 00:21:06.224 "is_configured": true, 00:21:06.224 "data_offset": 2048, 00:21:06.224 "data_size": 63488 00:21:06.224 }, 00:21:06.224 { 00:21:06.224 "name": "BaseBdev2", 00:21:06.224 "uuid": "2338ec71-dbf5-4d90-8ee3-ddb61b90244e", 00:21:06.224 "is_configured": true, 00:21:06.224 "data_offset": 2048, 00:21:06.224 "data_size": 63488 00:21:06.224 }, 00:21:06.224 { 00:21:06.224 "name": "BaseBdev3", 00:21:06.224 "uuid": "9e6587c0-19bb-421a-aa17-2c46b34a7287", 00:21:06.224 "is_configured": true, 00:21:06.224 "data_offset": 2048, 00:21:06.224 "data_size": 63488 00:21:06.224 } 00:21:06.224 ] 00:21:06.224 } 00:21:06.224 } 00:21:06.224 }' 00:21:06.224 11:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:06.224 11:03:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:21:06.224 BaseBdev2 00:21:06.224 BaseBdev3' 00:21:06.224 11:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:06.224 11:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:21:06.224 11:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:06.483 11:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:06.483 "name": "NewBaseBdev", 00:21:06.483 "aliases": [ 00:21:06.483 "f7612c89-41d9-445d-8aa2-d67d68019aec" 00:21:06.483 ], 00:21:06.483 "product_name": "Malloc disk", 00:21:06.483 "block_size": 512, 00:21:06.483 "num_blocks": 65536, 00:21:06.483 "uuid": "f7612c89-41d9-445d-8aa2-d67d68019aec", 00:21:06.483 "assigned_rate_limits": { 00:21:06.483 "rw_ios_per_sec": 0, 00:21:06.483 "rw_mbytes_per_sec": 0, 00:21:06.483 "r_mbytes_per_sec": 0, 00:21:06.483 "w_mbytes_per_sec": 0 00:21:06.483 }, 00:21:06.483 "claimed": true, 00:21:06.483 "claim_type": "exclusive_write", 00:21:06.483 "zoned": false, 00:21:06.483 "supported_io_types": { 00:21:06.483 "read": true, 00:21:06.483 "write": true, 00:21:06.483 "unmap": true, 00:21:06.483 "flush": true, 00:21:06.483 "reset": true, 00:21:06.483 "nvme_admin": false, 00:21:06.483 "nvme_io": false, 00:21:06.483 "nvme_io_md": false, 00:21:06.483 "write_zeroes": true, 00:21:06.483 "zcopy": true, 00:21:06.483 "get_zone_info": false, 00:21:06.483 "zone_management": false, 00:21:06.483 "zone_append": false, 00:21:06.483 "compare": false, 00:21:06.483 "compare_and_write": false, 00:21:06.483 "abort": true, 00:21:06.483 "seek_hole": false, 00:21:06.483 "seek_data": false, 00:21:06.483 "copy": true, 00:21:06.483 "nvme_iov_md": false 00:21:06.483 }, 00:21:06.483 "memory_domains": [ 00:21:06.483 { 00:21:06.483 
"dma_device_id": "system", 00:21:06.483 "dma_device_type": 1 00:21:06.483 }, 00:21:06.483 { 00:21:06.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:06.483 "dma_device_type": 2 00:21:06.483 } 00:21:06.483 ], 00:21:06.483 "driver_specific": {} 00:21:06.483 }' 00:21:06.483 11:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:06.483 11:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:06.483 11:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:06.483 11:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:06.483 11:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:06.483 11:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:06.483 11:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:06.483 11:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:06.741 11:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:06.741 11:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:06.741 11:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:06.741 11:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:06.741 11:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:06.741 11:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:06.741 11:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:07.039 11:03:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:07.039 "name": "BaseBdev2", 00:21:07.039 "aliases": [ 00:21:07.039 "2338ec71-dbf5-4d90-8ee3-ddb61b90244e" 00:21:07.039 ], 00:21:07.039 "product_name": "Malloc disk", 00:21:07.039 "block_size": 512, 00:21:07.039 "num_blocks": 65536, 00:21:07.039 "uuid": "2338ec71-dbf5-4d90-8ee3-ddb61b90244e", 00:21:07.039 "assigned_rate_limits": { 00:21:07.039 "rw_ios_per_sec": 0, 00:21:07.039 "rw_mbytes_per_sec": 0, 00:21:07.039 "r_mbytes_per_sec": 0, 00:21:07.039 "w_mbytes_per_sec": 0 00:21:07.039 }, 00:21:07.039 "claimed": true, 00:21:07.039 "claim_type": "exclusive_write", 00:21:07.039 "zoned": false, 00:21:07.039 "supported_io_types": { 00:21:07.039 "read": true, 00:21:07.039 "write": true, 00:21:07.039 "unmap": true, 00:21:07.039 "flush": true, 00:21:07.039 "reset": true, 00:21:07.039 "nvme_admin": false, 00:21:07.039 "nvme_io": false, 00:21:07.039 "nvme_io_md": false, 00:21:07.039 "write_zeroes": true, 00:21:07.039 "zcopy": true, 00:21:07.039 "get_zone_info": false, 00:21:07.039 "zone_management": false, 00:21:07.039 "zone_append": false, 00:21:07.039 "compare": false, 00:21:07.039 "compare_and_write": false, 00:21:07.039 "abort": true, 00:21:07.039 "seek_hole": false, 00:21:07.039 "seek_data": false, 00:21:07.039 "copy": true, 00:21:07.039 "nvme_iov_md": false 00:21:07.039 }, 00:21:07.039 "memory_domains": [ 00:21:07.039 { 00:21:07.039 "dma_device_id": "system", 00:21:07.039 "dma_device_type": 1 00:21:07.039 }, 00:21:07.039 { 00:21:07.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:07.039 "dma_device_type": 2 00:21:07.039 } 00:21:07.039 ], 00:21:07.039 "driver_specific": {} 00:21:07.039 }' 00:21:07.039 11:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:07.039 11:03:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:07.039 11:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 
-- # [[ 512 == 512 ]] 00:21:07.039 11:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:07.039 11:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:07.039 11:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:07.311 11:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:07.311 11:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:07.311 11:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:07.311 11:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:07.311 11:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:07.311 11:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:07.311 11:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:07.311 11:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:07.311 11:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:07.569 11:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:07.569 "name": "BaseBdev3", 00:21:07.569 "aliases": [ 00:21:07.569 "9e6587c0-19bb-421a-aa17-2c46b34a7287" 00:21:07.569 ], 00:21:07.569 "product_name": "Malloc disk", 00:21:07.569 "block_size": 512, 00:21:07.569 "num_blocks": 65536, 00:21:07.569 "uuid": "9e6587c0-19bb-421a-aa17-2c46b34a7287", 00:21:07.569 "assigned_rate_limits": { 00:21:07.569 "rw_ios_per_sec": 0, 00:21:07.569 "rw_mbytes_per_sec": 0, 00:21:07.569 "r_mbytes_per_sec": 0, 00:21:07.569 "w_mbytes_per_sec": 0 
00:21:07.569 }, 00:21:07.569 "claimed": true, 00:21:07.569 "claim_type": "exclusive_write", 00:21:07.569 "zoned": false, 00:21:07.569 "supported_io_types": { 00:21:07.569 "read": true, 00:21:07.569 "write": true, 00:21:07.569 "unmap": true, 00:21:07.569 "flush": true, 00:21:07.569 "reset": true, 00:21:07.569 "nvme_admin": false, 00:21:07.569 "nvme_io": false, 00:21:07.569 "nvme_io_md": false, 00:21:07.569 "write_zeroes": true, 00:21:07.569 "zcopy": true, 00:21:07.569 "get_zone_info": false, 00:21:07.569 "zone_management": false, 00:21:07.569 "zone_append": false, 00:21:07.569 "compare": false, 00:21:07.569 "compare_and_write": false, 00:21:07.569 "abort": true, 00:21:07.569 "seek_hole": false, 00:21:07.569 "seek_data": false, 00:21:07.569 "copy": true, 00:21:07.569 "nvme_iov_md": false 00:21:07.569 }, 00:21:07.569 "memory_domains": [ 00:21:07.569 { 00:21:07.569 "dma_device_id": "system", 00:21:07.569 "dma_device_type": 1 00:21:07.569 }, 00:21:07.569 { 00:21:07.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:07.569 "dma_device_type": 2 00:21:07.569 } 00:21:07.569 ], 00:21:07.569 "driver_specific": {} 00:21:07.569 }' 00:21:07.569 11:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:07.569 11:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:07.569 11:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:07.569 11:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:07.569 11:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:07.829 11:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:07.829 11:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:07.829 11:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:07.829 
11:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:07.829 11:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:07.829 11:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:07.829 11:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:07.829 11:03:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:08.088 [2024-07-25 11:03:15.078293] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:08.089 [2024-07-25 11:03:15.078330] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:08.089 [2024-07-25 11:03:15.078405] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:08.089 [2024-07-25 11:03:15.078740] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:08.089 [2024-07-25 11:03:15.078761] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:21:08.089 11:03:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 3621137 00:21:08.089 11:03:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 3621137 ']' 00:21:08.089 11:03:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 3621137 00:21:08.089 11:03:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:21:08.089 11:03:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:08.089 11:03:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3621137 
00:21:08.089 11:03:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:08.089 11:03:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:08.089 11:03:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3621137' 00:21:08.089 killing process with pid 3621137 00:21:08.089 11:03:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 3621137 00:21:08.089 [2024-07-25 11:03:15.157007] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:08.089 11:03:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 3621137 00:21:08.658 [2024-07-25 11:03:15.476895] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:10.565 11:03:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:21:10.565 00:21:10.565 real 0m31.336s 00:21:10.565 user 0m54.900s 00:21:10.565 sys 0m5.363s 00:21:10.565 11:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:10.565 11:03:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.565 ************************************ 00:21:10.565 END TEST raid_state_function_test_sb 00:21:10.565 ************************************ 00:21:10.565 11:03:17 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:21:10.565 11:03:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:21:10.565 11:03:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:10.565 11:03:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:10.565 ************************************ 00:21:10.565 START TEST raid_superblock_test 00:21:10.565 ************************************ 00:21:10.565 11:03:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:21:10.565 11:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:21:10.565 11:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=3 00:21:10.565 11:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:21:10.565 11:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:21:10.565 11:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:21:10.565 11:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:21:10.565 11:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:21:10.565 11:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:21:10.565 11:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:21:10.565 11:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:21:10.565 11:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:21:10.565 11:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:21:10.565 11:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:21:10.565 11:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:21:10.565 11:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:21:10.565 11:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=3627545 00:21:10.565 11:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 3627545 /var/tmp/spdk-raid.sock 00:21:10.565 11:03:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:21:10.565 11:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 3627545 ']' 00:21:10.565 11:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:10.565 11:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:10.565 11:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:10.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:10.565 11:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:10.565 11:03:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:10.565 [2024-07-25 11:03:17.410672] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:21:10.565 [2024-07-25 11:03:17.410807] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3627545 ] 00:21:10.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:10.565 EAL: Requested device 0000:3d:01.0 cannot be used 00:21:10.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:10.565 EAL: Requested device 0000:3d:01.1 cannot be used 00:21:10.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:10.565 EAL: Requested device 0000:3d:01.2 cannot be used 00:21:10.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:10.565 EAL: Requested device 0000:3d:01.3 cannot be used 00:21:10.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:10.565 EAL: Requested device 0000:3d:01.4 cannot be used 00:21:10.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:10.565 EAL: Requested device 0000:3d:01.5 cannot be used 00:21:10.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:10.565 EAL: Requested device 0000:3d:01.6 cannot be used 00:21:10.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:10.565 EAL: Requested device 0000:3d:01.7 cannot be used 00:21:10.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:10.565 EAL: Requested device 0000:3d:02.0 cannot be used 00:21:10.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:10.565 EAL: Requested device 0000:3d:02.1 cannot be used 00:21:10.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:10.565 EAL: Requested device 0000:3d:02.2 cannot be used 00:21:10.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:10.565 EAL: Requested device 0000:3d:02.3 cannot be used 
00:21:10.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:10.565 EAL: Requested device 0000:3d:02.4 cannot be used 00:21:10.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:10.565 EAL: Requested device 0000:3d:02.5 cannot be used 00:21:10.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:10.565 EAL: Requested device 0000:3d:02.6 cannot be used 00:21:10.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:10.565 EAL: Requested device 0000:3d:02.7 cannot be used 00:21:10.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:10.565 EAL: Requested device 0000:3f:01.0 cannot be used 00:21:10.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:10.565 EAL: Requested device 0000:3f:01.1 cannot be used 00:21:10.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:10.565 EAL: Requested device 0000:3f:01.2 cannot be used 00:21:10.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:10.565 EAL: Requested device 0000:3f:01.3 cannot be used 00:21:10.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:10.565 EAL: Requested device 0000:3f:01.4 cannot be used 00:21:10.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:10.565 EAL: Requested device 0000:3f:01.5 cannot be used 00:21:10.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:10.565 EAL: Requested device 0000:3f:01.6 cannot be used 00:21:10.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:10.565 EAL: Requested device 0000:3f:01.7 cannot be used 00:21:10.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:10.565 EAL: Requested device 0000:3f:02.0 cannot be used 00:21:10.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:10.565 EAL: Requested device 0000:3f:02.1 cannot be used 00:21:10.565 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:10.565 EAL: Requested device 0000:3f:02.2 cannot be used 00:21:10.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:10.565 EAL: Requested device 0000:3f:02.3 cannot be used 00:21:10.565 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:10.565 EAL: Requested device 0000:3f:02.4 cannot be used 00:21:10.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:10.566 EAL: Requested device 0000:3f:02.5 cannot be used 00:21:10.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:10.566 EAL: Requested device 0000:3f:02.6 cannot be used 00:21:10.566 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:10.566 EAL: Requested device 0000:3f:02.7 cannot be used 00:21:10.566 [2024-07-25 11:03:17.634844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.825 [2024-07-25 11:03:17.928185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.394 [2024-07-25 11:03:18.262858] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:11.394 [2024-07-25 11:03:18.262898] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:11.394 11:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:11.394 11:03:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:21:11.394 11:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:21:11.394 11:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:21:11.394 11:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:21:11.394 11:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:21:11.394 11:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:11.394 11:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:11.394 11:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:21:11.394 11:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:11.394 11:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:21:11.654 malloc1 00:21:11.654 11:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:11.914 [2024-07-25 11:03:18.951473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:11.914 [2024-07-25 11:03:18.951539] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:11.914 [2024-07-25 11:03:18.951570] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:21:11.914 [2024-07-25 11:03:18.951586] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:11.914 [2024-07-25 11:03:18.954347] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:11.914 [2024-07-25 11:03:18.954381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:11.914 pt1 00:21:11.914 11:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:21:11.914 11:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:21:11.914 11:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:21:11.914 11:03:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:21:11.914 11:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:11.914 11:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:11.914 11:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:21:11.914 11:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:11.914 11:03:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:21:12.173 malloc2 00:21:12.173 11:03:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:12.432 [2024-07-25 11:03:19.468500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:12.432 [2024-07-25 11:03:19.468552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:12.432 [2024-07-25 11:03:19.468579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:21:12.432 [2024-07-25 11:03:19.468594] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:12.432 [2024-07-25 11:03:19.471296] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:12.432 [2024-07-25 11:03:19.471334] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:12.432 pt2 00:21:12.432 11:03:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:21:12.432 11:03:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:21:12.432 11:03:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:21:12.432 11:03:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:21:12.432 11:03:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:12.432 11:03:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:12.432 11:03:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:21:12.432 11:03:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:12.432 11:03:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:21:12.692 malloc3 00:21:12.692 11:03:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:12.950 [2024-07-25 11:03:19.995778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:12.950 [2024-07-25 11:03:19.995833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:12.950 [2024-07-25 11:03:19.995861] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:21:12.950 [2024-07-25 11:03:19.995877] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:12.950 [2024-07-25 11:03:19.998571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:12.950 [2024-07-25 11:03:19.998603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:12.950 pt3 00:21:12.950 11:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 
00:21:12.950 11:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:21:12.950 11:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:21:13.209 [2024-07-25 11:03:20.228474] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:13.209 [2024-07-25 11:03:20.230818] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:13.209 [2024-07-25 11:03:20.230898] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:13.209 [2024-07-25 11:03:20.231105] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:13.209 [2024-07-25 11:03:20.231124] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:13.209 [2024-07-25 11:03:20.231489] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:21:13.209 [2024-07-25 11:03:20.231742] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:13.209 [2024-07-25 11:03:20.231758] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:13.209 [2024-07-25 11:03:20.231972] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:13.209 11:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:13.209 11:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:13.209 11:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:13.209 11:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:13.209 11:03:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:13.209 11:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:13.209 11:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:13.209 11:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:13.209 11:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:13.209 11:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:13.209 11:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.209 11:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.468 11:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:13.468 "name": "raid_bdev1", 00:21:13.468 "uuid": "55c2dafb-4919-4bc4-833c-363beebdd0b5", 00:21:13.468 "strip_size_kb": 0, 00:21:13.468 "state": "online", 00:21:13.468 "raid_level": "raid1", 00:21:13.468 "superblock": true, 00:21:13.468 "num_base_bdevs": 3, 00:21:13.468 "num_base_bdevs_discovered": 3, 00:21:13.468 "num_base_bdevs_operational": 3, 00:21:13.468 "base_bdevs_list": [ 00:21:13.468 { 00:21:13.468 "name": "pt1", 00:21:13.468 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:13.468 "is_configured": true, 00:21:13.468 "data_offset": 2048, 00:21:13.468 "data_size": 63488 00:21:13.468 }, 00:21:13.468 { 00:21:13.468 "name": "pt2", 00:21:13.468 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:13.468 "is_configured": true, 00:21:13.468 "data_offset": 2048, 00:21:13.468 "data_size": 63488 00:21:13.468 }, 00:21:13.468 { 00:21:13.468 "name": "pt3", 00:21:13.468 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:13.468 "is_configured": true, 00:21:13.468 
"data_offset": 2048, 00:21:13.468 "data_size": 63488 00:21:13.468 } 00:21:13.468 ] 00:21:13.468 }' 00:21:13.468 11:03:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:13.468 11:03:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.035 11:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:21:14.035 11:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:14.035 11:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:14.035 11:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:14.035 11:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:14.035 11:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:14.035 11:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:14.035 11:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:14.294 [2024-07-25 11:03:21.267561] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:14.294 11:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:14.294 "name": "raid_bdev1", 00:21:14.294 "aliases": [ 00:21:14.294 "55c2dafb-4919-4bc4-833c-363beebdd0b5" 00:21:14.294 ], 00:21:14.294 "product_name": "Raid Volume", 00:21:14.294 "block_size": 512, 00:21:14.294 "num_blocks": 63488, 00:21:14.294 "uuid": "55c2dafb-4919-4bc4-833c-363beebdd0b5", 00:21:14.294 "assigned_rate_limits": { 00:21:14.294 "rw_ios_per_sec": 0, 00:21:14.294 "rw_mbytes_per_sec": 0, 00:21:14.294 "r_mbytes_per_sec": 0, 00:21:14.294 "w_mbytes_per_sec": 0 00:21:14.294 }, 00:21:14.294 "claimed": false, 00:21:14.294 
"zoned": false, 00:21:14.294 "supported_io_types": { 00:21:14.294 "read": true, 00:21:14.294 "write": true, 00:21:14.294 "unmap": false, 00:21:14.294 "flush": false, 00:21:14.294 "reset": true, 00:21:14.294 "nvme_admin": false, 00:21:14.294 "nvme_io": false, 00:21:14.294 "nvme_io_md": false, 00:21:14.294 "write_zeroes": true, 00:21:14.294 "zcopy": false, 00:21:14.294 "get_zone_info": false, 00:21:14.294 "zone_management": false, 00:21:14.294 "zone_append": false, 00:21:14.294 "compare": false, 00:21:14.294 "compare_and_write": false, 00:21:14.294 "abort": false, 00:21:14.294 "seek_hole": false, 00:21:14.294 "seek_data": false, 00:21:14.294 "copy": false, 00:21:14.294 "nvme_iov_md": false 00:21:14.294 }, 00:21:14.294 "memory_domains": [ 00:21:14.294 { 00:21:14.294 "dma_device_id": "system", 00:21:14.294 "dma_device_type": 1 00:21:14.294 }, 00:21:14.294 { 00:21:14.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:14.294 "dma_device_type": 2 00:21:14.294 }, 00:21:14.294 { 00:21:14.294 "dma_device_id": "system", 00:21:14.294 "dma_device_type": 1 00:21:14.294 }, 00:21:14.294 { 00:21:14.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:14.294 "dma_device_type": 2 00:21:14.294 }, 00:21:14.294 { 00:21:14.294 "dma_device_id": "system", 00:21:14.294 "dma_device_type": 1 00:21:14.294 }, 00:21:14.294 { 00:21:14.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:14.294 "dma_device_type": 2 00:21:14.294 } 00:21:14.294 ], 00:21:14.294 "driver_specific": { 00:21:14.294 "raid": { 00:21:14.294 "uuid": "55c2dafb-4919-4bc4-833c-363beebdd0b5", 00:21:14.294 "strip_size_kb": 0, 00:21:14.294 "state": "online", 00:21:14.294 "raid_level": "raid1", 00:21:14.294 "superblock": true, 00:21:14.294 "num_base_bdevs": 3, 00:21:14.294 "num_base_bdevs_discovered": 3, 00:21:14.294 "num_base_bdevs_operational": 3, 00:21:14.294 "base_bdevs_list": [ 00:21:14.294 { 00:21:14.294 "name": "pt1", 00:21:14.294 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:14.294 "is_configured": true, 00:21:14.294 
"data_offset": 2048, 00:21:14.294 "data_size": 63488 00:21:14.294 }, 00:21:14.294 { 00:21:14.294 "name": "pt2", 00:21:14.294 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:14.294 "is_configured": true, 00:21:14.294 "data_offset": 2048, 00:21:14.294 "data_size": 63488 00:21:14.294 }, 00:21:14.294 { 00:21:14.294 "name": "pt3", 00:21:14.294 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:14.294 "is_configured": true, 00:21:14.294 "data_offset": 2048, 00:21:14.294 "data_size": 63488 00:21:14.294 } 00:21:14.294 ] 00:21:14.294 } 00:21:14.294 } 00:21:14.294 }' 00:21:14.294 11:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:14.294 11:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:14.294 pt2 00:21:14.294 pt3' 00:21:14.294 11:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:14.294 11:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:14.294 11:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:14.552 11:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:14.552 "name": "pt1", 00:21:14.552 "aliases": [ 00:21:14.552 "00000000-0000-0000-0000-000000000001" 00:21:14.552 ], 00:21:14.552 "product_name": "passthru", 00:21:14.552 "block_size": 512, 00:21:14.552 "num_blocks": 65536, 00:21:14.552 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:14.552 "assigned_rate_limits": { 00:21:14.552 "rw_ios_per_sec": 0, 00:21:14.552 "rw_mbytes_per_sec": 0, 00:21:14.552 "r_mbytes_per_sec": 0, 00:21:14.552 "w_mbytes_per_sec": 0 00:21:14.552 }, 00:21:14.552 "claimed": true, 00:21:14.552 "claim_type": "exclusive_write", 00:21:14.552 "zoned": false, 00:21:14.552 
"supported_io_types": { 00:21:14.552 "read": true, 00:21:14.552 "write": true, 00:21:14.552 "unmap": true, 00:21:14.552 "flush": true, 00:21:14.552 "reset": true, 00:21:14.552 "nvme_admin": false, 00:21:14.552 "nvme_io": false, 00:21:14.552 "nvme_io_md": false, 00:21:14.552 "write_zeroes": true, 00:21:14.552 "zcopy": true, 00:21:14.552 "get_zone_info": false, 00:21:14.552 "zone_management": false, 00:21:14.552 "zone_append": false, 00:21:14.552 "compare": false, 00:21:14.552 "compare_and_write": false, 00:21:14.552 "abort": true, 00:21:14.552 "seek_hole": false, 00:21:14.552 "seek_data": false, 00:21:14.552 "copy": true, 00:21:14.552 "nvme_iov_md": false 00:21:14.552 }, 00:21:14.552 "memory_domains": [ 00:21:14.552 { 00:21:14.552 "dma_device_id": "system", 00:21:14.552 "dma_device_type": 1 00:21:14.552 }, 00:21:14.552 { 00:21:14.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:14.552 "dma_device_type": 2 00:21:14.552 } 00:21:14.552 ], 00:21:14.552 "driver_specific": { 00:21:14.552 "passthru": { 00:21:14.552 "name": "pt1", 00:21:14.552 "base_bdev_name": "malloc1" 00:21:14.552 } 00:21:14.552 } 00:21:14.552 }' 00:21:14.552 11:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:14.552 11:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:14.552 11:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:14.552 11:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:14.810 11:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:14.810 11:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:14.810 11:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:14.810 11:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:14.810 11:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- 
# [[ null == null ]] 00:21:14.810 11:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:14.810 11:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:14.810 11:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:14.810 11:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:14.810 11:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:14.810 11:03:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:15.068 11:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:15.068 "name": "pt2", 00:21:15.068 "aliases": [ 00:21:15.068 "00000000-0000-0000-0000-000000000002" 00:21:15.068 ], 00:21:15.068 "product_name": "passthru", 00:21:15.068 "block_size": 512, 00:21:15.068 "num_blocks": 65536, 00:21:15.068 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:15.068 "assigned_rate_limits": { 00:21:15.068 "rw_ios_per_sec": 0, 00:21:15.068 "rw_mbytes_per_sec": 0, 00:21:15.068 "r_mbytes_per_sec": 0, 00:21:15.069 "w_mbytes_per_sec": 0 00:21:15.069 }, 00:21:15.069 "claimed": true, 00:21:15.069 "claim_type": "exclusive_write", 00:21:15.069 "zoned": false, 00:21:15.069 "supported_io_types": { 00:21:15.069 "read": true, 00:21:15.069 "write": true, 00:21:15.069 "unmap": true, 00:21:15.069 "flush": true, 00:21:15.069 "reset": true, 00:21:15.069 "nvme_admin": false, 00:21:15.069 "nvme_io": false, 00:21:15.069 "nvme_io_md": false, 00:21:15.069 "write_zeroes": true, 00:21:15.069 "zcopy": true, 00:21:15.069 "get_zone_info": false, 00:21:15.069 "zone_management": false, 00:21:15.069 "zone_append": false, 00:21:15.069 "compare": false, 00:21:15.069 "compare_and_write": false, 00:21:15.069 "abort": true, 00:21:15.069 "seek_hole": false, 
00:21:15.069 "seek_data": false, 00:21:15.069 "copy": true, 00:21:15.069 "nvme_iov_md": false 00:21:15.069 }, 00:21:15.069 "memory_domains": [ 00:21:15.069 { 00:21:15.069 "dma_device_id": "system", 00:21:15.069 "dma_device_type": 1 00:21:15.069 }, 00:21:15.069 { 00:21:15.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:15.069 "dma_device_type": 2 00:21:15.069 } 00:21:15.069 ], 00:21:15.069 "driver_specific": { 00:21:15.069 "passthru": { 00:21:15.069 "name": "pt2", 00:21:15.069 "base_bdev_name": "malloc2" 00:21:15.069 } 00:21:15.069 } 00:21:15.069 }' 00:21:15.069 11:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:15.326 11:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:15.326 11:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:15.326 11:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:15.326 11:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:15.326 11:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:15.326 11:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:15.326 11:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:15.326 11:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:15.326 11:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:15.326 11:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:15.584 11:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:15.584 11:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:15.584 11:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:21:15.584 11:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:15.842 11:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:15.842 "name": "pt3", 00:21:15.842 "aliases": [ 00:21:15.842 "00000000-0000-0000-0000-000000000003" 00:21:15.842 ], 00:21:15.842 "product_name": "passthru", 00:21:15.842 "block_size": 512, 00:21:15.842 "num_blocks": 65536, 00:21:15.842 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:15.842 "assigned_rate_limits": { 00:21:15.842 "rw_ios_per_sec": 0, 00:21:15.842 "rw_mbytes_per_sec": 0, 00:21:15.842 "r_mbytes_per_sec": 0, 00:21:15.842 "w_mbytes_per_sec": 0 00:21:15.842 }, 00:21:15.842 "claimed": true, 00:21:15.842 "claim_type": "exclusive_write", 00:21:15.842 "zoned": false, 00:21:15.842 "supported_io_types": { 00:21:15.842 "read": true, 00:21:15.842 "write": true, 00:21:15.842 "unmap": true, 00:21:15.842 "flush": true, 00:21:15.842 "reset": true, 00:21:15.842 "nvme_admin": false, 00:21:15.842 "nvme_io": false, 00:21:15.842 "nvme_io_md": false, 00:21:15.843 "write_zeroes": true, 00:21:15.843 "zcopy": true, 00:21:15.843 "get_zone_info": false, 00:21:15.843 "zone_management": false, 00:21:15.843 "zone_append": false, 00:21:15.843 "compare": false, 00:21:15.843 "compare_and_write": false, 00:21:15.843 "abort": true, 00:21:15.843 "seek_hole": false, 00:21:15.843 "seek_data": false, 00:21:15.843 "copy": true, 00:21:15.843 "nvme_iov_md": false 00:21:15.843 }, 00:21:15.843 "memory_domains": [ 00:21:15.843 { 00:21:15.843 "dma_device_id": "system", 00:21:15.843 "dma_device_type": 1 00:21:15.843 }, 00:21:15.843 { 00:21:15.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:15.843 "dma_device_type": 2 00:21:15.843 } 00:21:15.843 ], 00:21:15.843 "driver_specific": { 00:21:15.843 "passthru": { 00:21:15.843 "name": "pt3", 00:21:15.843 "base_bdev_name": "malloc3" 
00:21:15.843 } 00:21:15.843 } 00:21:15.843 }' 00:21:15.843 11:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:15.843 11:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:15.843 11:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:15.843 11:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:15.843 11:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:15.843 11:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:15.843 11:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:15.843 11:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:15.843 11:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:15.843 11:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:16.101 11:03:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:16.101 11:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:16.101 11:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:21:16.101 11:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:16.359 [2024-07-25 11:03:23.252920] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:16.359 11:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=55c2dafb-4919-4bc4-833c-363beebdd0b5 00:21:16.359 11:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 55c2dafb-4919-4bc4-833c-363beebdd0b5 ']' 00:21:16.359 11:03:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:16.618 [2024-07-25 11:03:23.485190] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:16.618 [2024-07-25 11:03:23.485223] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:16.618 [2024-07-25 11:03:23.485308] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:16.618 [2024-07-25 11:03:23.485396] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:16.618 [2024-07-25 11:03:23.485415] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:16.618 11:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:16.618 11:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:21:16.618 11:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:21:16.618 11:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:21:16.618 11:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:21:16.618 11:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:16.876 11:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:21:16.876 11:03:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:17.133 11:03:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:21:17.133 11:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:17.390 11:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:21:17.390 11:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:17.649 11:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:21:17.649 11:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:17.649 11:03:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:21:17.649 11:03:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:17.649 11:03:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:17.649 11:03:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:17.649 11:03:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:17.649 11:03:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:17.649 11:03:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:17.649 11:03:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:17.649 11:03:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:21:17.649 11:03:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:21:17.649 11:03:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:17.907 [2024-07-25 11:03:24.844771] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:17.907 [2024-07-25 11:03:24.847136] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:17.907 [2024-07-25 11:03:24.847212] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:17.907 [2024-07-25 11:03:24.847274] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:17.907 [2024-07-25 11:03:24.847329] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:17.907 [2024-07-25 11:03:24.847357] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:21:17.907 [2024-07-25 11:03:24.847382] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:17.907 [2024-07-25 11:03:24.847401] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:17.907 request: 00:21:17.907 { 00:21:17.907 "name": "raid_bdev1", 00:21:17.907 "raid_level": 
"raid1", 00:21:17.907 "base_bdevs": [ 00:21:17.907 "malloc1", 00:21:17.907 "malloc2", 00:21:17.907 "malloc3" 00:21:17.907 ], 00:21:17.907 "superblock": false, 00:21:17.907 "method": "bdev_raid_create", 00:21:17.907 "req_id": 1 00:21:17.907 } 00:21:17.907 Got JSON-RPC error response 00:21:17.907 response: 00:21:17.907 { 00:21:17.907 "code": -17, 00:21:17.907 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:17.907 } 00:21:17.907 11:03:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:21:17.907 11:03:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:17.907 11:03:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:17.907 11:03:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:17.907 11:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:17.907 11:03:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:21:18.165 11:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:21:18.165 11:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:21:18.165 11:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:18.423 [2024-07-25 11:03:25.297946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:18.423 [2024-07-25 11:03:25.298007] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:18.423 [2024-07-25 11:03:25.298037] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041a80 00:21:18.423 [2024-07-25 11:03:25.298052] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:18.423 [2024-07-25 11:03:25.300813] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:18.423 [2024-07-25 11:03:25.300847] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:18.424 [2024-07-25 11:03:25.300947] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:18.424 [2024-07-25 11:03:25.301018] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:18.424 pt1 00:21:18.424 11:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:21:18.424 11:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:18.424 11:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:18.424 11:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:18.424 11:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:18.424 11:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:18.424 11:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:18.424 11:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:18.424 11:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:18.424 11:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:18.424 11:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.424 11:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:21:18.424 11:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:18.424 "name": "raid_bdev1", 00:21:18.424 "uuid": "55c2dafb-4919-4bc4-833c-363beebdd0b5", 00:21:18.424 "strip_size_kb": 0, 00:21:18.424 "state": "configuring", 00:21:18.424 "raid_level": "raid1", 00:21:18.424 "superblock": true, 00:21:18.424 "num_base_bdevs": 3, 00:21:18.424 "num_base_bdevs_discovered": 1, 00:21:18.424 "num_base_bdevs_operational": 3, 00:21:18.424 "base_bdevs_list": [ 00:21:18.424 { 00:21:18.424 "name": "pt1", 00:21:18.424 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:18.424 "is_configured": true, 00:21:18.424 "data_offset": 2048, 00:21:18.424 "data_size": 63488 00:21:18.424 }, 00:21:18.424 { 00:21:18.424 "name": null, 00:21:18.424 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:18.424 "is_configured": false, 00:21:18.424 "data_offset": 2048, 00:21:18.424 "data_size": 63488 00:21:18.424 }, 00:21:18.424 { 00:21:18.424 "name": null, 00:21:18.424 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:18.424 "is_configured": false, 00:21:18.424 "data_offset": 2048, 00:21:18.424 "data_size": 63488 00:21:18.424 } 00:21:18.424 ] 00:21:18.424 }' 00:21:18.424 11:03:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:18.424 11:03:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:19.360 11:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 3 -gt 2 ']' 00:21:19.360 11:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:19.360 [2024-07-25 11:03:26.320678] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:19.360 [2024-07-25 11:03:26.320746] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:19.360 
[2024-07-25 11:03:26.320773] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042080 00:21:19.360 [2024-07-25 11:03:26.320789] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:19.360 [2024-07-25 11:03:26.321348] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:19.360 [2024-07-25 11:03:26.321374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:19.360 [2024-07-25 11:03:26.321465] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:19.360 [2024-07-25 11:03:26.321497] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:19.360 pt2 00:21:19.360 11:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:19.618 [2024-07-25 11:03:26.549336] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:19.618 11:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:21:19.618 11:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:19.618 11:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:19.618 11:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:19.618 11:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:19.618 11:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:19.618 11:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:19.618 11:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:19.618 11:03:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:19.618 11:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:19.619 11:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.619 11:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.877 11:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:19.877 "name": "raid_bdev1", 00:21:19.877 "uuid": "55c2dafb-4919-4bc4-833c-363beebdd0b5", 00:21:19.877 "strip_size_kb": 0, 00:21:19.877 "state": "configuring", 00:21:19.877 "raid_level": "raid1", 00:21:19.877 "superblock": true, 00:21:19.877 "num_base_bdevs": 3, 00:21:19.877 "num_base_bdevs_discovered": 1, 00:21:19.877 "num_base_bdevs_operational": 3, 00:21:19.877 "base_bdevs_list": [ 00:21:19.877 { 00:21:19.877 "name": "pt1", 00:21:19.877 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:19.877 "is_configured": true, 00:21:19.877 "data_offset": 2048, 00:21:19.877 "data_size": 63488 00:21:19.877 }, 00:21:19.877 { 00:21:19.877 "name": null, 00:21:19.877 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:19.877 "is_configured": false, 00:21:19.877 "data_offset": 2048, 00:21:19.877 "data_size": 63488 00:21:19.877 }, 00:21:19.877 { 00:21:19.877 "name": null, 00:21:19.877 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:19.877 "is_configured": false, 00:21:19.877 "data_offset": 2048, 00:21:19.877 "data_size": 63488 00:21:19.877 } 00:21:19.877 ] 00:21:19.877 }' 00:21:19.877 11:03:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:19.877 11:03:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.445 11:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:21:20.445 11:03:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:21:20.445 11:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:21.048 [2024-07-25 11:03:27.900976] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:21.048 [2024-07-25 11:03:27.901042] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:21.048 [2024-07-25 11:03:27.901067] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042380 00:21:21.048 [2024-07-25 11:03:27.901085] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:21.048 [2024-07-25 11:03:27.901646] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:21.048 [2024-07-25 11:03:27.901674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:21.048 [2024-07-25 11:03:27.901764] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:21.048 [2024-07-25 11:03:27.901796] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:21.048 pt2 00:21:21.048 11:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:21:21.048 11:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:21:21.048 11:03:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:21.048 [2024-07-25 11:03:28.133583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:21.048 [2024-07-25 11:03:28.133641] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:21:21.048 [2024-07-25 11:03:28.133669] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042680 00:21:21.048 [2024-07-25 11:03:28.133687] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:21.048 [2024-07-25 11:03:28.134236] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:21.048 [2024-07-25 11:03:28.134264] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:21.048 [2024-07-25 11:03:28.134354] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:21.048 [2024-07-25 11:03:28.134388] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:21.048 [2024-07-25 11:03:28.134567] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:21.048 [2024-07-25 11:03:28.134586] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:21.048 [2024-07-25 11:03:28.134889] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:21:21.048 [2024-07-25 11:03:28.135118] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:21.048 [2024-07-25 11:03:28.135133] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:21.048 [2024-07-25 11:03:28.135336] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:21.048 pt3 00:21:21.048 11:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:21:21.048 11:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:21:21.049 11:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:21.049 11:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 
00:21:21.049 11:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:21.049 11:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:21.049 11:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:21.049 11:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:21.049 11:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:21.049 11:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:21.049 11:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:21.049 11:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:21.049 11:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.049 11:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.308 11:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:21.308 "name": "raid_bdev1", 00:21:21.308 "uuid": "55c2dafb-4919-4bc4-833c-363beebdd0b5", 00:21:21.308 "strip_size_kb": 0, 00:21:21.308 "state": "online", 00:21:21.308 "raid_level": "raid1", 00:21:21.308 "superblock": true, 00:21:21.308 "num_base_bdevs": 3, 00:21:21.308 "num_base_bdevs_discovered": 3, 00:21:21.308 "num_base_bdevs_operational": 3, 00:21:21.308 "base_bdevs_list": [ 00:21:21.308 { 00:21:21.308 "name": "pt1", 00:21:21.308 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:21.308 "is_configured": true, 00:21:21.308 "data_offset": 2048, 00:21:21.308 "data_size": 63488 00:21:21.308 }, 00:21:21.308 { 00:21:21.308 "name": "pt2", 00:21:21.308 "uuid": "00000000-0000-0000-0000-000000000002", 
00:21:21.308 "is_configured": true, 00:21:21.308 "data_offset": 2048, 00:21:21.308 "data_size": 63488 00:21:21.308 }, 00:21:21.308 { 00:21:21.308 "name": "pt3", 00:21:21.308 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:21.308 "is_configured": true, 00:21:21.308 "data_offset": 2048, 00:21:21.308 "data_size": 63488 00:21:21.308 } 00:21:21.308 ] 00:21:21.308 }' 00:21:21.308 11:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:21.308 11:03:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.875 11:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:21:21.875 11:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:21.875 11:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:21.875 11:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:21.875 11:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:21.875 11:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:21.875 11:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:21.875 11:03:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:22.134 [2024-07-25 11:03:29.072630] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:22.134 11:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:22.134 "name": "raid_bdev1", 00:21:22.134 "aliases": [ 00:21:22.134 "55c2dafb-4919-4bc4-833c-363beebdd0b5" 00:21:22.134 ], 00:21:22.134 "product_name": "Raid Volume", 00:21:22.134 "block_size": 512, 00:21:22.134 "num_blocks": 63488, 00:21:22.134 "uuid": 
"55c2dafb-4919-4bc4-833c-363beebdd0b5", 00:21:22.134 "assigned_rate_limits": { 00:21:22.134 "rw_ios_per_sec": 0, 00:21:22.134 "rw_mbytes_per_sec": 0, 00:21:22.134 "r_mbytes_per_sec": 0, 00:21:22.134 "w_mbytes_per_sec": 0 00:21:22.134 }, 00:21:22.134 "claimed": false, 00:21:22.134 "zoned": false, 00:21:22.134 "supported_io_types": { 00:21:22.134 "read": true, 00:21:22.134 "write": true, 00:21:22.134 "unmap": false, 00:21:22.134 "flush": false, 00:21:22.134 "reset": true, 00:21:22.134 "nvme_admin": false, 00:21:22.134 "nvme_io": false, 00:21:22.134 "nvme_io_md": false, 00:21:22.134 "write_zeroes": true, 00:21:22.134 "zcopy": false, 00:21:22.134 "get_zone_info": false, 00:21:22.134 "zone_management": false, 00:21:22.134 "zone_append": false, 00:21:22.135 "compare": false, 00:21:22.135 "compare_and_write": false, 00:21:22.135 "abort": false, 00:21:22.135 "seek_hole": false, 00:21:22.135 "seek_data": false, 00:21:22.135 "copy": false, 00:21:22.135 "nvme_iov_md": false 00:21:22.135 }, 00:21:22.135 "memory_domains": [ 00:21:22.135 { 00:21:22.135 "dma_device_id": "system", 00:21:22.135 "dma_device_type": 1 00:21:22.135 }, 00:21:22.135 { 00:21:22.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:22.135 "dma_device_type": 2 00:21:22.135 }, 00:21:22.135 { 00:21:22.135 "dma_device_id": "system", 00:21:22.135 "dma_device_type": 1 00:21:22.135 }, 00:21:22.135 { 00:21:22.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:22.135 "dma_device_type": 2 00:21:22.135 }, 00:21:22.135 { 00:21:22.135 "dma_device_id": "system", 00:21:22.135 "dma_device_type": 1 00:21:22.135 }, 00:21:22.135 { 00:21:22.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:22.135 "dma_device_type": 2 00:21:22.135 } 00:21:22.135 ], 00:21:22.135 "driver_specific": { 00:21:22.135 "raid": { 00:21:22.135 "uuid": "55c2dafb-4919-4bc4-833c-363beebdd0b5", 00:21:22.135 "strip_size_kb": 0, 00:21:22.135 "state": "online", 00:21:22.135 "raid_level": "raid1", 00:21:22.135 "superblock": true, 00:21:22.135 "num_base_bdevs": 
3, 00:21:22.135 "num_base_bdevs_discovered": 3, 00:21:22.135 "num_base_bdevs_operational": 3, 00:21:22.135 "base_bdevs_list": [ 00:21:22.135 { 00:21:22.135 "name": "pt1", 00:21:22.135 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:22.135 "is_configured": true, 00:21:22.135 "data_offset": 2048, 00:21:22.135 "data_size": 63488 00:21:22.135 }, 00:21:22.135 { 00:21:22.135 "name": "pt2", 00:21:22.135 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:22.135 "is_configured": true, 00:21:22.135 "data_offset": 2048, 00:21:22.135 "data_size": 63488 00:21:22.135 }, 00:21:22.135 { 00:21:22.135 "name": "pt3", 00:21:22.135 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:22.135 "is_configured": true, 00:21:22.135 "data_offset": 2048, 00:21:22.135 "data_size": 63488 00:21:22.135 } 00:21:22.135 ] 00:21:22.135 } 00:21:22.135 } 00:21:22.135 }' 00:21:22.135 11:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:22.135 11:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:22.135 pt2 00:21:22.135 pt3' 00:21:22.135 11:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:22.135 11:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:22.135 11:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:22.393 11:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:22.393 "name": "pt1", 00:21:22.393 "aliases": [ 00:21:22.393 "00000000-0000-0000-0000-000000000001" 00:21:22.393 ], 00:21:22.393 "product_name": "passthru", 00:21:22.393 "block_size": 512, 00:21:22.393 "num_blocks": 65536, 00:21:22.393 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:22.393 "assigned_rate_limits": { 
00:21:22.393 "rw_ios_per_sec": 0, 00:21:22.393 "rw_mbytes_per_sec": 0, 00:21:22.393 "r_mbytes_per_sec": 0, 00:21:22.393 "w_mbytes_per_sec": 0 00:21:22.393 }, 00:21:22.393 "claimed": true, 00:21:22.393 "claim_type": "exclusive_write", 00:21:22.393 "zoned": false, 00:21:22.393 "supported_io_types": { 00:21:22.393 "read": true, 00:21:22.393 "write": true, 00:21:22.393 "unmap": true, 00:21:22.393 "flush": true, 00:21:22.393 "reset": true, 00:21:22.393 "nvme_admin": false, 00:21:22.393 "nvme_io": false, 00:21:22.393 "nvme_io_md": false, 00:21:22.393 "write_zeroes": true, 00:21:22.393 "zcopy": true, 00:21:22.393 "get_zone_info": false, 00:21:22.393 "zone_management": false, 00:21:22.393 "zone_append": false, 00:21:22.393 "compare": false, 00:21:22.393 "compare_and_write": false, 00:21:22.393 "abort": true, 00:21:22.393 "seek_hole": false, 00:21:22.393 "seek_data": false, 00:21:22.393 "copy": true, 00:21:22.393 "nvme_iov_md": false 00:21:22.393 }, 00:21:22.393 "memory_domains": [ 00:21:22.393 { 00:21:22.393 "dma_device_id": "system", 00:21:22.393 "dma_device_type": 1 00:21:22.393 }, 00:21:22.393 { 00:21:22.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:22.393 "dma_device_type": 2 00:21:22.393 } 00:21:22.393 ], 00:21:22.393 "driver_specific": { 00:21:22.393 "passthru": { 00:21:22.393 "name": "pt1", 00:21:22.393 "base_bdev_name": "malloc1" 00:21:22.393 } 00:21:22.393 } 00:21:22.393 }' 00:21:22.393 11:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:22.393 11:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:22.393 11:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:22.393 11:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:22.393 11:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:22.393 11:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 
00:21:22.393 11:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:22.393 11:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:22.652 11:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:22.652 11:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:22.652 11:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:22.652 11:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:22.652 11:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:22.652 11:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:22.652 11:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:22.911 11:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:22.911 "name": "pt2", 00:21:22.911 "aliases": [ 00:21:22.911 "00000000-0000-0000-0000-000000000002" 00:21:22.911 ], 00:21:22.911 "product_name": "passthru", 00:21:22.911 "block_size": 512, 00:21:22.911 "num_blocks": 65536, 00:21:22.911 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:22.911 "assigned_rate_limits": { 00:21:22.911 "rw_ios_per_sec": 0, 00:21:22.911 "rw_mbytes_per_sec": 0, 00:21:22.911 "r_mbytes_per_sec": 0, 00:21:22.911 "w_mbytes_per_sec": 0 00:21:22.911 }, 00:21:22.911 "claimed": true, 00:21:22.911 "claim_type": "exclusive_write", 00:21:22.911 "zoned": false, 00:21:22.911 "supported_io_types": { 00:21:22.911 "read": true, 00:21:22.911 "write": true, 00:21:22.911 "unmap": true, 00:21:22.911 "flush": true, 00:21:22.911 "reset": true, 00:21:22.911 "nvme_admin": false, 00:21:22.911 "nvme_io": false, 00:21:22.911 "nvme_io_md": false, 00:21:22.911 "write_zeroes": true, 
00:21:22.911 "zcopy": true, 00:21:22.911 "get_zone_info": false, 00:21:22.911 "zone_management": false, 00:21:22.911 "zone_append": false, 00:21:22.911 "compare": false, 00:21:22.911 "compare_and_write": false, 00:21:22.911 "abort": true, 00:21:22.911 "seek_hole": false, 00:21:22.911 "seek_data": false, 00:21:22.911 "copy": true, 00:21:22.911 "nvme_iov_md": false 00:21:22.911 }, 00:21:22.911 "memory_domains": [ 00:21:22.911 { 00:21:22.911 "dma_device_id": "system", 00:21:22.911 "dma_device_type": 1 00:21:22.911 }, 00:21:22.911 { 00:21:22.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:22.911 "dma_device_type": 2 00:21:22.911 } 00:21:22.911 ], 00:21:22.911 "driver_specific": { 00:21:22.912 "passthru": { 00:21:22.912 "name": "pt2", 00:21:22.912 "base_bdev_name": "malloc2" 00:21:22.912 } 00:21:22.912 } 00:21:22.912 }' 00:21:22.912 11:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:22.912 11:03:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:22.912 11:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:22.912 11:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:23.171 11:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:23.171 11:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:23.171 11:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:23.171 11:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:23.171 11:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:23.171 11:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:23.171 11:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:23.171 11:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # 
[[ null == null ]] 00:21:23.171 11:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:23.171 11:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:21:23.171 11:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:23.739 11:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:23.739 "name": "pt3", 00:21:23.739 "aliases": [ 00:21:23.739 "00000000-0000-0000-0000-000000000003" 00:21:23.739 ], 00:21:23.739 "product_name": "passthru", 00:21:23.739 "block_size": 512, 00:21:23.739 "num_blocks": 65536, 00:21:23.739 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:23.739 "assigned_rate_limits": { 00:21:23.739 "rw_ios_per_sec": 0, 00:21:23.739 "rw_mbytes_per_sec": 0, 00:21:23.739 "r_mbytes_per_sec": 0, 00:21:23.739 "w_mbytes_per_sec": 0 00:21:23.739 }, 00:21:23.739 "claimed": true, 00:21:23.739 "claim_type": "exclusive_write", 00:21:23.739 "zoned": false, 00:21:23.739 "supported_io_types": { 00:21:23.739 "read": true, 00:21:23.739 "write": true, 00:21:23.739 "unmap": true, 00:21:23.739 "flush": true, 00:21:23.739 "reset": true, 00:21:23.739 "nvme_admin": false, 00:21:23.739 "nvme_io": false, 00:21:23.739 "nvme_io_md": false, 00:21:23.739 "write_zeroes": true, 00:21:23.739 "zcopy": true, 00:21:23.739 "get_zone_info": false, 00:21:23.739 "zone_management": false, 00:21:23.739 "zone_append": false, 00:21:23.739 "compare": false, 00:21:23.739 "compare_and_write": false, 00:21:23.739 "abort": true, 00:21:23.739 "seek_hole": false, 00:21:23.739 "seek_data": false, 00:21:23.739 "copy": true, 00:21:23.739 "nvme_iov_md": false 00:21:23.739 }, 00:21:23.739 "memory_domains": [ 00:21:23.739 { 00:21:23.739 "dma_device_id": "system", 00:21:23.739 "dma_device_type": 1 00:21:23.739 }, 00:21:23.739 { 00:21:23.739 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:21:23.739 "dma_device_type": 2 00:21:23.739 } 00:21:23.739 ], 00:21:23.739 "driver_specific": { 00:21:23.739 "passthru": { 00:21:23.739 "name": "pt3", 00:21:23.739 "base_bdev_name": "malloc3" 00:21:23.739 } 00:21:23.739 } 00:21:23.739 }' 00:21:23.739 11:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:23.739 11:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:23.739 11:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:23.739 11:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:23.998 11:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:23.998 11:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:23.998 11:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:23.998 11:03:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:23.998 11:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:23.998 11:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:23.998 11:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:23.998 11:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:23.998 11:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:23.998 11:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:21:24.257 [2024-07-25 11:03:31.302734] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:24.257 11:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 
55c2dafb-4919-4bc4-833c-363beebdd0b5 '!=' 55c2dafb-4919-4bc4-833c-363beebdd0b5 ']' 00:21:24.257 11:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:21:24.257 11:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:24.257 11:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:21:24.257 11:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@508 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:24.516 [2024-07-25 11:03:31.523029] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:24.516 11:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:24.516 11:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:24.516 11:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:24.516 11:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:24.516 11:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:24.516 11:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:24.516 11:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:24.516 11:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:24.516 11:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:24.516 11:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:24.516 11:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.516 11:03:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.775 11:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:24.775 "name": "raid_bdev1", 00:21:24.775 "uuid": "55c2dafb-4919-4bc4-833c-363beebdd0b5", 00:21:24.775 "strip_size_kb": 0, 00:21:24.775 "state": "online", 00:21:24.775 "raid_level": "raid1", 00:21:24.775 "superblock": true, 00:21:24.775 "num_base_bdevs": 3, 00:21:24.775 "num_base_bdevs_discovered": 2, 00:21:24.775 "num_base_bdevs_operational": 2, 00:21:24.775 "base_bdevs_list": [ 00:21:24.775 { 00:21:24.775 "name": null, 00:21:24.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.775 "is_configured": false, 00:21:24.775 "data_offset": 2048, 00:21:24.775 "data_size": 63488 00:21:24.775 }, 00:21:24.775 { 00:21:24.775 "name": "pt2", 00:21:24.775 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:24.775 "is_configured": true, 00:21:24.775 "data_offset": 2048, 00:21:24.775 "data_size": 63488 00:21:24.775 }, 00:21:24.775 { 00:21:24.775 "name": "pt3", 00:21:24.775 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:24.775 "is_configured": true, 00:21:24.775 "data_offset": 2048, 00:21:24.775 "data_size": 63488 00:21:24.775 } 00:21:24.775 ] 00:21:24.775 }' 00:21:24.775 11:03:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:24.775 11:03:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:25.343 11:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:25.602 [2024-07-25 11:03:32.517614] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:25.602 [2024-07-25 11:03:32.517644] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:25.602 [2024-07-25 11:03:32.517722] bdev_raid.c: 
487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:25.602 [2024-07-25 11:03:32.517790] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:25.602 [2024-07-25 11:03:32.517815] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:21:25.602 11:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:21:25.602 11:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.861 11:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:21:25.861 11:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:21:25.861 11:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:25.861 11:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:21:25.861 11:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:26.120 11:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:26.120 11:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:21:26.120 11:03:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:26.120 11:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:26.120 11:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:21:26.120 11:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:21:26.120 11:03:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:21:26.120 11:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:26.385 [2024-07-25 11:03:33.432037] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:26.385 [2024-07-25 11:03:33.432102] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:26.385 [2024-07-25 11:03:33.432124] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042980 00:21:26.385 [2024-07-25 11:03:33.432150] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:26.385 [2024-07-25 11:03:33.435239] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:26.385 [2024-07-25 11:03:33.435275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:26.385 [2024-07-25 11:03:33.435363] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:26.385 [2024-07-25 11:03:33.435436] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:26.385 pt2 00:21:26.385 11:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:26.385 11:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:26.385 11:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:26.385 11:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:26.385 11:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:26.385 11:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 
00:21:26.385 11:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:26.385 11:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:26.385 11:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:26.385 11:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:26.385 11:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.385 11:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.646 11:03:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:26.646 "name": "raid_bdev1", 00:21:26.646 "uuid": "55c2dafb-4919-4bc4-833c-363beebdd0b5", 00:21:26.646 "strip_size_kb": 0, 00:21:26.646 "state": "configuring", 00:21:26.646 "raid_level": "raid1", 00:21:26.646 "superblock": true, 00:21:26.646 "num_base_bdevs": 3, 00:21:26.646 "num_base_bdevs_discovered": 1, 00:21:26.646 "num_base_bdevs_operational": 2, 00:21:26.646 "base_bdevs_list": [ 00:21:26.646 { 00:21:26.646 "name": null, 00:21:26.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.646 "is_configured": false, 00:21:26.646 "data_offset": 2048, 00:21:26.646 "data_size": 63488 00:21:26.646 }, 00:21:26.646 { 00:21:26.646 "name": "pt2", 00:21:26.646 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:26.646 "is_configured": true, 00:21:26.646 "data_offset": 2048, 00:21:26.646 "data_size": 63488 00:21:26.646 }, 00:21:26.646 { 00:21:26.646 "name": null, 00:21:26.646 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:26.646 "is_configured": false, 00:21:26.646 "data_offset": 2048, 00:21:26.646 "data_size": 63488 00:21:26.646 } 00:21:26.646 ] 00:21:26.646 }' 00:21:26.646 11:03:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:26.646 11:03:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:27.215 11:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:21:27.215 11:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:21:27.215 11:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:21:27.215 11:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:27.474 [2024-07-25 11:03:34.434876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:27.474 [2024-07-25 11:03:34.434945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:27.474 [2024-07-25 11:03:34.434970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042f80 00:21:27.474 [2024-07-25 11:03:34.434988] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:27.474 [2024-07-25 11:03:34.435550] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:27.474 [2024-07-25 11:03:34.435581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:27.474 [2024-07-25 11:03:34.435673] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:27.474 [2024-07-25 11:03:34.435704] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:27.474 [2024-07-25 11:03:34.435887] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:27.474 [2024-07-25 11:03:34.435911] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:27.474 [2024-07-25 11:03:34.436230] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000107e0 00:21:27.474 [2024-07-25 11:03:34.436473] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:27.474 [2024-07-25 11:03:34.436488] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:21:27.474 [2024-07-25 11:03:34.436696] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:27.474 pt3 00:21:27.474 11:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:27.474 11:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:27.474 11:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:27.474 11:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:27.474 11:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:27.474 11:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:27.474 11:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:27.474 11:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:27.474 11:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:27.474 11:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:27.474 11:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.474 11:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.733 11:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:27.733 
"name": "raid_bdev1", 00:21:27.733 "uuid": "55c2dafb-4919-4bc4-833c-363beebdd0b5", 00:21:27.733 "strip_size_kb": 0, 00:21:27.733 "state": "online", 00:21:27.733 "raid_level": "raid1", 00:21:27.733 "superblock": true, 00:21:27.733 "num_base_bdevs": 3, 00:21:27.733 "num_base_bdevs_discovered": 2, 00:21:27.733 "num_base_bdevs_operational": 2, 00:21:27.733 "base_bdevs_list": [ 00:21:27.733 { 00:21:27.733 "name": null, 00:21:27.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.733 "is_configured": false, 00:21:27.733 "data_offset": 2048, 00:21:27.733 "data_size": 63488 00:21:27.733 }, 00:21:27.733 { 00:21:27.733 "name": "pt2", 00:21:27.733 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:27.733 "is_configured": true, 00:21:27.733 "data_offset": 2048, 00:21:27.733 "data_size": 63488 00:21:27.733 }, 00:21:27.733 { 00:21:27.733 "name": "pt3", 00:21:27.733 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:27.733 "is_configured": true, 00:21:27.733 "data_offset": 2048, 00:21:27.733 "data_size": 63488 00:21:27.733 } 00:21:27.733 ] 00:21:27.733 }' 00:21:27.733 11:03:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:27.733 11:03:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.302 11:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:28.562 [2024-07-25 11:03:35.429560] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:28.562 [2024-07-25 11:03:35.429595] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:28.562 [2024-07-25 11:03:35.429670] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:28.562 [2024-07-25 11:03:35.429746] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:21:28.562 [2024-07-25 11:03:35.429762] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:28.562 11:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.562 11:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:21:28.562 11:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:21:28.562 11:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:21:28.562 11:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 3 -gt 2 ']' 00:21:28.562 11:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # i=2 00:21:28.562 11:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@550 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:28.822 11:03:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:29.081 [2024-07-25 11:03:36.075269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:29.081 [2024-07-25 11:03:36.075328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:29.081 [2024-07-25 11:03:36.075357] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043280 00:21:29.081 [2024-07-25 11:03:36.075373] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:29.081 [2024-07-25 11:03:36.078179] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:29.081 [2024-07-25 11:03:36.078212] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 
00:21:29.081 [2024-07-25 11:03:36.078313] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:29.081 [2024-07-25 11:03:36.078369] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:29.081 [2024-07-25 11:03:36.078558] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:29.081 [2024-07-25 11:03:36.078578] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:29.081 [2024-07-25 11:03:36.078602] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:21:29.081 [2024-07-25 11:03:36.078706] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:29.081 pt1 00:21:29.081 11:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 3 -gt 2 ']' 00:21:29.081 11:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@560 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:29.081 11:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:29.081 11:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:29.081 11:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:29.081 11:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:29.081 11:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:29.081 11:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:29.081 11:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:29.081 11:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:29.081 11:03:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:29.081 11:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.081 11:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.341 11:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:29.341 "name": "raid_bdev1", 00:21:29.341 "uuid": "55c2dafb-4919-4bc4-833c-363beebdd0b5", 00:21:29.341 "strip_size_kb": 0, 00:21:29.341 "state": "configuring", 00:21:29.341 "raid_level": "raid1", 00:21:29.341 "superblock": true, 00:21:29.341 "num_base_bdevs": 3, 00:21:29.341 "num_base_bdevs_discovered": 1, 00:21:29.341 "num_base_bdevs_operational": 2, 00:21:29.341 "base_bdevs_list": [ 00:21:29.341 { 00:21:29.341 "name": null, 00:21:29.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.341 "is_configured": false, 00:21:29.341 "data_offset": 2048, 00:21:29.341 "data_size": 63488 00:21:29.341 }, 00:21:29.341 { 00:21:29.341 "name": "pt2", 00:21:29.341 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:29.341 "is_configured": true, 00:21:29.341 "data_offset": 2048, 00:21:29.341 "data_size": 63488 00:21:29.341 }, 00:21:29.341 { 00:21:29.341 "name": null, 00:21:29.341 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:29.341 "is_configured": false, 00:21:29.341 "data_offset": 2048, 00:21:29.341 "data_size": 63488 00:21:29.341 } 00:21:29.341 ] 00:21:29.341 }' 00:21:29.341 11:03:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:29.341 11:03:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.279 11:03:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:21:30.279 
11:03:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:30.279 11:03:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # [[ false == \f\a\l\s\e ]] 00:21:30.279 11:03:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:30.848 [2024-07-25 11:03:37.820213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:30.848 [2024-07-25 11:03:37.820276] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:30.848 [2024-07-25 11:03:37.820303] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043880 00:21:30.848 [2024-07-25 11:03:37.820318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:30.848 [2024-07-25 11:03:37.820883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:30.848 [2024-07-25 11:03:37.820907] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:30.848 [2024-07-25 11:03:37.821004] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:30.848 [2024-07-25 11:03:37.821031] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:30.848 [2024-07-25 11:03:37.821199] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:21:30.848 [2024-07-25 11:03:37.821214] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:30.848 [2024-07-25 11:03:37.821519] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:21:30.848 [2024-07-25 11:03:37.821771] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:21:30.848 [2024-07-25 11:03:37.821790] 
bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:21:30.848 [2024-07-25 11:03:37.821971] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:30.848 pt3 00:21:30.848 11:03:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:30.848 11:03:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:30.848 11:03:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:30.848 11:03:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:30.848 11:03:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:30.848 11:03:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:30.848 11:03:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:30.848 11:03:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:30.848 11:03:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:30.848 11:03:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:30.848 11:03:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:30.848 11:03:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:31.108 11:03:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:31.108 "name": "raid_bdev1", 00:21:31.108 "uuid": "55c2dafb-4919-4bc4-833c-363beebdd0b5", 00:21:31.108 "strip_size_kb": 0, 00:21:31.108 "state": "online", 00:21:31.108 "raid_level": "raid1", 00:21:31.108 "superblock": 
true, 00:21:31.108 "num_base_bdevs": 3, 00:21:31.108 "num_base_bdevs_discovered": 2, 00:21:31.108 "num_base_bdevs_operational": 2, 00:21:31.108 "base_bdevs_list": [ 00:21:31.108 { 00:21:31.108 "name": null, 00:21:31.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.108 "is_configured": false, 00:21:31.108 "data_offset": 2048, 00:21:31.108 "data_size": 63488 00:21:31.108 }, 00:21:31.108 { 00:21:31.108 "name": "pt2", 00:21:31.108 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:31.108 "is_configured": true, 00:21:31.108 "data_offset": 2048, 00:21:31.108 "data_size": 63488 00:21:31.108 }, 00:21:31.108 { 00:21:31.108 "name": "pt3", 00:21:31.108 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:31.108 "is_configured": true, 00:21:31.108 "data_offset": 2048, 00:21:31.108 "data_size": 63488 00:21:31.108 } 00:21:31.108 ] 00:21:31.108 }' 00:21:31.108 11:03:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:31.108 11:03:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:31.686 11:03:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:21:31.686 11:03:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:31.950 11:03:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:21:31.950 11:03:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:31.950 11:03:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:21:32.210 [2024-07-25 11:03:39.071916] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:32.210 11:03:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- 
# '[' 55c2dafb-4919-4bc4-833c-363beebdd0b5 '!=' 55c2dafb-4919-4bc4-833c-363beebdd0b5 ']' 00:21:32.210 11:03:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 3627545 00:21:32.210 11:03:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 3627545 ']' 00:21:32.210 11:03:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 3627545 00:21:32.210 11:03:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:21:32.210 11:03:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:32.210 11:03:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3627545 00:21:32.210 11:03:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:32.210 11:03:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:32.210 11:03:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3627545' 00:21:32.210 killing process with pid 3627545 00:21:32.210 11:03:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 3627545 00:21:32.210 [2024-07-25 11:03:39.147790] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:32.210 [2024-07-25 11:03:39.147883] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:32.210 11:03:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 3627545 00:21:32.210 [2024-07-25 11:03:39.147953] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:32.210 [2024-07-25 11:03:39.147972] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:21:32.470 [2024-07-25 11:03:39.468797] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 
00:21:34.376 11:03:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:21:34.376 00:21:34.376 real 0m23.826s 00:21:34.376 user 0m41.906s 00:21:34.377 sys 0m3.935s 00:21:34.377 11:03:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:34.377 11:03:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.377 ************************************ 00:21:34.377 END TEST raid_superblock_test 00:21:34.377 ************************************ 00:21:34.377 11:03:41 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:21:34.377 11:03:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:34.377 11:03:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:34.377 11:03:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:34.377 ************************************ 00:21:34.377 START TEST raid_read_error_test 00:21:34.377 ************************************ 00:21:34.377 11:03:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:21:34.377 11:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:21:34.377 11:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:21:34.377 11:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:21:34.377 11:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:21:34.377 11:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:21:34.377 11:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:21:34.377 11:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:21:34.377 11:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 
00:21:34.377 11:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:21:34.377 11:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:21:34.377 11:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:21:34.377 11:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:21:34.377 11:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:21:34.377 11:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:21:34.377 11:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:34.377 11:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:21:34.377 11:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:21:34.377 11:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:21:34.377 11:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:21:34.377 11:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:21:34.377 11:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:21:34.377 11:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:21:34.377 11:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:21:34.377 11:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:21:34.377 11:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.busmw75pl1 00:21:34.377 11:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=3631954 00:21:34.377 11:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 3631954 
/var/tmp/spdk-raid.sock 00:21:34.377 11:03:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:34.377 11:03:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 3631954 ']' 00:21:34.377 11:03:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:34.377 11:03:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:34.377 11:03:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:34.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:34.377 11:03:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:34.377 11:03:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:34.377 [2024-07-25 11:03:41.335409] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:21:34.377 [2024-07-25 11:03:41.335529] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3631954 ] 00:21:34.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.377 EAL: Requested device 0000:3d:01.0 cannot be used 00:21:34.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.377 EAL: Requested device 0000:3d:01.1 cannot be used 00:21:34.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.377 EAL: Requested device 0000:3d:01.2 cannot be used 00:21:34.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.377 EAL: Requested device 0000:3d:01.3 cannot be used 00:21:34.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.377 EAL: Requested device 0000:3d:01.4 cannot be used 00:21:34.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.377 EAL: Requested device 0000:3d:01.5 cannot be used 00:21:34.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.377 EAL: Requested device 0000:3d:01.6 cannot be used 00:21:34.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.377 EAL: Requested device 0000:3d:01.7 cannot be used 00:21:34.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.377 EAL: Requested device 0000:3d:02.0 cannot be used 00:21:34.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.377 EAL: Requested device 0000:3d:02.1 cannot be used 00:21:34.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.377 EAL: Requested device 0000:3d:02.2 cannot be used 00:21:34.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.377 EAL: Requested device 0000:3d:02.3 cannot be used 
00:21:34.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.377 EAL: Requested device 0000:3d:02.4 cannot be used 00:21:34.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.377 EAL: Requested device 0000:3d:02.5 cannot be used 00:21:34.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.377 EAL: Requested device 0000:3d:02.6 cannot be used 00:21:34.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.377 EAL: Requested device 0000:3d:02.7 cannot be used 00:21:34.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.377 EAL: Requested device 0000:3f:01.0 cannot be used 00:21:34.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.377 EAL: Requested device 0000:3f:01.1 cannot be used 00:21:34.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.377 EAL: Requested device 0000:3f:01.2 cannot be used 00:21:34.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.377 EAL: Requested device 0000:3f:01.3 cannot be used 00:21:34.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.377 EAL: Requested device 0000:3f:01.4 cannot be used 00:21:34.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.377 EAL: Requested device 0000:3f:01.5 cannot be used 00:21:34.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.377 EAL: Requested device 0000:3f:01.6 cannot be used 00:21:34.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.377 EAL: Requested device 0000:3f:01.7 cannot be used 00:21:34.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.377 EAL: Requested device 0000:3f:02.0 cannot be used 00:21:34.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.377 EAL: Requested device 0000:3f:02.1 cannot be used 00:21:34.377 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.377 EAL: Requested device 0000:3f:02.2 cannot be used 00:21:34.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.377 EAL: Requested device 0000:3f:02.3 cannot be used 00:21:34.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.377 EAL: Requested device 0000:3f:02.4 cannot be used 00:21:34.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.377 EAL: Requested device 0000:3f:02.5 cannot be used 00:21:34.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.377 EAL: Requested device 0000:3f:02.6 cannot be used 00:21:34.377 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:34.377 EAL: Requested device 0000:3f:02.7 cannot be used 00:21:34.673 [2024-07-25 11:03:41.559454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.932 [2024-07-25 11:03:41.837983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.192 [2024-07-25 11:03:42.177189] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:35.192 [2024-07-25 11:03:42.177224] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:35.452 11:03:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:35.452 11:03:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:21:35.452 11:03:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:21:35.452 11:03:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:35.711 BaseBdev1_malloc 00:21:35.711 11:03:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:21:35.970 true 00:21:35.970 11:03:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:36.230 [2024-07-25 11:03:43.090905] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:36.230 [2024-07-25 11:03:43.090967] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:36.230 [2024-07-25 11:03:43.090993] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:21:36.230 [2024-07-25 11:03:43.091015] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:36.230 [2024-07-25 11:03:43.093817] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:36.230 [2024-07-25 11:03:43.093855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:36.230 BaseBdev1 00:21:36.230 11:03:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:21:36.230 11:03:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:36.490 BaseBdev2_malloc 00:21:36.490 11:03:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:21:36.490 true 00:21:36.748 11:03:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:36.748 [2024-07-25 11:03:43.819759] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:21:36.748 [2024-07-25 11:03:43.819818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:36.748 [2024-07-25 11:03:43.819844] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:21:36.748 [2024-07-25 11:03:43.819864] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:36.748 [2024-07-25 11:03:43.822630] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:36.748 [2024-07-25 11:03:43.822668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:36.748 BaseBdev2 00:21:36.748 11:03:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:21:36.748 11:03:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:37.007 BaseBdev3_malloc 00:21:37.007 11:03:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:21:37.266 true 00:21:37.266 11:03:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:37.525 [2024-07-25 11:03:44.554984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:37.525 [2024-07-25 11:03:44.555045] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:37.525 [2024-07-25 11:03:44.555072] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:21:37.525 [2024-07-25 11:03:44.555094] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:37.525 [2024-07-25 
11:03:44.557881] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:37.525 [2024-07-25 11:03:44.557917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:37.525 BaseBdev3 00:21:37.525 11:03:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:21:37.785 [2024-07-25 11:03:44.779616] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:37.785 [2024-07-25 11:03:44.781967] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:37.785 [2024-07-25 11:03:44.782057] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:37.785 [2024-07-25 11:03:44.782323] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:37.785 [2024-07-25 11:03:44.782340] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:37.785 [2024-07-25 11:03:44.782685] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:21:37.785 [2024-07-25 11:03:44.782936] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:37.785 [2024-07-25 11:03:44.782962] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:21:37.785 [2024-07-25 11:03:44.783206] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:37.785 11:03:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:37.785 11:03:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:37.785 11:03:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:21:37.785 11:03:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:37.785 11:03:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:37.785 11:03:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:37.785 11:03:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:37.785 11:03:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:37.785 11:03:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:37.785 11:03:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:37.785 11:03:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:37.785 11:03:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.044 11:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:38.044 "name": "raid_bdev1", 00:21:38.044 "uuid": "43fb5f43-eae8-4865-8197-01fd24ce4995", 00:21:38.044 "strip_size_kb": 0, 00:21:38.044 "state": "online", 00:21:38.044 "raid_level": "raid1", 00:21:38.044 "superblock": true, 00:21:38.044 "num_base_bdevs": 3, 00:21:38.044 "num_base_bdevs_discovered": 3, 00:21:38.044 "num_base_bdevs_operational": 3, 00:21:38.044 "base_bdevs_list": [ 00:21:38.044 { 00:21:38.044 "name": "BaseBdev1", 00:21:38.044 "uuid": "9ceb65c5-30ed-5948-8e19-b0bf5e97b9f5", 00:21:38.044 "is_configured": true, 00:21:38.044 "data_offset": 2048, 00:21:38.044 "data_size": 63488 00:21:38.044 }, 00:21:38.044 { 00:21:38.044 "name": "BaseBdev2", 00:21:38.044 "uuid": "b4cf38f7-d5cf-5788-9663-ff95c6703e29", 00:21:38.044 "is_configured": true, 00:21:38.044 "data_offset": 2048, 00:21:38.044 
"data_size": 63488 00:21:38.044 }, 00:21:38.044 { 00:21:38.044 "name": "BaseBdev3", 00:21:38.044 "uuid": "aec39d07-feba-5ba7-9673-d41ee8b2efc7", 00:21:38.044 "is_configured": true, 00:21:38.044 "data_offset": 2048, 00:21:38.044 "data_size": 63488 00:21:38.044 } 00:21:38.044 ] 00:21:38.044 }' 00:21:38.044 11:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:38.044 11:03:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:38.613 11:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:21:38.613 11:03:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:38.613 [2024-07-25 11:03:45.700073] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:21:39.551 11:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:21:39.810 11:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:21:39.810 11:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:21:39.810 11:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ read = \w\r\i\t\e ]] 00:21:39.810 11:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=3 00:21:39.811 11:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:39.811 11:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:39.811 11:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:39.811 11:03:46 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:39.811 11:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:39.811 11:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:39.811 11:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:39.811 11:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:39.811 11:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:39.811 11:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:39.811 11:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:39.811 11:03:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:40.070 11:03:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:40.070 "name": "raid_bdev1", 00:21:40.070 "uuid": "43fb5f43-eae8-4865-8197-01fd24ce4995", 00:21:40.070 "strip_size_kb": 0, 00:21:40.070 "state": "online", 00:21:40.070 "raid_level": "raid1", 00:21:40.070 "superblock": true, 00:21:40.070 "num_base_bdevs": 3, 00:21:40.070 "num_base_bdevs_discovered": 3, 00:21:40.070 "num_base_bdevs_operational": 3, 00:21:40.070 "base_bdevs_list": [ 00:21:40.070 { 00:21:40.070 "name": "BaseBdev1", 00:21:40.070 "uuid": "9ceb65c5-30ed-5948-8e19-b0bf5e97b9f5", 00:21:40.070 "is_configured": true, 00:21:40.070 "data_offset": 2048, 00:21:40.070 "data_size": 63488 00:21:40.070 }, 00:21:40.070 { 00:21:40.070 "name": "BaseBdev2", 00:21:40.070 "uuid": "b4cf38f7-d5cf-5788-9663-ff95c6703e29", 00:21:40.070 "is_configured": true, 00:21:40.070 "data_offset": 2048, 00:21:40.070 "data_size": 63488 00:21:40.070 }, 00:21:40.070 { 00:21:40.070 "name": 
"BaseBdev3", 00:21:40.070 "uuid": "aec39d07-feba-5ba7-9673-d41ee8b2efc7", 00:21:40.070 "is_configured": true, 00:21:40.070 "data_offset": 2048, 00:21:40.070 "data_size": 63488 00:21:40.070 } 00:21:40.070 ] 00:21:40.070 }' 00:21:40.070 11:03:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:40.070 11:03:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:40.639 11:03:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:40.899 [2024-07-25 11:03:47.830608] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:40.899 [2024-07-25 11:03:47.830648] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:40.899 [2024-07-25 11:03:47.833896] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:40.899 [2024-07-25 11:03:47.833950] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:40.899 [2024-07-25 11:03:47.834073] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:40.899 [2024-07-25 11:03:47.834089] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:40.899 0 00:21:40.899 11:03:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 3631954 00:21:40.899 11:03:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 3631954 ']' 00:21:40.899 11:03:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 3631954 00:21:40.899 11:03:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:21:40.899 11:03:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:40.899 11:03:47 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3631954 00:21:40.899 11:03:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:40.899 11:03:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:40.899 11:03:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3631954' 00:21:40.899 killing process with pid 3631954 00:21:40.899 11:03:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 3631954 00:21:40.899 [2024-07-25 11:03:47.908677] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:40.899 11:03:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 3631954 00:21:41.158 [2024-07-25 11:03:48.145394] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:43.065 11:03:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.busmw75pl1 00:21:43.065 11:03:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:21:43.065 11:03:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:21:43.065 11:03:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:21:43.065 11:03:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:21:43.065 11:03:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:43.065 11:03:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:21:43.065 11:03:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:21:43.065 00:21:43.065 real 0m8.708s 00:21:43.065 user 0m12.342s 00:21:43.065 sys 0m1.323s 00:21:43.065 11:03:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:43.065 11:03:49 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:43.065 ************************************ 00:21:43.065 END TEST raid_read_error_test 00:21:43.065 ************************************ 00:21:43.065 11:03:49 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:21:43.065 11:03:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:43.065 11:03:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:43.065 11:03:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:43.065 ************************************ 00:21:43.065 START TEST raid_write_error_test 00:21:43.065 ************************************ 00:21:43.065 11:03:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:21:43.065 11:03:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:21:43.065 11:03:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=3 00:21:43.065 11:03:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:21:43.065 11:03:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:21:43.065 11:03:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:21:43.065 11:03:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:21:43.065 11:03:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:21:43.065 11:03:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:21:43.065 11:03:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:21:43.065 11:03:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:21:43.065 11:03:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:21:43.065 11:03:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:21:43.065 11:03:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:21:43.065 11:03:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:21:43.065 11:03:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:43.065 11:03:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:21:43.065 11:03:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:21:43.065 11:03:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:21:43.065 11:03:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:21:43.065 11:03:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:21:43.065 11:03:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:21:43.065 11:03:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:21:43.065 11:03:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:21:43.065 11:03:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:21:43.065 11:03:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.w3kZc1ph3b 00:21:43.065 11:03:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=3633415 00:21:43.065 11:03:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 3633415 /var/tmp/spdk-raid.sock 00:21:43.065 11:03:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:43.065 11:03:50 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 3633415 ']' 00:21:43.065 11:03:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:43.065 11:03:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:43.065 11:03:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:43.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:43.065 11:03:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:43.065 11:03:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.065 [2024-07-25 11:03:50.136387] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:43.065 [2024-07-25 11:03:50.136511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3633415 ] 00:21:43.325 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.325 EAL: Requested device 0000:3d:01.0 cannot be used 00:21:43.325 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.325 EAL: Requested device 0000:3d:01.1 cannot be used 00:21:43.325 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.325 EAL: Requested device 0000:3d:01.2 cannot be used 00:21:43.325 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.325 EAL: Requested device 0000:3d:01.3 cannot be used 00:21:43.325 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.325 EAL: Requested device 0000:3d:01.4 cannot be used 00:21:43.325 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:21:43.325 EAL: Requested device 0000:3d:01.5 cannot be used 00:21:43.325 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.325 EAL: Requested device 0000:3d:01.6 cannot be used 00:21:43.325 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.325 EAL: Requested device 0000:3d:01.7 cannot be used 00:21:43.325 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.325 EAL: Requested device 0000:3d:02.0 cannot be used 00:21:43.325 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.325 EAL: Requested device 0000:3d:02.1 cannot be used 00:21:43.325 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.325 EAL: Requested device 0000:3d:02.2 cannot be used 00:21:43.325 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.325 EAL: Requested device 0000:3d:02.3 cannot be used 00:21:43.325 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.325 EAL: Requested device 0000:3d:02.4 cannot be used 00:21:43.325 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.325 EAL: Requested device 0000:3d:02.5 cannot be used 00:21:43.325 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.325 EAL: Requested device 0000:3d:02.6 cannot be used 00:21:43.325 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.325 EAL: Requested device 0000:3d:02.7 cannot be used 00:21:43.325 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.325 EAL: Requested device 0000:3f:01.0 cannot be used 00:21:43.325 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.325 EAL: Requested device 0000:3f:01.1 cannot be used 00:21:43.325 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.325 EAL: Requested device 0000:3f:01.2 cannot be used 00:21:43.325 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:21:43.325 EAL: Requested device 0000:3f:01.3 cannot be used 00:21:43.325 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.325 EAL: Requested device 0000:3f:01.4 cannot be used 00:21:43.325 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.325 EAL: Requested device 0000:3f:01.5 cannot be used 00:21:43.325 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.325 EAL: Requested device 0000:3f:01.6 cannot be used 00:21:43.325 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.325 EAL: Requested device 0000:3f:01.7 cannot be used 00:21:43.325 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.325 EAL: Requested device 0000:3f:02.0 cannot be used 00:21:43.325 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.325 EAL: Requested device 0000:3f:02.1 cannot be used 00:21:43.325 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.325 EAL: Requested device 0000:3f:02.2 cannot be used 00:21:43.325 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.325 EAL: Requested device 0000:3f:02.3 cannot be used 00:21:43.325 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.325 EAL: Requested device 0000:3f:02.4 cannot be used 00:21:43.325 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.325 EAL: Requested device 0000:3f:02.5 cannot be used 00:21:43.325 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.325 EAL: Requested device 0000:3f:02.6 cannot be used 00:21:43.325 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:43.325 EAL: Requested device 0000:3f:02.7 cannot be used 00:21:43.325 [2024-07-25 11:03:50.364839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.585 [2024-07-25 11:03:50.648118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.153 
[2024-07-25 11:03:50.993472] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:44.153 [2024-07-25 11:03:50.993527] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:44.153 11:03:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:44.153 11:03:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:21:44.153 11:03:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:21:44.153 11:03:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:44.412 BaseBdev1_malloc 00:21:44.412 11:03:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:21:44.671 true 00:21:44.671 11:03:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:44.929 [2024-07-25 11:03:51.894698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:44.929 [2024-07-25 11:03:51.894760] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:44.929 [2024-07-25 11:03:51.894788] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:21:44.929 [2024-07-25 11:03:51.894810] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:44.929 [2024-07-25 11:03:51.897582] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:44.929 [2024-07-25 11:03:51.897622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 
00:21:44.929 BaseBdev1 00:21:44.929 11:03:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:21:44.929 11:03:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:45.188 BaseBdev2_malloc 00:21:45.188 11:03:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:21:45.446 true 00:21:45.446 11:03:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:45.704 [2024-07-25 11:03:52.625443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:45.704 [2024-07-25 11:03:52.625505] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:45.704 [2024-07-25 11:03:52.625528] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:21:45.704 [2024-07-25 11:03:52.625549] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:45.704 [2024-07-25 11:03:52.628266] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:45.704 [2024-07-25 11:03:52.628303] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:45.704 BaseBdev2 00:21:45.704 11:03:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:21:45.704 11:03:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:45.963 BaseBdev3_malloc 00:21:45.963 11:03:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:21:46.222 true 00:21:46.222 11:03:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:46.479 [2024-07-25 11:03:53.348861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:46.479 [2024-07-25 11:03:53.348922] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:46.479 [2024-07-25 11:03:53.348950] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:21:46.479 [2024-07-25 11:03:53.348967] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:46.479 [2024-07-25 11:03:53.351741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:46.479 [2024-07-25 11:03:53.351778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:46.479 BaseBdev3 00:21:46.480 11:03:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:21:46.480 [2024-07-25 11:03:53.573491] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:46.480 [2024-07-25 11:03:53.575848] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:46.480 [2024-07-25 11:03:53.575937] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:46.480 [2024-07-25 11:03:53.576210] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:46.480 [2024-07-25 11:03:53.576229] 
bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:46.480 [2024-07-25 11:03:53.576571] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:21:46.480 [2024-07-25 11:03:53.576825] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:46.480 [2024-07-25 11:03:53.576857] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:21:46.480 [2024-07-25 11:03:53.577098] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:46.480 11:03:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:46.480 11:03:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:46.480 11:03:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:46.480 11:03:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:46.480 11:03:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:46.480 11:03:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:46.480 11:03:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:46.480 11:03:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:46.737 11:03:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:46.738 11:03:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:46.738 11:03:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.738 11:03:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.738 11:03:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:46.738 "name": "raid_bdev1", 00:21:46.738 "uuid": "1b8efd11-f72d-4cff-ae79-34c45392ef1c", 00:21:46.738 "strip_size_kb": 0, 00:21:46.738 "state": "online", 00:21:46.738 "raid_level": "raid1", 00:21:46.738 "superblock": true, 00:21:46.738 "num_base_bdevs": 3, 00:21:46.738 "num_base_bdevs_discovered": 3, 00:21:46.738 "num_base_bdevs_operational": 3, 00:21:46.738 "base_bdevs_list": [ 00:21:46.738 { 00:21:46.738 "name": "BaseBdev1", 00:21:46.738 "uuid": "584162eb-eeb4-514c-994c-9797c30cada6", 00:21:46.738 "is_configured": true, 00:21:46.738 "data_offset": 2048, 00:21:46.738 "data_size": 63488 00:21:46.738 }, 00:21:46.738 { 00:21:46.738 "name": "BaseBdev2", 00:21:46.738 "uuid": "37e8d5e6-31fb-51f6-a227-f6c70f9fd945", 00:21:46.738 "is_configured": true, 00:21:46.738 "data_offset": 2048, 00:21:46.738 "data_size": 63488 00:21:46.738 }, 00:21:46.738 { 00:21:46.738 "name": "BaseBdev3", 00:21:46.738 "uuid": "d1eccae6-01a2-555c-bcf1-b0b993de04bd", 00:21:46.738 "is_configured": true, 00:21:46.738 "data_offset": 2048, 00:21:46.738 "data_size": 63488 00:21:46.738 } 00:21:46.738 ] 00:21:46.738 }' 00:21:46.738 11:03:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:46.738 11:03:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:47.306 11:03:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:21:47.306 11:03:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:47.565 [2024-07-25 11:03:54.510041] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:21:48.542 11:03:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:21:48.542 [2024-07-25 11:03:55.624778] bdev_raid.c:2263:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:21:48.542 [2024-07-25 11:03:55.624849] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:48.542 [2024-07-25 11:03:55.625088] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000107e0 00:21:48.542 11:03:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:21:48.542 11:03:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:21:48.542 11:03:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ write = \w\r\i\t\e ]] 00:21:48.542 11:03:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # expected_num_base_bdevs=2 00:21:48.542 11:03:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:48.542 11:03:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:48.542 11:03:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:48.542 11:03:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:48.543 11:03:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:48.543 11:03:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:48.543 11:03:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:48.543 11:03:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:48.543 11:03:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:21:48.543 11:03:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:48.543 11:03:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.543 11:03:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.802 11:03:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:48.802 "name": "raid_bdev1", 00:21:48.802 "uuid": "1b8efd11-f72d-4cff-ae79-34c45392ef1c", 00:21:48.802 "strip_size_kb": 0, 00:21:48.802 "state": "online", 00:21:48.802 "raid_level": "raid1", 00:21:48.802 "superblock": true, 00:21:48.802 "num_base_bdevs": 3, 00:21:48.802 "num_base_bdevs_discovered": 2, 00:21:48.802 "num_base_bdevs_operational": 2, 00:21:48.802 "base_bdevs_list": [ 00:21:48.802 { 00:21:48.802 "name": null, 00:21:48.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.802 "is_configured": false, 00:21:48.802 "data_offset": 2048, 00:21:48.802 "data_size": 63488 00:21:48.802 }, 00:21:48.802 { 00:21:48.802 "name": "BaseBdev2", 00:21:48.802 "uuid": "37e8d5e6-31fb-51f6-a227-f6c70f9fd945", 00:21:48.802 "is_configured": true, 00:21:48.802 "data_offset": 2048, 00:21:48.802 "data_size": 63488 00:21:48.802 }, 00:21:48.802 { 00:21:48.802 "name": "BaseBdev3", 00:21:48.802 "uuid": "d1eccae6-01a2-555c-bcf1-b0b993de04bd", 00:21:48.802 "is_configured": true, 00:21:48.802 "data_offset": 2048, 00:21:48.802 "data_size": 63488 00:21:48.802 } 00:21:48.802 ] 00:21:48.802 }' 00:21:48.802 11:03:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:48.802 11:03:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:49.371 11:03:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:49.631 [2024-07-25 11:03:56.655068] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:49.631 [2024-07-25 11:03:56.655116] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:49.631 [2024-07-25 11:03:56.658387] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:49.631 [2024-07-25 11:03:56.658443] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:49.631 [2024-07-25 11:03:56.658543] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:49.631 [2024-07-25 11:03:56.658563] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:49.631 0 00:21:49.631 11:03:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 3633415 00:21:49.631 11:03:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 3633415 ']' 00:21:49.631 11:03:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 3633415 00:21:49.631 11:03:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:21:49.631 11:03:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:49.631 11:03:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3633415 00:21:49.631 11:03:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:49.631 11:03:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:49.631 11:03:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3633415' 00:21:49.631 killing process with pid 3633415 00:21:49.631 11:03:56 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@969 -- # kill 3633415 00:21:49.631 [2024-07-25 11:03:56.730681] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:49.631 11:03:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 3633415 00:21:49.890 [2024-07-25 11:03:56.959695] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:51.797 11:03:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:21:51.797 11:03:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.w3kZc1ph3b 00:21:51.797 11:03:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:21:51.797 11:03:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:21:51.797 11:03:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:21:51.797 11:03:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:51.797 11:03:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:21:51.797 11:03:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:21:51.797 00:21:51.797 real 0m8.697s 00:21:51.797 user 0m12.301s 00:21:51.797 sys 0m1.359s 00:21:51.797 11:03:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:51.797 11:03:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.797 ************************************ 00:21:51.797 END TEST raid_write_error_test 00:21:51.797 ************************************ 00:21:51.797 11:03:58 bdev_raid -- bdev/bdev_raid.sh@945 -- # for n in {2..4} 00:21:51.797 11:03:58 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:21:51.797 11:03:58 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:21:51.797 11:03:58 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:21:51.797 11:03:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:51.797 11:03:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:51.797 ************************************ 00:21:51.797 START TEST raid_state_function_test 00:21:51.797 ************************************ 00:21:51.797 11:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:21:51.797 11:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:21:51.797 11:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:21:51.797 11:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:21:51.797 11:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:21:51.797 11:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:21:51.797 11:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:51.797 11:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:21:51.797 11:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:51.797 11:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:51.797 11:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:21:51.797 11:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:51.797 11:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:51.797 11:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:21:51.797 11:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:51.797 11:03:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:51.797 11:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:21:51.797 11:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:51.797 11:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:51.797 11:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:51.797 11:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:21:51.797 11:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:21:51.797 11:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:21:51.797 11:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:21:51.797 11:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:21:51.797 11:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:21:51.797 11:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:21:51.797 11:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:21:51.797 11:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:21:51.797 11:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:21:51.798 11:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=3635055 00:21:51.798 11:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3635055' 00:21:51.798 Process raid pid: 3635055 00:21:51.798 11:03:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@246 -- # waitforlisten 3635055 /var/tmp/spdk-raid.sock 00:21:51.798 11:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 3635055 ']' 00:21:51.798 11:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:51.798 11:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:51.798 11:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:51.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:51.798 11:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:51.798 11:03:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:51.798 11:03:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:51.798 [2024-07-25 11:03:58.901450] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:21:51.798 [2024-07-25 11:03:58.901562] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:52.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.057 EAL: Requested device 0000:3d:01.0 cannot be used 00:21:52.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.057 EAL: Requested device 0000:3d:01.1 cannot be used 00:21:52.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.057 EAL: Requested device 0000:3d:01.2 cannot be used 00:21:52.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.057 EAL: Requested device 0000:3d:01.3 cannot be used 00:21:52.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.057 EAL: Requested device 0000:3d:01.4 cannot be used 00:21:52.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.057 EAL: Requested device 0000:3d:01.5 cannot be used 00:21:52.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.057 EAL: Requested device 0000:3d:01.6 cannot be used 00:21:52.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.057 EAL: Requested device 0000:3d:01.7 cannot be used 00:21:52.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.057 EAL: Requested device 0000:3d:02.0 cannot be used 00:21:52.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.057 EAL: Requested device 0000:3d:02.1 cannot be used 00:21:52.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.057 EAL: Requested device 0000:3d:02.2 cannot be used 00:21:52.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.057 EAL: Requested device 0000:3d:02.3 cannot be used 00:21:52.057 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.057 EAL: Requested device 0000:3d:02.4 cannot be used 00:21:52.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.057 EAL: Requested device 0000:3d:02.5 cannot be used 00:21:52.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.057 EAL: Requested device 0000:3d:02.6 cannot be used 00:21:52.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.057 EAL: Requested device 0000:3d:02.7 cannot be used 00:21:52.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.057 EAL: Requested device 0000:3f:01.0 cannot be used 00:21:52.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.057 EAL: Requested device 0000:3f:01.1 cannot be used 00:21:52.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.057 EAL: Requested device 0000:3f:01.2 cannot be used 00:21:52.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.057 EAL: Requested device 0000:3f:01.3 cannot be used 00:21:52.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.057 EAL: Requested device 0000:3f:01.4 cannot be used 00:21:52.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.057 EAL: Requested device 0000:3f:01.5 cannot be used 00:21:52.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.057 EAL: Requested device 0000:3f:01.6 cannot be used 00:21:52.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.057 EAL: Requested device 0000:3f:01.7 cannot be used 00:21:52.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.057 EAL: Requested device 0000:3f:02.0 cannot be used 00:21:52.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.057 EAL: Requested device 0000:3f:02.1 cannot be used 00:21:52.057 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.057 EAL: Requested device 0000:3f:02.2 cannot be used 00:21:52.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.057 EAL: Requested device 0000:3f:02.3 cannot be used 00:21:52.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.057 EAL: Requested device 0000:3f:02.4 cannot be used 00:21:52.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.057 EAL: Requested device 0000:3f:02.5 cannot be used 00:21:52.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.057 EAL: Requested device 0000:3f:02.6 cannot be used 00:21:52.057 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:21:52.057 EAL: Requested device 0000:3f:02.7 cannot be used 00:21:52.057 [2024-07-25 11:03:59.127686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.317 [2024-07-25 11:03:59.394080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.886 [2024-07-25 11:03:59.722580] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:52.886 [2024-07-25 11:03:59.722617] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:52.886 11:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:52.886 11:03:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:21:52.886 11:03:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:53.146 [2024-07-25 11:04:00.024496] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:53.146 [2024-07-25 11:04:00.024554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 
doesn't exist now 00:21:53.146 [2024-07-25 11:04:00.024569] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:53.146 [2024-07-25 11:04:00.024585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:53.146 [2024-07-25 11:04:00.024596] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:53.146 [2024-07-25 11:04:00.024612] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:53.146 [2024-07-25 11:04:00.024623] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:53.146 [2024-07-25 11:04:00.024639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:53.146 11:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:53.146 11:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:53.146 11:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:53.146 11:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:53.146 11:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:53.146 11:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:53.146 11:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:53.146 11:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:53.146 11:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:53.146 11:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:53.146 11:04:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.146 11:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:53.405 11:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:53.405 "name": "Existed_Raid", 00:21:53.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.405 "strip_size_kb": 64, 00:21:53.405 "state": "configuring", 00:21:53.405 "raid_level": "raid0", 00:21:53.405 "superblock": false, 00:21:53.405 "num_base_bdevs": 4, 00:21:53.405 "num_base_bdevs_discovered": 0, 00:21:53.405 "num_base_bdevs_operational": 4, 00:21:53.405 "base_bdevs_list": [ 00:21:53.405 { 00:21:53.405 "name": "BaseBdev1", 00:21:53.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.405 "is_configured": false, 00:21:53.405 "data_offset": 0, 00:21:53.405 "data_size": 0 00:21:53.405 }, 00:21:53.405 { 00:21:53.406 "name": "BaseBdev2", 00:21:53.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.406 "is_configured": false, 00:21:53.406 "data_offset": 0, 00:21:53.406 "data_size": 0 00:21:53.406 }, 00:21:53.406 { 00:21:53.406 "name": "BaseBdev3", 00:21:53.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.406 "is_configured": false, 00:21:53.406 "data_offset": 0, 00:21:53.406 "data_size": 0 00:21:53.406 }, 00:21:53.406 { 00:21:53.406 "name": "BaseBdev4", 00:21:53.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:53.406 "is_configured": false, 00:21:53.406 "data_offset": 0, 00:21:53.406 "data_size": 0 00:21:53.406 } 00:21:53.406 ] 00:21:53.406 }' 00:21:53.406 11:04:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:53.406 11:04:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:53.972 11:04:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:53.972 [2024-07-25 11:04:01.047088] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:53.972 [2024-07-25 11:04:01.047131] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:53.972 11:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:54.229 [2024-07-25 11:04:01.271747] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:54.229 [2024-07-25 11:04:01.271794] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:54.229 [2024-07-25 11:04:01.271807] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:54.229 [2024-07-25 11:04:01.271830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:54.229 [2024-07-25 11:04:01.271842] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:54.229 [2024-07-25 11:04:01.271857] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:54.229 [2024-07-25 11:04:01.271869] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:54.229 [2024-07-25 11:04:01.271884] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:54.229 11:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:54.487 [2024-07-25 11:04:01.560018] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:54.487 BaseBdev1 00:21:54.487 11:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:21:54.487 11:04:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:21:54.487 11:04:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:54.487 11:04:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:54.487 11:04:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:54.487 11:04:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:54.487 11:04:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:54.745 11:04:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:55.003 [ 00:21:55.003 { 00:21:55.003 "name": "BaseBdev1", 00:21:55.003 "aliases": [ 00:21:55.003 "59f2e942-9400-400f-bbf8-b9cf5df8d38c" 00:21:55.003 ], 00:21:55.003 "product_name": "Malloc disk", 00:21:55.003 "block_size": 512, 00:21:55.003 "num_blocks": 65536, 00:21:55.003 "uuid": "59f2e942-9400-400f-bbf8-b9cf5df8d38c", 00:21:55.003 "assigned_rate_limits": { 00:21:55.003 "rw_ios_per_sec": 0, 00:21:55.003 "rw_mbytes_per_sec": 0, 00:21:55.003 "r_mbytes_per_sec": 0, 00:21:55.003 "w_mbytes_per_sec": 0 00:21:55.003 }, 00:21:55.003 "claimed": true, 00:21:55.003 "claim_type": "exclusive_write", 00:21:55.003 "zoned": false, 00:21:55.003 "supported_io_types": { 00:21:55.003 "read": true, 00:21:55.003 "write": true, 00:21:55.003 "unmap": true, 00:21:55.003 "flush": true, 00:21:55.003 
"reset": true, 00:21:55.003 "nvme_admin": false, 00:21:55.003 "nvme_io": false, 00:21:55.003 "nvme_io_md": false, 00:21:55.003 "write_zeroes": true, 00:21:55.003 "zcopy": true, 00:21:55.003 "get_zone_info": false, 00:21:55.003 "zone_management": false, 00:21:55.003 "zone_append": false, 00:21:55.003 "compare": false, 00:21:55.003 "compare_and_write": false, 00:21:55.003 "abort": true, 00:21:55.003 "seek_hole": false, 00:21:55.003 "seek_data": false, 00:21:55.003 "copy": true, 00:21:55.003 "nvme_iov_md": false 00:21:55.003 }, 00:21:55.003 "memory_domains": [ 00:21:55.003 { 00:21:55.003 "dma_device_id": "system", 00:21:55.003 "dma_device_type": 1 00:21:55.003 }, 00:21:55.003 { 00:21:55.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:55.003 "dma_device_type": 2 00:21:55.003 } 00:21:55.003 ], 00:21:55.003 "driver_specific": {} 00:21:55.003 } 00:21:55.003 ] 00:21:55.003 11:04:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:55.003 11:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:55.003 11:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:55.003 11:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:55.003 11:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:55.003 11:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:55.003 11:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:55.003 11:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:55.003 11:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:55.003 11:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- 
# local num_base_bdevs_discovered 00:21:55.003 11:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:55.003 11:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:55.003 11:04:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:55.261 11:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:55.261 "name": "Existed_Raid", 00:21:55.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:55.261 "strip_size_kb": 64, 00:21:55.261 "state": "configuring", 00:21:55.261 "raid_level": "raid0", 00:21:55.261 "superblock": false, 00:21:55.261 "num_base_bdevs": 4, 00:21:55.261 "num_base_bdevs_discovered": 1, 00:21:55.261 "num_base_bdevs_operational": 4, 00:21:55.261 "base_bdevs_list": [ 00:21:55.261 { 00:21:55.261 "name": "BaseBdev1", 00:21:55.261 "uuid": "59f2e942-9400-400f-bbf8-b9cf5df8d38c", 00:21:55.261 "is_configured": true, 00:21:55.261 "data_offset": 0, 00:21:55.261 "data_size": 65536 00:21:55.261 }, 00:21:55.261 { 00:21:55.261 "name": "BaseBdev2", 00:21:55.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:55.261 "is_configured": false, 00:21:55.261 "data_offset": 0, 00:21:55.261 "data_size": 0 00:21:55.261 }, 00:21:55.261 { 00:21:55.261 "name": "BaseBdev3", 00:21:55.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:55.261 "is_configured": false, 00:21:55.261 "data_offset": 0, 00:21:55.261 "data_size": 0 00:21:55.261 }, 00:21:55.261 { 00:21:55.261 "name": "BaseBdev4", 00:21:55.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:55.261 "is_configured": false, 00:21:55.261 "data_offset": 0, 00:21:55.261 "data_size": 0 00:21:55.261 } 00:21:55.261 ] 00:21:55.261 }' 00:21:55.261 11:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:21:55.261 11:04:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:55.828 11:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:56.086 [2024-07-25 11:04:02.975898] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:56.086 [2024-07-25 11:04:02.975951] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:56.086 11:04:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:56.086 [2024-07-25 11:04:03.204601] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:56.345 [2024-07-25 11:04:03.206934] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:56.345 [2024-07-25 11:04:03.206978] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:56.345 [2024-07-25 11:04:03.206992] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:56.345 [2024-07-25 11:04:03.207008] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:56.345 [2024-07-25 11:04:03.207020] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:56.345 [2024-07-25 11:04:03.207038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:56.345 11:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:21:56.345 11:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:56.345 11:04:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:56.345 11:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:56.345 11:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:56.345 11:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:56.345 11:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:56.345 11:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:56.345 11:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:56.345 11:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:56.345 11:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:56.345 11:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:56.345 11:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:56.345 11:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:56.345 11:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:56.345 "name": "Existed_Raid", 00:21:56.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.345 "strip_size_kb": 64, 00:21:56.345 "state": "configuring", 00:21:56.345 "raid_level": "raid0", 00:21:56.345 "superblock": false, 00:21:56.345 "num_base_bdevs": 4, 00:21:56.345 "num_base_bdevs_discovered": 1, 00:21:56.345 "num_base_bdevs_operational": 4, 00:21:56.345 "base_bdevs_list": [ 00:21:56.345 { 
00:21:56.345 "name": "BaseBdev1", 00:21:56.345 "uuid": "59f2e942-9400-400f-bbf8-b9cf5df8d38c", 00:21:56.345 "is_configured": true, 00:21:56.345 "data_offset": 0, 00:21:56.345 "data_size": 65536 00:21:56.345 }, 00:21:56.345 { 00:21:56.345 "name": "BaseBdev2", 00:21:56.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.345 "is_configured": false, 00:21:56.345 "data_offset": 0, 00:21:56.345 "data_size": 0 00:21:56.345 }, 00:21:56.345 { 00:21:56.345 "name": "BaseBdev3", 00:21:56.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.345 "is_configured": false, 00:21:56.345 "data_offset": 0, 00:21:56.345 "data_size": 0 00:21:56.345 }, 00:21:56.345 { 00:21:56.345 "name": "BaseBdev4", 00:21:56.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.345 "is_configured": false, 00:21:56.345 "data_offset": 0, 00:21:56.345 "data_size": 0 00:21:56.345 } 00:21:56.345 ] 00:21:56.345 }' 00:21:56.345 11:04:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:56.345 11:04:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.912 11:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:57.170 [2024-07-25 11:04:04.224461] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:57.170 BaseBdev2 00:21:57.170 11:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:21:57.170 11:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:21:57.170 11:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:57.170 11:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:57.170 11:04:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:57.170 11:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:57.170 11:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:57.429 11:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:57.688 [ 00:21:57.688 { 00:21:57.688 "name": "BaseBdev2", 00:21:57.688 "aliases": [ 00:21:57.688 "251f607a-67e2-482f-abbd-ddef205a636a" 00:21:57.688 ], 00:21:57.688 "product_name": "Malloc disk", 00:21:57.688 "block_size": 512, 00:21:57.688 "num_blocks": 65536, 00:21:57.688 "uuid": "251f607a-67e2-482f-abbd-ddef205a636a", 00:21:57.688 "assigned_rate_limits": { 00:21:57.688 "rw_ios_per_sec": 0, 00:21:57.688 "rw_mbytes_per_sec": 0, 00:21:57.688 "r_mbytes_per_sec": 0, 00:21:57.688 "w_mbytes_per_sec": 0 00:21:57.688 }, 00:21:57.688 "claimed": true, 00:21:57.688 "claim_type": "exclusive_write", 00:21:57.688 "zoned": false, 00:21:57.688 "supported_io_types": { 00:21:57.688 "read": true, 00:21:57.688 "write": true, 00:21:57.688 "unmap": true, 00:21:57.688 "flush": true, 00:21:57.688 "reset": true, 00:21:57.688 "nvme_admin": false, 00:21:57.688 "nvme_io": false, 00:21:57.688 "nvme_io_md": false, 00:21:57.688 "write_zeroes": true, 00:21:57.688 "zcopy": true, 00:21:57.688 "get_zone_info": false, 00:21:57.688 "zone_management": false, 00:21:57.688 "zone_append": false, 00:21:57.688 "compare": false, 00:21:57.688 "compare_and_write": false, 00:21:57.688 "abort": true, 00:21:57.688 "seek_hole": false, 00:21:57.688 "seek_data": false, 00:21:57.688 "copy": true, 00:21:57.688 "nvme_iov_md": false 00:21:57.688 }, 00:21:57.688 "memory_domains": [ 00:21:57.688 { 00:21:57.688 "dma_device_id": "system", 
00:21:57.688 "dma_device_type": 1 00:21:57.688 }, 00:21:57.688 { 00:21:57.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:57.688 "dma_device_type": 2 00:21:57.688 } 00:21:57.688 ], 00:21:57.688 "driver_specific": {} 00:21:57.688 } 00:21:57.688 ] 00:21:57.688 11:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:57.688 11:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:57.688 11:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:57.688 11:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:57.688 11:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:57.688 11:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:57.688 11:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:57.688 11:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:57.689 11:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:57.689 11:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:57.689 11:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:57.689 11:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:57.689 11:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:57.689 11:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.689 11:04:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:57.948 11:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:57.948 "name": "Existed_Raid", 00:21:57.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.948 "strip_size_kb": 64, 00:21:57.948 "state": "configuring", 00:21:57.948 "raid_level": "raid0", 00:21:57.948 "superblock": false, 00:21:57.948 "num_base_bdevs": 4, 00:21:57.948 "num_base_bdevs_discovered": 2, 00:21:57.948 "num_base_bdevs_operational": 4, 00:21:57.948 "base_bdevs_list": [ 00:21:57.948 { 00:21:57.948 "name": "BaseBdev1", 00:21:57.948 "uuid": "59f2e942-9400-400f-bbf8-b9cf5df8d38c", 00:21:57.948 "is_configured": true, 00:21:57.948 "data_offset": 0, 00:21:57.948 "data_size": 65536 00:21:57.948 }, 00:21:57.948 { 00:21:57.948 "name": "BaseBdev2", 00:21:57.948 "uuid": "251f607a-67e2-482f-abbd-ddef205a636a", 00:21:57.948 "is_configured": true, 00:21:57.948 "data_offset": 0, 00:21:57.948 "data_size": 65536 00:21:57.948 }, 00:21:57.948 { 00:21:57.948 "name": "BaseBdev3", 00:21:57.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.948 "is_configured": false, 00:21:57.948 "data_offset": 0, 00:21:57.948 "data_size": 0 00:21:57.948 }, 00:21:57.948 { 00:21:57.948 "name": "BaseBdev4", 00:21:57.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.948 "is_configured": false, 00:21:57.948 "data_offset": 0, 00:21:57.948 "data_size": 0 00:21:57.948 } 00:21:57.948 ] 00:21:57.948 }' 00:21:57.948 11:04:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:57.948 11:04:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:58.517 11:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:58.776 [2024-07-25 11:04:05.688511] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:58.776 BaseBdev3 00:21:58.776 11:04:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:21:58.776 11:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:21:58.776 11:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:21:58.776 11:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:21:58.776 11:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:21:58.776 11:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:21:58.776 11:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:59.034 11:04:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:59.035 [ 00:21:59.035 { 00:21:59.035 "name": "BaseBdev3", 00:21:59.035 "aliases": [ 00:21:59.035 "321cc34d-3dfc-4550-91e3-871328f0e79c" 00:21:59.035 ], 00:21:59.035 "product_name": "Malloc disk", 00:21:59.035 "block_size": 512, 00:21:59.035 "num_blocks": 65536, 00:21:59.035 "uuid": "321cc34d-3dfc-4550-91e3-871328f0e79c", 00:21:59.035 "assigned_rate_limits": { 00:21:59.035 "rw_ios_per_sec": 0, 00:21:59.035 "rw_mbytes_per_sec": 0, 00:21:59.035 "r_mbytes_per_sec": 0, 00:21:59.035 "w_mbytes_per_sec": 0 00:21:59.035 }, 00:21:59.035 "claimed": true, 00:21:59.035 "claim_type": "exclusive_write", 00:21:59.035 "zoned": false, 00:21:59.035 "supported_io_types": { 00:21:59.035 "read": true, 00:21:59.035 "write": true, 00:21:59.035 "unmap": true, 00:21:59.035 "flush": true, 00:21:59.035 
"reset": true, 00:21:59.035 "nvme_admin": false, 00:21:59.035 "nvme_io": false, 00:21:59.035 "nvme_io_md": false, 00:21:59.035 "write_zeroes": true, 00:21:59.035 "zcopy": true, 00:21:59.035 "get_zone_info": false, 00:21:59.035 "zone_management": false, 00:21:59.035 "zone_append": false, 00:21:59.035 "compare": false, 00:21:59.035 "compare_and_write": false, 00:21:59.035 "abort": true, 00:21:59.035 "seek_hole": false, 00:21:59.035 "seek_data": false, 00:21:59.035 "copy": true, 00:21:59.035 "nvme_iov_md": false 00:21:59.035 }, 00:21:59.035 "memory_domains": [ 00:21:59.035 { 00:21:59.035 "dma_device_id": "system", 00:21:59.035 "dma_device_type": 1 00:21:59.035 }, 00:21:59.035 { 00:21:59.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:59.035 "dma_device_type": 2 00:21:59.035 } 00:21:59.035 ], 00:21:59.035 "driver_specific": {} 00:21:59.035 } 00:21:59.035 ] 00:21:59.035 11:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:21:59.035 11:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:59.035 11:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:59.035 11:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:59.035 11:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:59.035 11:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:59.035 11:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:59.035 11:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:59.035 11:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:59.035 11:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:21:59.035 11:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:59.035 11:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:59.035 11:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:59.035 11:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.035 11:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:59.294 11:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:59.294 "name": "Existed_Raid", 00:21:59.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.294 "strip_size_kb": 64, 00:21:59.294 "state": "configuring", 00:21:59.294 "raid_level": "raid0", 00:21:59.294 "superblock": false, 00:21:59.294 "num_base_bdevs": 4, 00:21:59.294 "num_base_bdevs_discovered": 3, 00:21:59.294 "num_base_bdevs_operational": 4, 00:21:59.294 "base_bdevs_list": [ 00:21:59.294 { 00:21:59.294 "name": "BaseBdev1", 00:21:59.294 "uuid": "59f2e942-9400-400f-bbf8-b9cf5df8d38c", 00:21:59.294 "is_configured": true, 00:21:59.294 "data_offset": 0, 00:21:59.294 "data_size": 65536 00:21:59.294 }, 00:21:59.294 { 00:21:59.294 "name": "BaseBdev2", 00:21:59.294 "uuid": "251f607a-67e2-482f-abbd-ddef205a636a", 00:21:59.294 "is_configured": true, 00:21:59.294 "data_offset": 0, 00:21:59.294 "data_size": 65536 00:21:59.294 }, 00:21:59.294 { 00:21:59.294 "name": "BaseBdev3", 00:21:59.294 "uuid": "321cc34d-3dfc-4550-91e3-871328f0e79c", 00:21:59.294 "is_configured": true, 00:21:59.294 "data_offset": 0, 00:21:59.294 "data_size": 65536 00:21:59.294 }, 00:21:59.294 { 00:21:59.294 "name": "BaseBdev4", 00:21:59.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.294 "is_configured": 
false, 00:21:59.294 "data_offset": 0, 00:21:59.294 "data_size": 0 00:21:59.294 } 00:21:59.294 ] 00:21:59.294 }' 00:21:59.294 11:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:59.294 11:04:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:59.862 11:04:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:00.121 [2024-07-25 11:04:07.202176] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:00.121 [2024-07-25 11:04:07.202222] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:00.121 [2024-07-25 11:04:07.202236] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:22:00.121 [2024-07-25 11:04:07.202567] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:22:00.121 [2024-07-25 11:04:07.202817] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:00.121 [2024-07-25 11:04:07.202835] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:00.121 [2024-07-25 11:04:07.203123] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:00.121 BaseBdev4 00:22:00.121 11:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:22:00.121 11:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:22:00.121 11:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:00.121 11:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:22:00.121 11:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- 
# [[ -z '' ]] 00:22:00.121 11:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:00.121 11:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:00.380 11:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:00.640 [ 00:22:00.640 { 00:22:00.640 "name": "BaseBdev4", 00:22:00.640 "aliases": [ 00:22:00.640 "0459ab0e-1d93-493d-8032-0cf9b4418fe0" 00:22:00.640 ], 00:22:00.640 "product_name": "Malloc disk", 00:22:00.640 "block_size": 512, 00:22:00.640 "num_blocks": 65536, 00:22:00.640 "uuid": "0459ab0e-1d93-493d-8032-0cf9b4418fe0", 00:22:00.640 "assigned_rate_limits": { 00:22:00.640 "rw_ios_per_sec": 0, 00:22:00.640 "rw_mbytes_per_sec": 0, 00:22:00.640 "r_mbytes_per_sec": 0, 00:22:00.640 "w_mbytes_per_sec": 0 00:22:00.640 }, 00:22:00.640 "claimed": true, 00:22:00.640 "claim_type": "exclusive_write", 00:22:00.640 "zoned": false, 00:22:00.640 "supported_io_types": { 00:22:00.640 "read": true, 00:22:00.640 "write": true, 00:22:00.640 "unmap": true, 00:22:00.640 "flush": true, 00:22:00.640 "reset": true, 00:22:00.640 "nvme_admin": false, 00:22:00.640 "nvme_io": false, 00:22:00.640 "nvme_io_md": false, 00:22:00.640 "write_zeroes": true, 00:22:00.640 "zcopy": true, 00:22:00.640 "get_zone_info": false, 00:22:00.640 "zone_management": false, 00:22:00.640 "zone_append": false, 00:22:00.640 "compare": false, 00:22:00.640 "compare_and_write": false, 00:22:00.640 "abort": true, 00:22:00.640 "seek_hole": false, 00:22:00.640 "seek_data": false, 00:22:00.640 "copy": true, 00:22:00.640 "nvme_iov_md": false 00:22:00.640 }, 00:22:00.640 "memory_domains": [ 00:22:00.640 { 00:22:00.640 "dma_device_id": "system", 00:22:00.640 "dma_device_type": 1 
00:22:00.640 }, 00:22:00.640 { 00:22:00.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:00.640 "dma_device_type": 2 00:22:00.640 } 00:22:00.640 ], 00:22:00.640 "driver_specific": {} 00:22:00.640 } 00:22:00.640 ] 00:22:00.640 11:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:22:00.640 11:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:00.640 11:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:00.640 11:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:22:00.640 11:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:00.640 11:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:00.640 11:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:00.640 11:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:00.640 11:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:00.640 11:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:00.640 11:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:00.641 11:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:00.641 11:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:00.641 11:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.641 11:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:22:00.900 11:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:00.900 "name": "Existed_Raid", 00:22:00.900 "uuid": "4f28542c-a4dc-4f9a-98e3-74a6a0bc06ea", 00:22:00.900 "strip_size_kb": 64, 00:22:00.900 "state": "online", 00:22:00.900 "raid_level": "raid0", 00:22:00.900 "superblock": false, 00:22:00.900 "num_base_bdevs": 4, 00:22:00.900 "num_base_bdevs_discovered": 4, 00:22:00.900 "num_base_bdevs_operational": 4, 00:22:00.900 "base_bdevs_list": [ 00:22:00.900 { 00:22:00.900 "name": "BaseBdev1", 00:22:00.900 "uuid": "59f2e942-9400-400f-bbf8-b9cf5df8d38c", 00:22:00.900 "is_configured": true, 00:22:00.900 "data_offset": 0, 00:22:00.900 "data_size": 65536 00:22:00.900 }, 00:22:00.900 { 00:22:00.900 "name": "BaseBdev2", 00:22:00.900 "uuid": "251f607a-67e2-482f-abbd-ddef205a636a", 00:22:00.900 "is_configured": true, 00:22:00.900 "data_offset": 0, 00:22:00.900 "data_size": 65536 00:22:00.900 }, 00:22:00.900 { 00:22:00.900 "name": "BaseBdev3", 00:22:00.900 "uuid": "321cc34d-3dfc-4550-91e3-871328f0e79c", 00:22:00.900 "is_configured": true, 00:22:00.900 "data_offset": 0, 00:22:00.900 "data_size": 65536 00:22:00.900 }, 00:22:00.900 { 00:22:00.900 "name": "BaseBdev4", 00:22:00.900 "uuid": "0459ab0e-1d93-493d-8032-0cf9b4418fe0", 00:22:00.900 "is_configured": true, 00:22:00.900 "data_offset": 0, 00:22:00.900 "data_size": 65536 00:22:00.900 } 00:22:00.900 ] 00:22:00.900 }' 00:22:00.900 11:04:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:00.900 11:04:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:01.469 11:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:22:01.469 11:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:01.469 11:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local 
raid_bdev_info 00:22:01.469 11:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:01.469 11:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:01.469 11:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:01.469 11:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:01.469 11:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:01.728 [2024-07-25 11:04:08.698677] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:01.728 11:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:01.728 "name": "Existed_Raid", 00:22:01.728 "aliases": [ 00:22:01.728 "4f28542c-a4dc-4f9a-98e3-74a6a0bc06ea" 00:22:01.728 ], 00:22:01.728 "product_name": "Raid Volume", 00:22:01.728 "block_size": 512, 00:22:01.728 "num_blocks": 262144, 00:22:01.728 "uuid": "4f28542c-a4dc-4f9a-98e3-74a6a0bc06ea", 00:22:01.728 "assigned_rate_limits": { 00:22:01.728 "rw_ios_per_sec": 0, 00:22:01.728 "rw_mbytes_per_sec": 0, 00:22:01.728 "r_mbytes_per_sec": 0, 00:22:01.728 "w_mbytes_per_sec": 0 00:22:01.728 }, 00:22:01.728 "claimed": false, 00:22:01.728 "zoned": false, 00:22:01.728 "supported_io_types": { 00:22:01.728 "read": true, 00:22:01.728 "write": true, 00:22:01.728 "unmap": true, 00:22:01.728 "flush": true, 00:22:01.728 "reset": true, 00:22:01.728 "nvme_admin": false, 00:22:01.728 "nvme_io": false, 00:22:01.728 "nvme_io_md": false, 00:22:01.728 "write_zeroes": true, 00:22:01.728 "zcopy": false, 00:22:01.728 "get_zone_info": false, 00:22:01.728 "zone_management": false, 00:22:01.728 "zone_append": false, 00:22:01.728 "compare": false, 00:22:01.728 "compare_and_write": false, 00:22:01.728 "abort": false, 00:22:01.728 "seek_hole": 
false, 00:22:01.728 "seek_data": false, 00:22:01.728 "copy": false, 00:22:01.728 "nvme_iov_md": false 00:22:01.728 }, 00:22:01.728 "memory_domains": [ 00:22:01.728 { 00:22:01.728 "dma_device_id": "system", 00:22:01.728 "dma_device_type": 1 00:22:01.728 }, 00:22:01.728 { 00:22:01.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:01.728 "dma_device_type": 2 00:22:01.728 }, 00:22:01.728 { 00:22:01.728 "dma_device_id": "system", 00:22:01.728 "dma_device_type": 1 00:22:01.728 }, 00:22:01.728 { 00:22:01.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:01.728 "dma_device_type": 2 00:22:01.728 }, 00:22:01.728 { 00:22:01.728 "dma_device_id": "system", 00:22:01.729 "dma_device_type": 1 00:22:01.729 }, 00:22:01.729 { 00:22:01.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:01.729 "dma_device_type": 2 00:22:01.729 }, 00:22:01.729 { 00:22:01.729 "dma_device_id": "system", 00:22:01.729 "dma_device_type": 1 00:22:01.729 }, 00:22:01.729 { 00:22:01.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:01.729 "dma_device_type": 2 00:22:01.729 } 00:22:01.729 ], 00:22:01.729 "driver_specific": { 00:22:01.729 "raid": { 00:22:01.729 "uuid": "4f28542c-a4dc-4f9a-98e3-74a6a0bc06ea", 00:22:01.729 "strip_size_kb": 64, 00:22:01.729 "state": "online", 00:22:01.729 "raid_level": "raid0", 00:22:01.729 "superblock": false, 00:22:01.729 "num_base_bdevs": 4, 00:22:01.729 "num_base_bdevs_discovered": 4, 00:22:01.729 "num_base_bdevs_operational": 4, 00:22:01.729 "base_bdevs_list": [ 00:22:01.729 { 00:22:01.729 "name": "BaseBdev1", 00:22:01.729 "uuid": "59f2e942-9400-400f-bbf8-b9cf5df8d38c", 00:22:01.729 "is_configured": true, 00:22:01.729 "data_offset": 0, 00:22:01.729 "data_size": 65536 00:22:01.729 }, 00:22:01.729 { 00:22:01.729 "name": "BaseBdev2", 00:22:01.729 "uuid": "251f607a-67e2-482f-abbd-ddef205a636a", 00:22:01.729 "is_configured": true, 00:22:01.729 "data_offset": 0, 00:22:01.729 "data_size": 65536 00:22:01.729 }, 00:22:01.729 { 00:22:01.729 "name": "BaseBdev3", 00:22:01.729 "uuid": 
"321cc34d-3dfc-4550-91e3-871328f0e79c", 00:22:01.729 "is_configured": true, 00:22:01.729 "data_offset": 0, 00:22:01.729 "data_size": 65536 00:22:01.729 }, 00:22:01.729 { 00:22:01.729 "name": "BaseBdev4", 00:22:01.729 "uuid": "0459ab0e-1d93-493d-8032-0cf9b4418fe0", 00:22:01.729 "is_configured": true, 00:22:01.729 "data_offset": 0, 00:22:01.729 "data_size": 65536 00:22:01.729 } 00:22:01.729 ] 00:22:01.729 } 00:22:01.729 } 00:22:01.729 }' 00:22:01.729 11:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:01.729 11:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:22:01.729 BaseBdev2 00:22:01.729 BaseBdev3 00:22:01.729 BaseBdev4' 00:22:01.729 11:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:01.729 11:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:22:01.729 11:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:01.990 11:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:01.990 "name": "BaseBdev1", 00:22:01.990 "aliases": [ 00:22:01.990 "59f2e942-9400-400f-bbf8-b9cf5df8d38c" 00:22:01.990 ], 00:22:01.990 "product_name": "Malloc disk", 00:22:01.990 "block_size": 512, 00:22:01.990 "num_blocks": 65536, 00:22:01.990 "uuid": "59f2e942-9400-400f-bbf8-b9cf5df8d38c", 00:22:01.990 "assigned_rate_limits": { 00:22:01.990 "rw_ios_per_sec": 0, 00:22:01.990 "rw_mbytes_per_sec": 0, 00:22:01.990 "r_mbytes_per_sec": 0, 00:22:01.990 "w_mbytes_per_sec": 0 00:22:01.990 }, 00:22:01.990 "claimed": true, 00:22:01.990 "claim_type": "exclusive_write", 00:22:01.990 "zoned": false, 00:22:01.990 "supported_io_types": { 00:22:01.990 "read": true, 00:22:01.990 
"write": true, 00:22:01.990 "unmap": true, 00:22:01.990 "flush": true, 00:22:01.990 "reset": true, 00:22:01.990 "nvme_admin": false, 00:22:01.990 "nvme_io": false, 00:22:01.990 "nvme_io_md": false, 00:22:01.990 "write_zeroes": true, 00:22:01.990 "zcopy": true, 00:22:01.990 "get_zone_info": false, 00:22:01.990 "zone_management": false, 00:22:01.990 "zone_append": false, 00:22:01.990 "compare": false, 00:22:01.990 "compare_and_write": false, 00:22:01.990 "abort": true, 00:22:01.990 "seek_hole": false, 00:22:01.990 "seek_data": false, 00:22:01.990 "copy": true, 00:22:01.990 "nvme_iov_md": false 00:22:01.990 }, 00:22:01.990 "memory_domains": [ 00:22:01.990 { 00:22:01.990 "dma_device_id": "system", 00:22:01.990 "dma_device_type": 1 00:22:01.991 }, 00:22:01.991 { 00:22:01.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:01.991 "dma_device_type": 2 00:22:01.991 } 00:22:01.991 ], 00:22:01.991 "driver_specific": {} 00:22:01.991 }' 00:22:01.991 11:04:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:01.991 11:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:01.991 11:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:01.991 11:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:02.250 11:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:02.250 11:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:02.250 11:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:02.250 11:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:02.250 11:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:02.250 11:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:02.250 11:04:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:02.250 11:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:02.250 11:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:02.250 11:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:02.250 11:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:02.509 11:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:02.509 "name": "BaseBdev2", 00:22:02.509 "aliases": [ 00:22:02.509 "251f607a-67e2-482f-abbd-ddef205a636a" 00:22:02.509 ], 00:22:02.509 "product_name": "Malloc disk", 00:22:02.509 "block_size": 512, 00:22:02.509 "num_blocks": 65536, 00:22:02.509 "uuid": "251f607a-67e2-482f-abbd-ddef205a636a", 00:22:02.509 "assigned_rate_limits": { 00:22:02.509 "rw_ios_per_sec": 0, 00:22:02.509 "rw_mbytes_per_sec": 0, 00:22:02.509 "r_mbytes_per_sec": 0, 00:22:02.509 "w_mbytes_per_sec": 0 00:22:02.509 }, 00:22:02.509 "claimed": true, 00:22:02.509 "claim_type": "exclusive_write", 00:22:02.509 "zoned": false, 00:22:02.509 "supported_io_types": { 00:22:02.509 "read": true, 00:22:02.509 "write": true, 00:22:02.509 "unmap": true, 00:22:02.509 "flush": true, 00:22:02.509 "reset": true, 00:22:02.509 "nvme_admin": false, 00:22:02.509 "nvme_io": false, 00:22:02.509 "nvme_io_md": false, 00:22:02.509 "write_zeroes": true, 00:22:02.509 "zcopy": true, 00:22:02.509 "get_zone_info": false, 00:22:02.509 "zone_management": false, 00:22:02.509 "zone_append": false, 00:22:02.509 "compare": false, 00:22:02.509 "compare_and_write": false, 00:22:02.509 "abort": true, 00:22:02.509 "seek_hole": false, 00:22:02.509 "seek_data": false, 00:22:02.509 "copy": true, 00:22:02.509 "nvme_iov_md": false 00:22:02.509 }, 
00:22:02.509 "memory_domains": [ 00:22:02.509 { 00:22:02.509 "dma_device_id": "system", 00:22:02.509 "dma_device_type": 1 00:22:02.509 }, 00:22:02.509 { 00:22:02.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:02.509 "dma_device_type": 2 00:22:02.509 } 00:22:02.509 ], 00:22:02.509 "driver_specific": {} 00:22:02.509 }' 00:22:02.509 11:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:02.509 11:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:02.769 11:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:02.769 11:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:02.769 11:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:02.769 11:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:02.769 11:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:02.769 11:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:02.769 11:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:02.769 11:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:02.769 11:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:03.028 11:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:03.028 11:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:03.028 11:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:03.028 11:04:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:03.028 11:04:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:03.028 "name": "BaseBdev3", 00:22:03.028 "aliases": [ 00:22:03.028 "321cc34d-3dfc-4550-91e3-871328f0e79c" 00:22:03.028 ], 00:22:03.028 "product_name": "Malloc disk", 00:22:03.028 "block_size": 512, 00:22:03.028 "num_blocks": 65536, 00:22:03.028 "uuid": "321cc34d-3dfc-4550-91e3-871328f0e79c", 00:22:03.028 "assigned_rate_limits": { 00:22:03.028 "rw_ios_per_sec": 0, 00:22:03.028 "rw_mbytes_per_sec": 0, 00:22:03.028 "r_mbytes_per_sec": 0, 00:22:03.028 "w_mbytes_per_sec": 0 00:22:03.028 }, 00:22:03.028 "claimed": true, 00:22:03.028 "claim_type": "exclusive_write", 00:22:03.028 "zoned": false, 00:22:03.028 "supported_io_types": { 00:22:03.028 "read": true, 00:22:03.028 "write": true, 00:22:03.028 "unmap": true, 00:22:03.028 "flush": true, 00:22:03.028 "reset": true, 00:22:03.028 "nvme_admin": false, 00:22:03.028 "nvme_io": false, 00:22:03.028 "nvme_io_md": false, 00:22:03.028 "write_zeroes": true, 00:22:03.028 "zcopy": true, 00:22:03.028 "get_zone_info": false, 00:22:03.028 "zone_management": false, 00:22:03.028 "zone_append": false, 00:22:03.028 "compare": false, 00:22:03.028 "compare_and_write": false, 00:22:03.028 "abort": true, 00:22:03.028 "seek_hole": false, 00:22:03.028 "seek_data": false, 00:22:03.028 "copy": true, 00:22:03.028 "nvme_iov_md": false 00:22:03.028 }, 00:22:03.028 "memory_domains": [ 00:22:03.028 { 00:22:03.028 "dma_device_id": "system", 00:22:03.028 "dma_device_type": 1 00:22:03.028 }, 00:22:03.028 { 00:22:03.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:03.028 "dma_device_type": 2 00:22:03.028 } 00:22:03.028 ], 00:22:03.028 "driver_specific": {} 00:22:03.029 }' 00:22:03.029 11:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:03.029 11:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:03.288 11:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 
== 512 ]] 00:22:03.288 11:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:03.288 11:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:03.288 11:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:03.288 11:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:03.288 11:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:03.288 11:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:03.288 11:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:03.288 11:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:03.547 11:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:03.547 11:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:03.547 11:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:03.547 11:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:03.547 11:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:03.547 "name": "BaseBdev4", 00:22:03.547 "aliases": [ 00:22:03.547 "0459ab0e-1d93-493d-8032-0cf9b4418fe0" 00:22:03.547 ], 00:22:03.547 "product_name": "Malloc disk", 00:22:03.547 "block_size": 512, 00:22:03.547 "num_blocks": 65536, 00:22:03.547 "uuid": "0459ab0e-1d93-493d-8032-0cf9b4418fe0", 00:22:03.547 "assigned_rate_limits": { 00:22:03.547 "rw_ios_per_sec": 0, 00:22:03.547 "rw_mbytes_per_sec": 0, 00:22:03.547 "r_mbytes_per_sec": 0, 00:22:03.547 "w_mbytes_per_sec": 0 00:22:03.547 }, 00:22:03.547 "claimed": true, 00:22:03.547 
"claim_type": "exclusive_write", 00:22:03.547 "zoned": false, 00:22:03.547 "supported_io_types": { 00:22:03.547 "read": true, 00:22:03.547 "write": true, 00:22:03.547 "unmap": true, 00:22:03.547 "flush": true, 00:22:03.547 "reset": true, 00:22:03.547 "nvme_admin": false, 00:22:03.547 "nvme_io": false, 00:22:03.547 "nvme_io_md": false, 00:22:03.547 "write_zeroes": true, 00:22:03.547 "zcopy": true, 00:22:03.547 "get_zone_info": false, 00:22:03.547 "zone_management": false, 00:22:03.547 "zone_append": false, 00:22:03.547 "compare": false, 00:22:03.547 "compare_and_write": false, 00:22:03.547 "abort": true, 00:22:03.547 "seek_hole": false, 00:22:03.547 "seek_data": false, 00:22:03.547 "copy": true, 00:22:03.547 "nvme_iov_md": false 00:22:03.547 }, 00:22:03.547 "memory_domains": [ 00:22:03.547 { 00:22:03.547 "dma_device_id": "system", 00:22:03.547 "dma_device_type": 1 00:22:03.547 }, 00:22:03.547 { 00:22:03.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:03.547 "dma_device_type": 2 00:22:03.547 } 00:22:03.547 ], 00:22:03.547 "driver_specific": {} 00:22:03.547 }' 00:22:03.547 11:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:03.806 11:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:03.806 11:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:03.806 11:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:03.806 11:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:03.806 11:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:03.806 11:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:03.806 11:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:03.806 11:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == 
null ]] 00:22:03.806 11:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:03.806 11:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:04.065 11:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:04.065 11:04:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:04.065 [2024-07-25 11:04:11.165066] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:04.065 [2024-07-25 11:04:11.165102] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:04.065 [2024-07-25 11:04:11.165169] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:04.325 11:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:22:04.325 11:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:22:04.325 11:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:04.325 11:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:22:04.325 11:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:22:04.325 11:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:22:04.325 11:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:04.325 11:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:22:04.325 11:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:04.325 11:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
00:22:04.325 11:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:04.325 11:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:04.325 11:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:04.325 11:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:04.325 11:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:04.325 11:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:04.325 11:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.584 11:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:04.584 "name": "Existed_Raid", 00:22:04.584 "uuid": "4f28542c-a4dc-4f9a-98e3-74a6a0bc06ea", 00:22:04.584 "strip_size_kb": 64, 00:22:04.584 "state": "offline", 00:22:04.584 "raid_level": "raid0", 00:22:04.584 "superblock": false, 00:22:04.584 "num_base_bdevs": 4, 00:22:04.584 "num_base_bdevs_discovered": 3, 00:22:04.584 "num_base_bdevs_operational": 3, 00:22:04.584 "base_bdevs_list": [ 00:22:04.584 { 00:22:04.584 "name": null, 00:22:04.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:04.584 "is_configured": false, 00:22:04.584 "data_offset": 0, 00:22:04.584 "data_size": 65536 00:22:04.584 }, 00:22:04.584 { 00:22:04.584 "name": "BaseBdev2", 00:22:04.584 "uuid": "251f607a-67e2-482f-abbd-ddef205a636a", 00:22:04.584 "is_configured": true, 00:22:04.584 "data_offset": 0, 00:22:04.584 "data_size": 65536 00:22:04.584 }, 00:22:04.584 { 00:22:04.584 "name": "BaseBdev3", 00:22:04.584 "uuid": "321cc34d-3dfc-4550-91e3-871328f0e79c", 00:22:04.584 "is_configured": true, 00:22:04.584 
"data_offset": 0, 00:22:04.584 "data_size": 65536 00:22:04.584 }, 00:22:04.584 { 00:22:04.584 "name": "BaseBdev4", 00:22:04.584 "uuid": "0459ab0e-1d93-493d-8032-0cf9b4418fe0", 00:22:04.584 "is_configured": true, 00:22:04.584 "data_offset": 0, 00:22:04.584 "data_size": 65536 00:22:04.584 } 00:22:04.584 ] 00:22:04.584 }' 00:22:04.584 11:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:04.584 11:04:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:05.153 11:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:22:05.153 11:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:05.153 11:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:05.153 11:04:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.153 11:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:05.153 11:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:05.153 11:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:05.412 [2024-07-25 11:04:12.379956] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:05.672 11:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:05.672 11:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:05.672 11:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:22:05.672 11:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:05.672 11:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:05.672 11:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:05.672 11:04:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:05.932 [2024-07-25 11:04:12.973872] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:06.191 11:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:06.191 11:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:06.191 11:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.191 11:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:06.450 11:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:06.450 11:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:06.450 11:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:22:06.709 [2024-07-25 11:04:13.569694] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:06.709 [2024-07-25 11:04:13.569748] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:06.709 11:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:06.709 
11:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:06.709 11:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.709 11:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:22:06.968 11:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:22:06.968 11:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:22:06.968 11:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:22:06.968 11:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:22:06.968 11:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:06.968 11:04:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:07.228 BaseBdev2 00:22:07.228 11:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:22:07.228 11:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:22:07.228 11:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:07.228 11:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:22:07.228 11:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:07.228 11:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:07.228 11:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:07.487 11:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:07.746 [ 00:22:07.746 { 00:22:07.746 "name": "BaseBdev2", 00:22:07.746 "aliases": [ 00:22:07.746 "d1f47c32-6045-412d-9eb5-2d7e68406387" 00:22:07.746 ], 00:22:07.746 "product_name": "Malloc disk", 00:22:07.746 "block_size": 512, 00:22:07.746 "num_blocks": 65536, 00:22:07.746 "uuid": "d1f47c32-6045-412d-9eb5-2d7e68406387", 00:22:07.746 "assigned_rate_limits": { 00:22:07.746 "rw_ios_per_sec": 0, 00:22:07.746 "rw_mbytes_per_sec": 0, 00:22:07.746 "r_mbytes_per_sec": 0, 00:22:07.746 "w_mbytes_per_sec": 0 00:22:07.746 }, 00:22:07.746 "claimed": false, 00:22:07.746 "zoned": false, 00:22:07.746 "supported_io_types": { 00:22:07.746 "read": true, 00:22:07.746 "write": true, 00:22:07.746 "unmap": true, 00:22:07.746 "flush": true, 00:22:07.746 "reset": true, 00:22:07.746 "nvme_admin": false, 00:22:07.746 "nvme_io": false, 00:22:07.746 "nvme_io_md": false, 00:22:07.746 "write_zeroes": true, 00:22:07.746 "zcopy": true, 00:22:07.746 "get_zone_info": false, 00:22:07.746 "zone_management": false, 00:22:07.746 "zone_append": false, 00:22:07.746 "compare": false, 00:22:07.746 "compare_and_write": false, 00:22:07.746 "abort": true, 00:22:07.746 "seek_hole": false, 00:22:07.746 "seek_data": false, 00:22:07.746 "copy": true, 00:22:07.746 "nvme_iov_md": false 00:22:07.746 }, 00:22:07.746 "memory_domains": [ 00:22:07.746 { 00:22:07.746 "dma_device_id": "system", 00:22:07.746 "dma_device_type": 1 00:22:07.746 }, 00:22:07.746 { 00:22:07.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:07.746 "dma_device_type": 2 00:22:07.746 } 00:22:07.746 ], 00:22:07.746 "driver_specific": {} 00:22:07.746 } 00:22:07.746 ] 00:22:07.746 11:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 
00:22:07.746 11:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:07.746 11:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:07.746 11:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:08.006 BaseBdev3 00:22:08.006 11:04:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:22:08.006 11:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:22:08.006 11:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:08.006 11:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:22:08.006 11:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:08.006 11:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:08.006 11:04:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:08.265 11:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:08.265 [ 00:22:08.265 { 00:22:08.265 "name": "BaseBdev3", 00:22:08.265 "aliases": [ 00:22:08.266 "d2d0404c-0f4c-439d-86da-71383488ca2b" 00:22:08.266 ], 00:22:08.266 "product_name": "Malloc disk", 00:22:08.266 "block_size": 512, 00:22:08.266 "num_blocks": 65536, 00:22:08.266 "uuid": "d2d0404c-0f4c-439d-86da-71383488ca2b", 00:22:08.266 "assigned_rate_limits": { 00:22:08.266 "rw_ios_per_sec": 0, 00:22:08.266 "rw_mbytes_per_sec": 0, 00:22:08.266 
"r_mbytes_per_sec": 0, 00:22:08.266 "w_mbytes_per_sec": 0 00:22:08.266 }, 00:22:08.266 "claimed": false, 00:22:08.266 "zoned": false, 00:22:08.266 "supported_io_types": { 00:22:08.266 "read": true, 00:22:08.266 "write": true, 00:22:08.266 "unmap": true, 00:22:08.266 "flush": true, 00:22:08.266 "reset": true, 00:22:08.266 "nvme_admin": false, 00:22:08.266 "nvme_io": false, 00:22:08.266 "nvme_io_md": false, 00:22:08.266 "write_zeroes": true, 00:22:08.266 "zcopy": true, 00:22:08.266 "get_zone_info": false, 00:22:08.266 "zone_management": false, 00:22:08.266 "zone_append": false, 00:22:08.266 "compare": false, 00:22:08.266 "compare_and_write": false, 00:22:08.266 "abort": true, 00:22:08.266 "seek_hole": false, 00:22:08.266 "seek_data": false, 00:22:08.266 "copy": true, 00:22:08.266 "nvme_iov_md": false 00:22:08.266 }, 00:22:08.266 "memory_domains": [ 00:22:08.266 { 00:22:08.266 "dma_device_id": "system", 00:22:08.266 "dma_device_type": 1 00:22:08.266 }, 00:22:08.266 { 00:22:08.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:08.266 "dma_device_type": 2 00:22:08.266 } 00:22:08.266 ], 00:22:08.266 "driver_specific": {} 00:22:08.266 } 00:22:08.266 ] 00:22:08.525 11:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:22:08.525 11:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:08.525 11:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:08.525 11:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:08.525 BaseBdev4 00:22:08.784 11:04:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:22:08.784 11:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:22:08.784 11:04:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:08.784 11:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:22:08.784 11:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:08.784 11:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:08.784 11:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:08.784 11:04:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:09.043 [ 00:22:09.043 { 00:22:09.043 "name": "BaseBdev4", 00:22:09.043 "aliases": [ 00:22:09.043 "323d7cc3-7087-4e3d-ba32-cb12414c3b89" 00:22:09.043 ], 00:22:09.043 "product_name": "Malloc disk", 00:22:09.043 "block_size": 512, 00:22:09.043 "num_blocks": 65536, 00:22:09.043 "uuid": "323d7cc3-7087-4e3d-ba32-cb12414c3b89", 00:22:09.043 "assigned_rate_limits": { 00:22:09.043 "rw_ios_per_sec": 0, 00:22:09.043 "rw_mbytes_per_sec": 0, 00:22:09.043 "r_mbytes_per_sec": 0, 00:22:09.043 "w_mbytes_per_sec": 0 00:22:09.043 }, 00:22:09.043 "claimed": false, 00:22:09.043 "zoned": false, 00:22:09.043 "supported_io_types": { 00:22:09.043 "read": true, 00:22:09.043 "write": true, 00:22:09.043 "unmap": true, 00:22:09.043 "flush": true, 00:22:09.043 "reset": true, 00:22:09.043 "nvme_admin": false, 00:22:09.043 "nvme_io": false, 00:22:09.043 "nvme_io_md": false, 00:22:09.043 "write_zeroes": true, 00:22:09.043 "zcopy": true, 00:22:09.043 "get_zone_info": false, 00:22:09.043 "zone_management": false, 00:22:09.043 "zone_append": false, 00:22:09.043 "compare": false, 00:22:09.043 "compare_and_write": false, 00:22:09.043 "abort": true, 00:22:09.043 
"seek_hole": false, 00:22:09.043 "seek_data": false, 00:22:09.043 "copy": true, 00:22:09.043 "nvme_iov_md": false 00:22:09.043 }, 00:22:09.043 "memory_domains": [ 00:22:09.043 { 00:22:09.043 "dma_device_id": "system", 00:22:09.043 "dma_device_type": 1 00:22:09.043 }, 00:22:09.043 { 00:22:09.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.043 "dma_device_type": 2 00:22:09.043 } 00:22:09.043 ], 00:22:09.043 "driver_specific": {} 00:22:09.043 } 00:22:09.043 ] 00:22:09.043 11:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:22:09.043 11:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:09.043 11:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:09.043 11:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:09.302 [2024-07-25 11:04:16.297947] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:09.302 [2024-07-25 11:04:16.297992] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:09.302 [2024-07-25 11:04:16.298023] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:09.302 [2024-07-25 11:04:16.300322] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:09.302 [2024-07-25 11:04:16.300382] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:09.302 11:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:09.302 11:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:09.302 11:04:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:09.302 11:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:09.302 11:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:09.302 11:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:09.302 11:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:09.302 11:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:09.302 11:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:09.302 11:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:09.302 11:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.302 11:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:09.561 11:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:09.561 "name": "Existed_Raid", 00:22:09.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.561 "strip_size_kb": 64, 00:22:09.561 "state": "configuring", 00:22:09.561 "raid_level": "raid0", 00:22:09.561 "superblock": false, 00:22:09.561 "num_base_bdevs": 4, 00:22:09.561 "num_base_bdevs_discovered": 3, 00:22:09.561 "num_base_bdevs_operational": 4, 00:22:09.561 "base_bdevs_list": [ 00:22:09.561 { 00:22:09.561 "name": "BaseBdev1", 00:22:09.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.561 "is_configured": false, 00:22:09.561 "data_offset": 0, 00:22:09.561 "data_size": 0 00:22:09.561 }, 00:22:09.561 { 00:22:09.561 "name": "BaseBdev2", 00:22:09.561 "uuid": 
"d1f47c32-6045-412d-9eb5-2d7e68406387", 00:22:09.561 "is_configured": true, 00:22:09.561 "data_offset": 0, 00:22:09.561 "data_size": 65536 00:22:09.561 }, 00:22:09.562 { 00:22:09.562 "name": "BaseBdev3", 00:22:09.562 "uuid": "d2d0404c-0f4c-439d-86da-71383488ca2b", 00:22:09.562 "is_configured": true, 00:22:09.562 "data_offset": 0, 00:22:09.562 "data_size": 65536 00:22:09.562 }, 00:22:09.562 { 00:22:09.562 "name": "BaseBdev4", 00:22:09.562 "uuid": "323d7cc3-7087-4e3d-ba32-cb12414c3b89", 00:22:09.562 "is_configured": true, 00:22:09.562 "data_offset": 0, 00:22:09.562 "data_size": 65536 00:22:09.562 } 00:22:09.562 ] 00:22:09.562 }' 00:22:09.562 11:04:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:09.562 11:04:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:10.128 11:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:10.387 [2024-07-25 11:04:17.292575] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:10.387 11:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:10.387 11:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:10.387 11:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:10.387 11:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:10.387 11:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:10.387 11:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:10.387 11:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:22:10.387 11:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:10.387 11:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:10.387 11:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:10.387 11:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.387 11:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:10.387 11:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:10.387 "name": "Existed_Raid", 00:22:10.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.387 "strip_size_kb": 64, 00:22:10.387 "state": "configuring", 00:22:10.387 "raid_level": "raid0", 00:22:10.387 "superblock": false, 00:22:10.387 "num_base_bdevs": 4, 00:22:10.387 "num_base_bdevs_discovered": 2, 00:22:10.387 "num_base_bdevs_operational": 4, 00:22:10.387 "base_bdevs_list": [ 00:22:10.387 { 00:22:10.387 "name": "BaseBdev1", 00:22:10.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.387 "is_configured": false, 00:22:10.387 "data_offset": 0, 00:22:10.387 "data_size": 0 00:22:10.387 }, 00:22:10.387 { 00:22:10.387 "name": null, 00:22:10.387 "uuid": "d1f47c32-6045-412d-9eb5-2d7e68406387", 00:22:10.387 "is_configured": false, 00:22:10.387 "data_offset": 0, 00:22:10.387 "data_size": 65536 00:22:10.387 }, 00:22:10.387 { 00:22:10.387 "name": "BaseBdev3", 00:22:10.387 "uuid": "d2d0404c-0f4c-439d-86da-71383488ca2b", 00:22:10.387 "is_configured": true, 00:22:10.387 "data_offset": 0, 00:22:10.387 "data_size": 65536 00:22:10.387 }, 00:22:10.387 { 00:22:10.387 "name": "BaseBdev4", 00:22:10.387 "uuid": "323d7cc3-7087-4e3d-ba32-cb12414c3b89", 00:22:10.387 "is_configured": true, 00:22:10.387 
"data_offset": 0, 00:22:10.387 "data_size": 65536 00:22:10.387 } 00:22:10.387 ] 00:22:10.387 }' 00:22:10.387 11:04:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:10.387 11:04:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:10.955 11:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.955 11:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:11.214 11:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:22:11.214 11:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:11.473 [2024-07-25 11:04:18.557972] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:11.473 BaseBdev1 00:22:11.473 11:04:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:22:11.473 11:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:22:11.473 11:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:11.473 11:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:22:11.473 11:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:11.473 11:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:11.473 11:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:11.732 
11:04:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:11.991 [ 00:22:11.991 { 00:22:11.991 "name": "BaseBdev1", 00:22:11.991 "aliases": [ 00:22:11.991 "4d8d603b-d5b3-4d37-b404-515ddc9edbe6" 00:22:11.991 ], 00:22:11.991 "product_name": "Malloc disk", 00:22:11.991 "block_size": 512, 00:22:11.991 "num_blocks": 65536, 00:22:11.991 "uuid": "4d8d603b-d5b3-4d37-b404-515ddc9edbe6", 00:22:11.991 "assigned_rate_limits": { 00:22:11.991 "rw_ios_per_sec": 0, 00:22:11.991 "rw_mbytes_per_sec": 0, 00:22:11.991 "r_mbytes_per_sec": 0, 00:22:11.991 "w_mbytes_per_sec": 0 00:22:11.991 }, 00:22:11.991 "claimed": true, 00:22:11.991 "claim_type": "exclusive_write", 00:22:11.991 "zoned": false, 00:22:11.991 "supported_io_types": { 00:22:11.991 "read": true, 00:22:11.991 "write": true, 00:22:11.991 "unmap": true, 00:22:11.991 "flush": true, 00:22:11.991 "reset": true, 00:22:11.991 "nvme_admin": false, 00:22:11.991 "nvme_io": false, 00:22:11.991 "nvme_io_md": false, 00:22:11.991 "write_zeroes": true, 00:22:11.991 "zcopy": true, 00:22:11.991 "get_zone_info": false, 00:22:11.991 "zone_management": false, 00:22:11.991 "zone_append": false, 00:22:11.991 "compare": false, 00:22:11.991 "compare_and_write": false, 00:22:11.991 "abort": true, 00:22:11.991 "seek_hole": false, 00:22:11.991 "seek_data": false, 00:22:11.991 "copy": true, 00:22:11.991 "nvme_iov_md": false 00:22:11.991 }, 00:22:11.991 "memory_domains": [ 00:22:11.991 { 00:22:11.991 "dma_device_id": "system", 00:22:11.991 "dma_device_type": 1 00:22:11.991 }, 00:22:11.991 { 00:22:11.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:11.991 "dma_device_type": 2 00:22:11.991 } 00:22:11.991 ], 00:22:11.991 "driver_specific": {} 00:22:11.991 } 00:22:11.991 ] 00:22:11.991 11:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:22:11.991 11:04:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:11.991 11:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:11.991 11:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:11.991 11:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:11.991 11:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:11.991 11:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:11.991 11:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:11.991 11:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:11.991 11:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:11.991 11:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:11.991 11:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.991 11:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:12.250 11:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:12.250 "name": "Existed_Raid", 00:22:12.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.250 "strip_size_kb": 64, 00:22:12.250 "state": "configuring", 00:22:12.250 "raid_level": "raid0", 00:22:12.250 "superblock": false, 00:22:12.250 "num_base_bdevs": 4, 00:22:12.250 "num_base_bdevs_discovered": 3, 00:22:12.250 "num_base_bdevs_operational": 4, 00:22:12.250 "base_bdevs_list": [ 00:22:12.250 { 
00:22:12.250 "name": "BaseBdev1", 00:22:12.250 "uuid": "4d8d603b-d5b3-4d37-b404-515ddc9edbe6", 00:22:12.250 "is_configured": true, 00:22:12.250 "data_offset": 0, 00:22:12.250 "data_size": 65536 00:22:12.250 }, 00:22:12.250 { 00:22:12.250 "name": null, 00:22:12.250 "uuid": "d1f47c32-6045-412d-9eb5-2d7e68406387", 00:22:12.250 "is_configured": false, 00:22:12.250 "data_offset": 0, 00:22:12.250 "data_size": 65536 00:22:12.250 }, 00:22:12.250 { 00:22:12.250 "name": "BaseBdev3", 00:22:12.250 "uuid": "d2d0404c-0f4c-439d-86da-71383488ca2b", 00:22:12.250 "is_configured": true, 00:22:12.250 "data_offset": 0, 00:22:12.250 "data_size": 65536 00:22:12.250 }, 00:22:12.250 { 00:22:12.250 "name": "BaseBdev4", 00:22:12.250 "uuid": "323d7cc3-7087-4e3d-ba32-cb12414c3b89", 00:22:12.250 "is_configured": true, 00:22:12.250 "data_offset": 0, 00:22:12.250 "data_size": 65536 00:22:12.250 } 00:22:12.250 ] 00:22:12.250 }' 00:22:12.250 11:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:12.250 11:04:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:12.818 11:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.818 11:04:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:13.076 11:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:22:13.076 11:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:22:13.335 [2024-07-25 11:04:20.234645] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:13.335 11:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 4 00:22:13.335 11:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:13.335 11:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:13.335 11:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:13.335 11:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:13.335 11:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:13.335 11:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:13.335 11:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:13.335 11:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:13.335 11:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:13.335 11:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.335 11:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:13.594 11:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:13.594 "name": "Existed_Raid", 00:22:13.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.594 "strip_size_kb": 64, 00:22:13.594 "state": "configuring", 00:22:13.594 "raid_level": "raid0", 00:22:13.594 "superblock": false, 00:22:13.594 "num_base_bdevs": 4, 00:22:13.594 "num_base_bdevs_discovered": 2, 00:22:13.594 "num_base_bdevs_operational": 4, 00:22:13.594 "base_bdevs_list": [ 00:22:13.594 { 00:22:13.594 "name": "BaseBdev1", 00:22:13.594 "uuid": "4d8d603b-d5b3-4d37-b404-515ddc9edbe6", 00:22:13.594 
"is_configured": true, 00:22:13.594 "data_offset": 0, 00:22:13.594 "data_size": 65536 00:22:13.594 }, 00:22:13.594 { 00:22:13.594 "name": null, 00:22:13.594 "uuid": "d1f47c32-6045-412d-9eb5-2d7e68406387", 00:22:13.594 "is_configured": false, 00:22:13.594 "data_offset": 0, 00:22:13.594 "data_size": 65536 00:22:13.594 }, 00:22:13.594 { 00:22:13.594 "name": null, 00:22:13.594 "uuid": "d2d0404c-0f4c-439d-86da-71383488ca2b", 00:22:13.594 "is_configured": false, 00:22:13.594 "data_offset": 0, 00:22:13.594 "data_size": 65536 00:22:13.594 }, 00:22:13.594 { 00:22:13.594 "name": "BaseBdev4", 00:22:13.594 "uuid": "323d7cc3-7087-4e3d-ba32-cb12414c3b89", 00:22:13.594 "is_configured": true, 00:22:13.594 "data_offset": 0, 00:22:13.594 "data_size": 65536 00:22:13.594 } 00:22:13.594 ] 00:22:13.594 }' 00:22:13.594 11:04:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:13.594 11:04:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.162 11:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.162 11:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:14.162 11:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:22:14.162 11:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:14.421 [2024-07-25 11:04:21.486014] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:14.421 11:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:14.421 11:04:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:14.421 11:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:14.421 11:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:14.421 11:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:14.421 11:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:14.421 11:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:14.421 11:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:14.421 11:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:14.421 11:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:14.421 11:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.421 11:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:14.724 11:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:14.724 "name": "Existed_Raid", 00:22:14.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.724 "strip_size_kb": 64, 00:22:14.724 "state": "configuring", 00:22:14.724 "raid_level": "raid0", 00:22:14.724 "superblock": false, 00:22:14.724 "num_base_bdevs": 4, 00:22:14.724 "num_base_bdevs_discovered": 3, 00:22:14.724 "num_base_bdevs_operational": 4, 00:22:14.724 "base_bdevs_list": [ 00:22:14.724 { 00:22:14.724 "name": "BaseBdev1", 00:22:14.724 "uuid": "4d8d603b-d5b3-4d37-b404-515ddc9edbe6", 00:22:14.724 "is_configured": true, 00:22:14.724 "data_offset": 0, 00:22:14.724 "data_size": 65536 
00:22:14.724 }, 00:22:14.724 { 00:22:14.724 "name": null, 00:22:14.724 "uuid": "d1f47c32-6045-412d-9eb5-2d7e68406387", 00:22:14.724 "is_configured": false, 00:22:14.724 "data_offset": 0, 00:22:14.724 "data_size": 65536 00:22:14.724 }, 00:22:14.724 { 00:22:14.724 "name": "BaseBdev3", 00:22:14.724 "uuid": "d2d0404c-0f4c-439d-86da-71383488ca2b", 00:22:14.724 "is_configured": true, 00:22:14.724 "data_offset": 0, 00:22:14.724 "data_size": 65536 00:22:14.724 }, 00:22:14.724 { 00:22:14.724 "name": "BaseBdev4", 00:22:14.724 "uuid": "323d7cc3-7087-4e3d-ba32-cb12414c3b89", 00:22:14.724 "is_configured": true, 00:22:14.724 "data_offset": 0, 00:22:14.724 "data_size": 65536 00:22:14.724 } 00:22:14.724 ] 00:22:14.724 }' 00:22:14.724 11:04:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:14.724 11:04:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:15.294 11:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:15.294 11:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:15.553 11:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:22:15.553 11:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:15.812 [2024-07-25 11:04:22.765534] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:15.812 11:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:15.812 11:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:15.812 11:04:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:15.812 11:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:15.812 11:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:15.812 11:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:15.812 11:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:15.812 11:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:15.812 11:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:15.812 11:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:15.812 11:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:15.812 11:04:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:16.071 11:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:16.071 "name": "Existed_Raid", 00:22:16.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:16.071 "strip_size_kb": 64, 00:22:16.071 "state": "configuring", 00:22:16.071 "raid_level": "raid0", 00:22:16.071 "superblock": false, 00:22:16.071 "num_base_bdevs": 4, 00:22:16.071 "num_base_bdevs_discovered": 2, 00:22:16.071 "num_base_bdevs_operational": 4, 00:22:16.071 "base_bdevs_list": [ 00:22:16.071 { 00:22:16.071 "name": null, 00:22:16.071 "uuid": "4d8d603b-d5b3-4d37-b404-515ddc9edbe6", 00:22:16.071 "is_configured": false, 00:22:16.071 "data_offset": 0, 00:22:16.071 "data_size": 65536 00:22:16.071 }, 00:22:16.071 { 00:22:16.071 "name": null, 00:22:16.071 "uuid": "d1f47c32-6045-412d-9eb5-2d7e68406387", 
00:22:16.071 "is_configured": false, 00:22:16.071 "data_offset": 0, 00:22:16.071 "data_size": 65536 00:22:16.071 }, 00:22:16.071 { 00:22:16.071 "name": "BaseBdev3", 00:22:16.071 "uuid": "d2d0404c-0f4c-439d-86da-71383488ca2b", 00:22:16.071 "is_configured": true, 00:22:16.071 "data_offset": 0, 00:22:16.071 "data_size": 65536 00:22:16.071 }, 00:22:16.071 { 00:22:16.071 "name": "BaseBdev4", 00:22:16.071 "uuid": "323d7cc3-7087-4e3d-ba32-cb12414c3b89", 00:22:16.071 "is_configured": true, 00:22:16.071 "data_offset": 0, 00:22:16.071 "data_size": 65536 00:22:16.071 } 00:22:16.071 ] 00:22:16.071 }' 00:22:16.071 11:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:16.071 11:04:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:16.637 11:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:16.637 11:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:16.895 11:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:22:16.895 11:04:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:17.153 [2024-07-25 11:04:24.131053] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:17.153 11:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:17.153 11:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:17.153 11:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:17.153 
11:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:17.153 11:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:17.153 11:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:17.153 11:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:17.153 11:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:17.153 11:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:17.153 11:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:17.153 11:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:17.153 11:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:17.411 11:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:17.411 "name": "Existed_Raid", 00:22:17.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.411 "strip_size_kb": 64, 00:22:17.411 "state": "configuring", 00:22:17.411 "raid_level": "raid0", 00:22:17.411 "superblock": false, 00:22:17.411 "num_base_bdevs": 4, 00:22:17.411 "num_base_bdevs_discovered": 3, 00:22:17.411 "num_base_bdevs_operational": 4, 00:22:17.411 "base_bdevs_list": [ 00:22:17.411 { 00:22:17.411 "name": null, 00:22:17.411 "uuid": "4d8d603b-d5b3-4d37-b404-515ddc9edbe6", 00:22:17.411 "is_configured": false, 00:22:17.411 "data_offset": 0, 00:22:17.411 "data_size": 65536 00:22:17.411 }, 00:22:17.411 { 00:22:17.411 "name": "BaseBdev2", 00:22:17.411 "uuid": "d1f47c32-6045-412d-9eb5-2d7e68406387", 00:22:17.411 "is_configured": true, 00:22:17.411 "data_offset": 0, 
00:22:17.411 "data_size": 65536 00:22:17.411 }, 00:22:17.411 { 00:22:17.411 "name": "BaseBdev3", 00:22:17.411 "uuid": "d2d0404c-0f4c-439d-86da-71383488ca2b", 00:22:17.411 "is_configured": true, 00:22:17.411 "data_offset": 0, 00:22:17.411 "data_size": 65536 00:22:17.411 }, 00:22:17.411 { 00:22:17.411 "name": "BaseBdev4", 00:22:17.411 "uuid": "323d7cc3-7087-4e3d-ba32-cb12414c3b89", 00:22:17.411 "is_configured": true, 00:22:17.411 "data_offset": 0, 00:22:17.411 "data_size": 65536 00:22:17.411 } 00:22:17.411 ] 00:22:17.411 }' 00:22:17.411 11:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:17.411 11:04:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:17.977 11:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:17.977 11:04:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:18.235 11:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:22:18.235 11:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.235 11:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:18.235 11:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 4d8d603b-d5b3-4d37-b404-515ddc9edbe6 00:22:18.494 [2024-07-25 11:04:25.588949] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:18.494 [2024-07-25 11:04:25.588996] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000008200 00:22:18.494 [2024-07-25 11:04:25.589008] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:22:18.494 [2024-07-25 11:04:25.589339] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010b20 00:22:18.494 [2024-07-25 11:04:25.589540] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:18.494 [2024-07-25 11:04:25.589558] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:22:18.494 [2024-07-25 11:04:25.589846] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:18.494 NewBaseBdev 00:22:18.494 11:04:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:22:18.494 11:04:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:22:18.494 11:04:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:18.494 11:04:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:22:18.494 11:04:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:18.494 11:04:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:18.494 11:04:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:18.752 11:04:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:19.011 [ 00:22:19.011 { 00:22:19.011 "name": "NewBaseBdev", 00:22:19.011 "aliases": [ 00:22:19.011 "4d8d603b-d5b3-4d37-b404-515ddc9edbe6" 00:22:19.011 ], 00:22:19.011 "product_name": "Malloc disk", 
00:22:19.011 "block_size": 512, 00:22:19.011 "num_blocks": 65536, 00:22:19.011 "uuid": "4d8d603b-d5b3-4d37-b404-515ddc9edbe6", 00:22:19.011 "assigned_rate_limits": { 00:22:19.011 "rw_ios_per_sec": 0, 00:22:19.011 "rw_mbytes_per_sec": 0, 00:22:19.011 "r_mbytes_per_sec": 0, 00:22:19.011 "w_mbytes_per_sec": 0 00:22:19.011 }, 00:22:19.011 "claimed": true, 00:22:19.011 "claim_type": "exclusive_write", 00:22:19.011 "zoned": false, 00:22:19.011 "supported_io_types": { 00:22:19.011 "read": true, 00:22:19.011 "write": true, 00:22:19.011 "unmap": true, 00:22:19.011 "flush": true, 00:22:19.011 "reset": true, 00:22:19.011 "nvme_admin": false, 00:22:19.011 "nvme_io": false, 00:22:19.011 "nvme_io_md": false, 00:22:19.011 "write_zeroes": true, 00:22:19.011 "zcopy": true, 00:22:19.011 "get_zone_info": false, 00:22:19.011 "zone_management": false, 00:22:19.011 "zone_append": false, 00:22:19.011 "compare": false, 00:22:19.011 "compare_and_write": false, 00:22:19.011 "abort": true, 00:22:19.011 "seek_hole": false, 00:22:19.011 "seek_data": false, 00:22:19.011 "copy": true, 00:22:19.011 "nvme_iov_md": false 00:22:19.011 }, 00:22:19.011 "memory_domains": [ 00:22:19.011 { 00:22:19.011 "dma_device_id": "system", 00:22:19.011 "dma_device_type": 1 00:22:19.011 }, 00:22:19.011 { 00:22:19.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:19.011 "dma_device_type": 2 00:22:19.011 } 00:22:19.011 ], 00:22:19.011 "driver_specific": {} 00:22:19.011 } 00:22:19.011 ] 00:22:19.011 11:04:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:22:19.011 11:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:22:19.011 11:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:19.011 11:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:19.011 11:04:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:19.011 11:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:19.011 11:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:19.011 11:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:19.011 11:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:19.011 11:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:19.011 11:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:19.011 11:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:19.011 11:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:19.269 11:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:19.269 "name": "Existed_Raid", 00:22:19.269 "uuid": "46da737b-6aa9-4980-af4e-cddd15d2c501", 00:22:19.269 "strip_size_kb": 64, 00:22:19.269 "state": "online", 00:22:19.269 "raid_level": "raid0", 00:22:19.269 "superblock": false, 00:22:19.269 "num_base_bdevs": 4, 00:22:19.269 "num_base_bdevs_discovered": 4, 00:22:19.269 "num_base_bdevs_operational": 4, 00:22:19.269 "base_bdevs_list": [ 00:22:19.269 { 00:22:19.269 "name": "NewBaseBdev", 00:22:19.269 "uuid": "4d8d603b-d5b3-4d37-b404-515ddc9edbe6", 00:22:19.269 "is_configured": true, 00:22:19.269 "data_offset": 0, 00:22:19.269 "data_size": 65536 00:22:19.269 }, 00:22:19.269 { 00:22:19.269 "name": "BaseBdev2", 00:22:19.269 "uuid": "d1f47c32-6045-412d-9eb5-2d7e68406387", 00:22:19.269 "is_configured": true, 00:22:19.269 "data_offset": 0, 00:22:19.269 "data_size": 65536 00:22:19.269 }, 
00:22:19.269 { 00:22:19.269 "name": "BaseBdev3", 00:22:19.269 "uuid": "d2d0404c-0f4c-439d-86da-71383488ca2b", 00:22:19.269 "is_configured": true, 00:22:19.269 "data_offset": 0, 00:22:19.269 "data_size": 65536 00:22:19.269 }, 00:22:19.269 { 00:22:19.269 "name": "BaseBdev4", 00:22:19.269 "uuid": "323d7cc3-7087-4e3d-ba32-cb12414c3b89", 00:22:19.269 "is_configured": true, 00:22:19.269 "data_offset": 0, 00:22:19.269 "data_size": 65536 00:22:19.269 } 00:22:19.269 ] 00:22:19.269 }' 00:22:19.269 11:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:19.269 11:04:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:19.833 11:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:22:19.833 11:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:19.833 11:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:19.833 11:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:19.833 11:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:19.833 11:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:19.833 11:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:19.833 11:04:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:20.092 [2024-07-25 11:04:27.057403] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:20.092 11:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:20.092 "name": "Existed_Raid", 00:22:20.092 "aliases": [ 00:22:20.092 "46da737b-6aa9-4980-af4e-cddd15d2c501" 
00:22:20.092 ], 00:22:20.092 "product_name": "Raid Volume", 00:22:20.092 "block_size": 512, 00:22:20.092 "num_blocks": 262144, 00:22:20.092 "uuid": "46da737b-6aa9-4980-af4e-cddd15d2c501", 00:22:20.092 "assigned_rate_limits": { 00:22:20.092 "rw_ios_per_sec": 0, 00:22:20.092 "rw_mbytes_per_sec": 0, 00:22:20.092 "r_mbytes_per_sec": 0, 00:22:20.092 "w_mbytes_per_sec": 0 00:22:20.092 }, 00:22:20.092 "claimed": false, 00:22:20.092 "zoned": false, 00:22:20.092 "supported_io_types": { 00:22:20.092 "read": true, 00:22:20.092 "write": true, 00:22:20.092 "unmap": true, 00:22:20.092 "flush": true, 00:22:20.092 "reset": true, 00:22:20.092 "nvme_admin": false, 00:22:20.092 "nvme_io": false, 00:22:20.092 "nvme_io_md": false, 00:22:20.092 "write_zeroes": true, 00:22:20.092 "zcopy": false, 00:22:20.092 "get_zone_info": false, 00:22:20.092 "zone_management": false, 00:22:20.092 "zone_append": false, 00:22:20.092 "compare": false, 00:22:20.092 "compare_and_write": false, 00:22:20.092 "abort": false, 00:22:20.092 "seek_hole": false, 00:22:20.092 "seek_data": false, 00:22:20.092 "copy": false, 00:22:20.092 "nvme_iov_md": false 00:22:20.092 }, 00:22:20.092 "memory_domains": [ 00:22:20.092 { 00:22:20.092 "dma_device_id": "system", 00:22:20.092 "dma_device_type": 1 00:22:20.092 }, 00:22:20.092 { 00:22:20.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.092 "dma_device_type": 2 00:22:20.092 }, 00:22:20.092 { 00:22:20.092 "dma_device_id": "system", 00:22:20.092 "dma_device_type": 1 00:22:20.092 }, 00:22:20.092 { 00:22:20.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.092 "dma_device_type": 2 00:22:20.092 }, 00:22:20.092 { 00:22:20.092 "dma_device_id": "system", 00:22:20.092 "dma_device_type": 1 00:22:20.092 }, 00:22:20.092 { 00:22:20.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.092 "dma_device_type": 2 00:22:20.092 }, 00:22:20.092 { 00:22:20.092 "dma_device_id": "system", 00:22:20.092 "dma_device_type": 1 00:22:20.092 }, 00:22:20.092 { 00:22:20.092 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:22:20.092 "dma_device_type": 2 00:22:20.092 } 00:22:20.092 ], 00:22:20.092 "driver_specific": { 00:22:20.092 "raid": { 00:22:20.092 "uuid": "46da737b-6aa9-4980-af4e-cddd15d2c501", 00:22:20.092 "strip_size_kb": 64, 00:22:20.092 "state": "online", 00:22:20.092 "raid_level": "raid0", 00:22:20.092 "superblock": false, 00:22:20.092 "num_base_bdevs": 4, 00:22:20.092 "num_base_bdevs_discovered": 4, 00:22:20.092 "num_base_bdevs_operational": 4, 00:22:20.092 "base_bdevs_list": [ 00:22:20.092 { 00:22:20.092 "name": "NewBaseBdev", 00:22:20.092 "uuid": "4d8d603b-d5b3-4d37-b404-515ddc9edbe6", 00:22:20.092 "is_configured": true, 00:22:20.092 "data_offset": 0, 00:22:20.092 "data_size": 65536 00:22:20.092 }, 00:22:20.092 { 00:22:20.092 "name": "BaseBdev2", 00:22:20.092 "uuid": "d1f47c32-6045-412d-9eb5-2d7e68406387", 00:22:20.092 "is_configured": true, 00:22:20.092 "data_offset": 0, 00:22:20.092 "data_size": 65536 00:22:20.092 }, 00:22:20.092 { 00:22:20.092 "name": "BaseBdev3", 00:22:20.092 "uuid": "d2d0404c-0f4c-439d-86da-71383488ca2b", 00:22:20.092 "is_configured": true, 00:22:20.092 "data_offset": 0, 00:22:20.092 "data_size": 65536 00:22:20.092 }, 00:22:20.092 { 00:22:20.092 "name": "BaseBdev4", 00:22:20.092 "uuid": "323d7cc3-7087-4e3d-ba32-cb12414c3b89", 00:22:20.092 "is_configured": true, 00:22:20.092 "data_offset": 0, 00:22:20.092 "data_size": 65536 00:22:20.092 } 00:22:20.092 ] 00:22:20.092 } 00:22:20.092 } 00:22:20.092 }' 00:22:20.092 11:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:20.092 11:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:22:20.092 BaseBdev2 00:22:20.092 BaseBdev3 00:22:20.092 BaseBdev4' 00:22:20.092 11:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:20.092 11:04:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:22:20.092 11:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:20.351 11:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:20.351 "name": "NewBaseBdev", 00:22:20.351 "aliases": [ 00:22:20.351 "4d8d603b-d5b3-4d37-b404-515ddc9edbe6" 00:22:20.351 ], 00:22:20.351 "product_name": "Malloc disk", 00:22:20.351 "block_size": 512, 00:22:20.351 "num_blocks": 65536, 00:22:20.351 "uuid": "4d8d603b-d5b3-4d37-b404-515ddc9edbe6", 00:22:20.351 "assigned_rate_limits": { 00:22:20.351 "rw_ios_per_sec": 0, 00:22:20.351 "rw_mbytes_per_sec": 0, 00:22:20.351 "r_mbytes_per_sec": 0, 00:22:20.351 "w_mbytes_per_sec": 0 00:22:20.351 }, 00:22:20.351 "claimed": true, 00:22:20.351 "claim_type": "exclusive_write", 00:22:20.351 "zoned": false, 00:22:20.351 "supported_io_types": { 00:22:20.351 "read": true, 00:22:20.351 "write": true, 00:22:20.351 "unmap": true, 00:22:20.351 "flush": true, 00:22:20.351 "reset": true, 00:22:20.351 "nvme_admin": false, 00:22:20.351 "nvme_io": false, 00:22:20.351 "nvme_io_md": false, 00:22:20.351 "write_zeroes": true, 00:22:20.351 "zcopy": true, 00:22:20.351 "get_zone_info": false, 00:22:20.351 "zone_management": false, 00:22:20.351 "zone_append": false, 00:22:20.351 "compare": false, 00:22:20.351 "compare_and_write": false, 00:22:20.351 "abort": true, 00:22:20.351 "seek_hole": false, 00:22:20.351 "seek_data": false, 00:22:20.351 "copy": true, 00:22:20.351 "nvme_iov_md": false 00:22:20.351 }, 00:22:20.351 "memory_domains": [ 00:22:20.351 { 00:22:20.351 "dma_device_id": "system", 00:22:20.351 "dma_device_type": 1 00:22:20.351 }, 00:22:20.351 { 00:22:20.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.351 "dma_device_type": 2 00:22:20.351 } 00:22:20.351 ], 00:22:20.351 "driver_specific": {} 00:22:20.351 }' 00:22:20.351 11:04:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:20.351 11:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:20.351 11:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:20.351 11:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:20.610 11:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:20.610 11:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:20.610 11:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:20.610 11:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:20.610 11:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:20.610 11:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:20.610 11:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:20.610 11:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:20.610 11:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:20.610 11:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:20.610 11:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:20.868 11:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:20.868 "name": "BaseBdev2", 00:22:20.868 "aliases": [ 00:22:20.868 "d1f47c32-6045-412d-9eb5-2d7e68406387" 00:22:20.868 ], 00:22:20.868 "product_name": "Malloc disk", 00:22:20.868 "block_size": 512, 00:22:20.868 "num_blocks": 65536, 00:22:20.868 "uuid": 
"d1f47c32-6045-412d-9eb5-2d7e68406387", 00:22:20.868 "assigned_rate_limits": { 00:22:20.868 "rw_ios_per_sec": 0, 00:22:20.868 "rw_mbytes_per_sec": 0, 00:22:20.868 "r_mbytes_per_sec": 0, 00:22:20.868 "w_mbytes_per_sec": 0 00:22:20.868 }, 00:22:20.868 "claimed": true, 00:22:20.868 "claim_type": "exclusive_write", 00:22:20.868 "zoned": false, 00:22:20.868 "supported_io_types": { 00:22:20.868 "read": true, 00:22:20.868 "write": true, 00:22:20.868 "unmap": true, 00:22:20.868 "flush": true, 00:22:20.868 "reset": true, 00:22:20.868 "nvme_admin": false, 00:22:20.868 "nvme_io": false, 00:22:20.868 "nvme_io_md": false, 00:22:20.868 "write_zeroes": true, 00:22:20.868 "zcopy": true, 00:22:20.868 "get_zone_info": false, 00:22:20.868 "zone_management": false, 00:22:20.868 "zone_append": false, 00:22:20.868 "compare": false, 00:22:20.868 "compare_and_write": false, 00:22:20.868 "abort": true, 00:22:20.868 "seek_hole": false, 00:22:20.868 "seek_data": false, 00:22:20.868 "copy": true, 00:22:20.868 "nvme_iov_md": false 00:22:20.868 }, 00:22:20.868 "memory_domains": [ 00:22:20.868 { 00:22:20.868 "dma_device_id": "system", 00:22:20.868 "dma_device_type": 1 00:22:20.868 }, 00:22:20.868 { 00:22:20.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:20.868 "dma_device_type": 2 00:22:20.868 } 00:22:20.868 ], 00:22:20.868 "driver_specific": {} 00:22:20.868 }' 00:22:20.868 11:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:20.868 11:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:20.868 11:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:20.868 11:04:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:21.126 11:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:21.126 11:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:21.126 11:04:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:21.126 11:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:21.126 11:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:21.126 11:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:21.126 11:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:21.126 11:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:21.126 11:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:21.126 11:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:21.126 11:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:21.384 11:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:21.384 "name": "BaseBdev3", 00:22:21.384 "aliases": [ 00:22:21.384 "d2d0404c-0f4c-439d-86da-71383488ca2b" 00:22:21.384 ], 00:22:21.384 "product_name": "Malloc disk", 00:22:21.384 "block_size": 512, 00:22:21.384 "num_blocks": 65536, 00:22:21.384 "uuid": "d2d0404c-0f4c-439d-86da-71383488ca2b", 00:22:21.384 "assigned_rate_limits": { 00:22:21.384 "rw_ios_per_sec": 0, 00:22:21.384 "rw_mbytes_per_sec": 0, 00:22:21.384 "r_mbytes_per_sec": 0, 00:22:21.384 "w_mbytes_per_sec": 0 00:22:21.384 }, 00:22:21.384 "claimed": true, 00:22:21.384 "claim_type": "exclusive_write", 00:22:21.384 "zoned": false, 00:22:21.384 "supported_io_types": { 00:22:21.384 "read": true, 00:22:21.384 "write": true, 00:22:21.384 "unmap": true, 00:22:21.384 "flush": true, 00:22:21.384 "reset": true, 00:22:21.384 "nvme_admin": false, 00:22:21.384 "nvme_io": false, 00:22:21.384 "nvme_io_md": false, 
00:22:21.384 "write_zeroes": true, 00:22:21.384 "zcopy": true, 00:22:21.384 "get_zone_info": false, 00:22:21.384 "zone_management": false, 00:22:21.384 "zone_append": false, 00:22:21.384 "compare": false, 00:22:21.384 "compare_and_write": false, 00:22:21.384 "abort": true, 00:22:21.384 "seek_hole": false, 00:22:21.384 "seek_data": false, 00:22:21.384 "copy": true, 00:22:21.384 "nvme_iov_md": false 00:22:21.384 }, 00:22:21.384 "memory_domains": [ 00:22:21.384 { 00:22:21.384 "dma_device_id": "system", 00:22:21.384 "dma_device_type": 1 00:22:21.384 }, 00:22:21.384 { 00:22:21.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:21.384 "dma_device_type": 2 00:22:21.384 } 00:22:21.384 ], 00:22:21.384 "driver_specific": {} 00:22:21.384 }' 00:22:21.384 11:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:21.642 11:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:21.642 11:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:21.642 11:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:21.642 11:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:21.642 11:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:21.642 11:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:21.642 11:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:21.642 11:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:21.642 11:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:21.642 11:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:21.900 11:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:21.900 11:04:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:21.900 11:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:21.900 11:04:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:21.900 11:04:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:21.900 "name": "BaseBdev4", 00:22:21.900 "aliases": [ 00:22:21.900 "323d7cc3-7087-4e3d-ba32-cb12414c3b89" 00:22:21.900 ], 00:22:21.900 "product_name": "Malloc disk", 00:22:21.900 "block_size": 512, 00:22:21.900 "num_blocks": 65536, 00:22:21.900 "uuid": "323d7cc3-7087-4e3d-ba32-cb12414c3b89", 00:22:21.900 "assigned_rate_limits": { 00:22:21.900 "rw_ios_per_sec": 0, 00:22:21.900 "rw_mbytes_per_sec": 0, 00:22:21.900 "r_mbytes_per_sec": 0, 00:22:21.900 "w_mbytes_per_sec": 0 00:22:21.900 }, 00:22:21.900 "claimed": true, 00:22:21.900 "claim_type": "exclusive_write", 00:22:21.900 "zoned": false, 00:22:21.900 "supported_io_types": { 00:22:21.900 "read": true, 00:22:21.900 "write": true, 00:22:21.900 "unmap": true, 00:22:21.900 "flush": true, 00:22:21.900 "reset": true, 00:22:21.900 "nvme_admin": false, 00:22:21.900 "nvme_io": false, 00:22:21.900 "nvme_io_md": false, 00:22:21.900 "write_zeroes": true, 00:22:21.900 "zcopy": true, 00:22:21.900 "get_zone_info": false, 00:22:21.900 "zone_management": false, 00:22:21.900 "zone_append": false, 00:22:21.900 "compare": false, 00:22:21.900 "compare_and_write": false, 00:22:21.900 "abort": true, 00:22:21.900 "seek_hole": false, 00:22:21.900 "seek_data": false, 00:22:21.900 "copy": true, 00:22:21.900 "nvme_iov_md": false 00:22:21.900 }, 00:22:21.900 "memory_domains": [ 00:22:21.900 { 00:22:21.900 "dma_device_id": "system", 00:22:21.900 "dma_device_type": 1 00:22:21.900 }, 00:22:21.900 { 00:22:21.900 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:22:21.900 "dma_device_type": 2 00:22:21.900 } 00:22:21.900 ], 00:22:21.900 "driver_specific": {} 00:22:21.900 }' 00:22:21.900 11:04:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:22.157 11:04:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:22.157 11:04:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:22.157 11:04:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:22.157 11:04:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:22.157 11:04:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:22.158 11:04:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:22.158 11:04:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:22.158 11:04:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:22.158 11:04:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:22.415 11:04:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:22.415 11:04:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:22.415 11:04:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:22.674 [2024-07-25 11:04:29.551797] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:22.674 [2024-07-25 11:04:29.551831] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:22.674 [2024-07-25 11:04:29.551913] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:22.674 [2024-07-25 11:04:29.551991] 
bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:22.674 [2024-07-25 11:04:29.552008] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:22:22.674 11:04:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 3635055 00:22:22.674 11:04:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 3635055 ']' 00:22:22.674 11:04:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 3635055 00:22:22.674 11:04:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:22:22.674 11:04:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:22.674 11:04:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3635055 00:22:22.674 11:04:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:22.674 11:04:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:22.674 11:04:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3635055' 00:22:22.674 killing process with pid 3635055 00:22:22.674 11:04:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 3635055 00:22:22.674 [2024-07-25 11:04:29.624946] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:22.674 11:04:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 3635055 00:22:23.240 [2024-07-25 11:04:30.080843] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:22:25.139 00:22:25.139 real 0m32.997s 00:22:25.139 user 0m57.810s 00:22:25.139 sys 0m5.579s 00:22:25.139 
11:04:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:25.139 ************************************ 00:22:25.139 END TEST raid_state_function_test 00:22:25.139 ************************************ 00:22:25.139 11:04:31 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:22:25.139 11:04:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:22:25.139 11:04:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:25.139 11:04:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:25.139 ************************************ 00:22:25.139 START TEST raid_state_function_test_sb 00:22:25.139 ************************************ 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= 
num_base_bdevs )) 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=3641290 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3641290' 00:22:25.139 Process raid pid: 3641290 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 3641290 /var/tmp/spdk-raid.sock 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 3641290 ']' 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:25.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:25.139 11:04:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:25.139 [2024-07-25 11:04:31.984197] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:22:25.139 [2024-07-25 11:04:31.984314] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:25.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:25.139 EAL: Requested device 0000:3d:01.0 cannot be used 00:22:25.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:25.139 EAL: Requested device 0000:3d:01.1 cannot be used 00:22:25.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:25.139 EAL: Requested device 0000:3d:01.2 cannot be used 00:22:25.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:25.139 EAL: Requested device 0000:3d:01.3 cannot be used 00:22:25.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:25.139 EAL: Requested device 0000:3d:01.4 cannot be used 00:22:25.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:25.139 EAL: Requested device 0000:3d:01.5 cannot be used 00:22:25.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:25.139 EAL: Requested device 0000:3d:01.6 cannot be used 00:22:25.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:25.139 EAL: Requested device 0000:3d:01.7 cannot be used 00:22:25.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:25.139 EAL: Requested device 0000:3d:02.0 cannot be used 00:22:25.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:25.139 EAL: Requested device 0000:3d:02.1 cannot be used 00:22:25.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:25.139 EAL: Requested device 0000:3d:02.2 cannot be used 00:22:25.139 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:25.139 EAL: Requested device 0000:3d:02.3 cannot be used 00:22:25.139 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:25.140 EAL: Requested device 0000:3d:02.4 cannot be used 00:22:25.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:25.140 EAL: Requested device 0000:3d:02.5 cannot be used 00:22:25.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:25.140 EAL: Requested device 0000:3d:02.6 cannot be used 00:22:25.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:25.140 EAL: Requested device 0000:3d:02.7 cannot be used 00:22:25.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:25.140 EAL: Requested device 0000:3f:01.0 cannot be used 00:22:25.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:25.140 EAL: Requested device 0000:3f:01.1 cannot be used 00:22:25.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:25.140 EAL: Requested device 0000:3f:01.2 cannot be used 00:22:25.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:25.140 EAL: Requested device 0000:3f:01.3 cannot be used 00:22:25.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:25.140 EAL: Requested device 0000:3f:01.4 cannot be used 00:22:25.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:25.140 EAL: Requested device 0000:3f:01.5 cannot be used 00:22:25.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:25.140 EAL: Requested device 0000:3f:01.6 cannot be used 00:22:25.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:25.140 EAL: Requested device 0000:3f:01.7 cannot be used 00:22:25.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:25.140 EAL: Requested device 0000:3f:02.0 cannot be used 00:22:25.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:25.140 EAL: Requested device 0000:3f:02.1 cannot be used 00:22:25.140 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:25.140 EAL: Requested device 0000:3f:02.2 cannot be used 00:22:25.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:25.140 EAL: Requested device 0000:3f:02.3 cannot be used 00:22:25.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:25.140 EAL: Requested device 0000:3f:02.4 cannot be used 00:22:25.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:25.140 EAL: Requested device 0000:3f:02.5 cannot be used 00:22:25.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:25.140 EAL: Requested device 0000:3f:02.6 cannot be used 00:22:25.140 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:25.140 EAL: Requested device 0000:3f:02.7 cannot be used 00:22:25.140 [2024-07-25 11:04:32.211755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.399 [2024-07-25 11:04:32.504259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:25.966 [2024-07-25 11:04:32.865719] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:25.966 [2024-07-25 11:04:32.865752] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:25.966 11:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:25.966 11:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:22:25.966 11:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:26.225 [2024-07-25 11:04:33.269458] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:26.225 [2024-07-25 11:04:33.269511] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev1 doesn't exist now 00:22:26.225 [2024-07-25 11:04:33.269525] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:26.225 [2024-07-25 11:04:33.269542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:26.225 [2024-07-25 11:04:33.269553] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:26.225 [2024-07-25 11:04:33.269569] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:26.225 [2024-07-25 11:04:33.269580] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:26.225 [2024-07-25 11:04:33.269595] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:26.225 11:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:26.225 11:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:26.225 11:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:26.225 11:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:26.225 11:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:26.225 11:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:26.225 11:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:26.225 11:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:26.225 11:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:26.225 11:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # 
local tmp 00:22:26.225 11:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.225 11:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:26.483 11:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:26.483 "name": "Existed_Raid", 00:22:26.483 "uuid": "8e567e84-4c2c-493c-89b1-1a999ed175d1", 00:22:26.483 "strip_size_kb": 64, 00:22:26.483 "state": "configuring", 00:22:26.483 "raid_level": "raid0", 00:22:26.483 "superblock": true, 00:22:26.483 "num_base_bdevs": 4, 00:22:26.483 "num_base_bdevs_discovered": 0, 00:22:26.483 "num_base_bdevs_operational": 4, 00:22:26.483 "base_bdevs_list": [ 00:22:26.483 { 00:22:26.483 "name": "BaseBdev1", 00:22:26.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.483 "is_configured": false, 00:22:26.483 "data_offset": 0, 00:22:26.483 "data_size": 0 00:22:26.483 }, 00:22:26.483 { 00:22:26.483 "name": "BaseBdev2", 00:22:26.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.483 "is_configured": false, 00:22:26.483 "data_offset": 0, 00:22:26.483 "data_size": 0 00:22:26.483 }, 00:22:26.483 { 00:22:26.483 "name": "BaseBdev3", 00:22:26.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.483 "is_configured": false, 00:22:26.483 "data_offset": 0, 00:22:26.483 "data_size": 0 00:22:26.483 }, 00:22:26.483 { 00:22:26.483 "name": "BaseBdev4", 00:22:26.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:26.483 "is_configured": false, 00:22:26.483 "data_offset": 0, 00:22:26.483 "data_size": 0 00:22:26.483 } 00:22:26.483 ] 00:22:26.483 }' 00:22:26.483 11:04:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:26.483 11:04:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:27.050 
11:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:27.308 [2024-07-25 11:04:34.247921] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:27.308 [2024-07-25 11:04:34.247966] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:22:27.308 11:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:27.308 [2024-07-25 11:04:34.408429] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:27.308 [2024-07-25 11:04:34.408476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:27.308 [2024-07-25 11:04:34.408490] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:27.308 [2024-07-25 11:04:34.408514] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:27.308 [2024-07-25 11:04:34.408526] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:27.308 [2024-07-25 11:04:34.408541] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:27.308 [2024-07-25 11:04:34.408553] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:27.308 [2024-07-25 11:04:34.408568] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:27.567 11:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 
512 -b BaseBdev1 00:22:27.567 [2024-07-25 11:04:34.629735] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:27.567 BaseBdev1 00:22:27.567 11:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:22:27.567 11:04:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:22:27.567 11:04:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:27.567 11:04:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:27.567 11:04:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:27.567 11:04:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:27.567 11:04:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:27.825 11:04:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:28.084 [ 00:22:28.084 { 00:22:28.084 "name": "BaseBdev1", 00:22:28.084 "aliases": [ 00:22:28.084 "2f74013c-9c6b-43d9-ac6d-ddefdb0d0e95" 00:22:28.084 ], 00:22:28.084 "product_name": "Malloc disk", 00:22:28.084 "block_size": 512, 00:22:28.084 "num_blocks": 65536, 00:22:28.084 "uuid": "2f74013c-9c6b-43d9-ac6d-ddefdb0d0e95", 00:22:28.084 "assigned_rate_limits": { 00:22:28.084 "rw_ios_per_sec": 0, 00:22:28.084 "rw_mbytes_per_sec": 0, 00:22:28.084 "r_mbytes_per_sec": 0, 00:22:28.084 "w_mbytes_per_sec": 0 00:22:28.084 }, 00:22:28.084 "claimed": true, 00:22:28.084 "claim_type": "exclusive_write", 00:22:28.084 "zoned": false, 00:22:28.084 "supported_io_types": { 00:22:28.084 "read": true, 00:22:28.084 
"write": true, 00:22:28.084 "unmap": true, 00:22:28.084 "flush": true, 00:22:28.084 "reset": true, 00:22:28.084 "nvme_admin": false, 00:22:28.084 "nvme_io": false, 00:22:28.084 "nvme_io_md": false, 00:22:28.084 "write_zeroes": true, 00:22:28.084 "zcopy": true, 00:22:28.084 "get_zone_info": false, 00:22:28.084 "zone_management": false, 00:22:28.084 "zone_append": false, 00:22:28.084 "compare": false, 00:22:28.084 "compare_and_write": false, 00:22:28.084 "abort": true, 00:22:28.084 "seek_hole": false, 00:22:28.084 "seek_data": false, 00:22:28.084 "copy": true, 00:22:28.084 "nvme_iov_md": false 00:22:28.084 }, 00:22:28.084 "memory_domains": [ 00:22:28.084 { 00:22:28.084 "dma_device_id": "system", 00:22:28.084 "dma_device_type": 1 00:22:28.084 }, 00:22:28.084 { 00:22:28.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:28.084 "dma_device_type": 2 00:22:28.084 } 00:22:28.084 ], 00:22:28.084 "driver_specific": {} 00:22:28.084 } 00:22:28.084 ] 00:22:28.084 11:04:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:28.084 11:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:28.084 11:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:28.084 11:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:28.084 11:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:28.084 11:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:28.084 11:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:28.084 11:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:28.084 11:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 
-- # local num_base_bdevs 00:22:28.084 11:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:28.084 11:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:28.085 11:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.085 11:04:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:28.085 11:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:28.085 "name": "Existed_Raid", 00:22:28.085 "uuid": "8f406d64-3f88-4572-a7d9-7bbe0f4028d8", 00:22:28.085 "strip_size_kb": 64, 00:22:28.085 "state": "configuring", 00:22:28.085 "raid_level": "raid0", 00:22:28.085 "superblock": true, 00:22:28.085 "num_base_bdevs": 4, 00:22:28.085 "num_base_bdevs_discovered": 1, 00:22:28.085 "num_base_bdevs_operational": 4, 00:22:28.085 "base_bdevs_list": [ 00:22:28.085 { 00:22:28.085 "name": "BaseBdev1", 00:22:28.085 "uuid": "2f74013c-9c6b-43d9-ac6d-ddefdb0d0e95", 00:22:28.085 "is_configured": true, 00:22:28.085 "data_offset": 2048, 00:22:28.085 "data_size": 63488 00:22:28.085 }, 00:22:28.085 { 00:22:28.085 "name": "BaseBdev2", 00:22:28.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.085 "is_configured": false, 00:22:28.085 "data_offset": 0, 00:22:28.085 "data_size": 0 00:22:28.085 }, 00:22:28.085 { 00:22:28.085 "name": "BaseBdev3", 00:22:28.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.085 "is_configured": false, 00:22:28.085 "data_offset": 0, 00:22:28.085 "data_size": 0 00:22:28.085 }, 00:22:28.085 { 00:22:28.085 "name": "BaseBdev4", 00:22:28.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.085 "is_configured": false, 00:22:28.085 "data_offset": 0, 00:22:28.085 "data_size": 0 00:22:28.085 } 00:22:28.085 
] 00:22:28.085 }' 00:22:28.085 11:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:28.085 11:04:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:28.687 11:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:28.687 [2024-07-25 11:04:35.804936] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:28.687 [2024-07-25 11:04:35.804989] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:28.946 11:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:28.946 [2024-07-25 11:04:35.973664] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:28.946 [2024-07-25 11:04:35.975949] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:28.946 [2024-07-25 11:04:35.975991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:28.946 [2024-07-25 11:04:35.976006] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:28.946 [2024-07-25 11:04:35.976022] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:28.946 [2024-07-25 11:04:35.976034] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:28.946 [2024-07-25 11:04:35.976052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:28.946 11:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 
)) 00:22:28.946 11:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:28.946 11:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:28.946 11:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:28.946 11:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:28.946 11:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:28.946 11:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:28.946 11:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:28.946 11:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:28.946 11:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:28.946 11:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:28.946 11:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:28.946 11:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.946 11:04:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:29.204 11:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:29.204 "name": "Existed_Raid", 00:22:29.204 "uuid": "c681f03c-19ec-4a4c-b205-2ed29aba44b3", 00:22:29.204 "strip_size_kb": 64, 00:22:29.204 "state": "configuring", 00:22:29.204 "raid_level": "raid0", 00:22:29.204 "superblock": true, 
00:22:29.204 "num_base_bdevs": 4, 00:22:29.204 "num_base_bdevs_discovered": 1, 00:22:29.204 "num_base_bdevs_operational": 4, 00:22:29.204 "base_bdevs_list": [ 00:22:29.204 { 00:22:29.204 "name": "BaseBdev1", 00:22:29.204 "uuid": "2f74013c-9c6b-43d9-ac6d-ddefdb0d0e95", 00:22:29.204 "is_configured": true, 00:22:29.204 "data_offset": 2048, 00:22:29.204 "data_size": 63488 00:22:29.204 }, 00:22:29.204 { 00:22:29.204 "name": "BaseBdev2", 00:22:29.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.204 "is_configured": false, 00:22:29.204 "data_offset": 0, 00:22:29.204 "data_size": 0 00:22:29.204 }, 00:22:29.204 { 00:22:29.204 "name": "BaseBdev3", 00:22:29.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.204 "is_configured": false, 00:22:29.204 "data_offset": 0, 00:22:29.204 "data_size": 0 00:22:29.204 }, 00:22:29.204 { 00:22:29.204 "name": "BaseBdev4", 00:22:29.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:29.204 "is_configured": false, 00:22:29.204 "data_offset": 0, 00:22:29.204 "data_size": 0 00:22:29.204 } 00:22:29.204 ] 00:22:29.204 }' 00:22:29.204 11:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:29.204 11:04:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:29.768 11:04:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:30.025 [2024-07-25 11:04:37.056979] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:30.025 BaseBdev2 00:22:30.025 11:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:22:30.025 11:04:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:22:30.025 11:04:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local 
bdev_timeout= 00:22:30.025 11:04:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:30.025 11:04:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:30.025 11:04:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:30.025 11:04:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:30.283 11:04:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:30.283 [ 00:22:30.283 { 00:22:30.283 "name": "BaseBdev2", 00:22:30.283 "aliases": [ 00:22:30.283 "f8419620-2633-4522-b2b4-ce65266270ef" 00:22:30.283 ], 00:22:30.283 "product_name": "Malloc disk", 00:22:30.283 "block_size": 512, 00:22:30.283 "num_blocks": 65536, 00:22:30.283 "uuid": "f8419620-2633-4522-b2b4-ce65266270ef", 00:22:30.283 "assigned_rate_limits": { 00:22:30.283 "rw_ios_per_sec": 0, 00:22:30.283 "rw_mbytes_per_sec": 0, 00:22:30.283 "r_mbytes_per_sec": 0, 00:22:30.283 "w_mbytes_per_sec": 0 00:22:30.283 }, 00:22:30.283 "claimed": true, 00:22:30.283 "claim_type": "exclusive_write", 00:22:30.283 "zoned": false, 00:22:30.283 "supported_io_types": { 00:22:30.283 "read": true, 00:22:30.283 "write": true, 00:22:30.283 "unmap": true, 00:22:30.283 "flush": true, 00:22:30.283 "reset": true, 00:22:30.283 "nvme_admin": false, 00:22:30.283 "nvme_io": false, 00:22:30.283 "nvme_io_md": false, 00:22:30.283 "write_zeroes": true, 00:22:30.283 "zcopy": true, 00:22:30.283 "get_zone_info": false, 00:22:30.283 "zone_management": false, 00:22:30.283 "zone_append": false, 00:22:30.283 "compare": false, 00:22:30.283 "compare_and_write": false, 00:22:30.283 "abort": true, 00:22:30.283 "seek_hole": false, 
00:22:30.283 "seek_data": false, 00:22:30.283 "copy": true, 00:22:30.283 "nvme_iov_md": false 00:22:30.283 }, 00:22:30.283 "memory_domains": [ 00:22:30.283 { 00:22:30.283 "dma_device_id": "system", 00:22:30.283 "dma_device_type": 1 00:22:30.283 }, 00:22:30.283 { 00:22:30.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:30.283 "dma_device_type": 2 00:22:30.283 } 00:22:30.283 ], 00:22:30.283 "driver_specific": {} 00:22:30.283 } 00:22:30.283 ] 00:22:30.542 11:04:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:30.542 11:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:30.542 11:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:30.542 11:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:30.542 11:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:30.542 11:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:30.542 11:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:30.542 11:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:30.542 11:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:30.542 11:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:30.542 11:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:30.542 11:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:30.542 11:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:30.542 11:04:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.542 11:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:30.542 11:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:30.542 "name": "Existed_Raid", 00:22:30.542 "uuid": "c681f03c-19ec-4a4c-b205-2ed29aba44b3", 00:22:30.542 "strip_size_kb": 64, 00:22:30.542 "state": "configuring", 00:22:30.542 "raid_level": "raid0", 00:22:30.542 "superblock": true, 00:22:30.542 "num_base_bdevs": 4, 00:22:30.542 "num_base_bdevs_discovered": 2, 00:22:30.542 "num_base_bdevs_operational": 4, 00:22:30.542 "base_bdevs_list": [ 00:22:30.542 { 00:22:30.542 "name": "BaseBdev1", 00:22:30.542 "uuid": "2f74013c-9c6b-43d9-ac6d-ddefdb0d0e95", 00:22:30.542 "is_configured": true, 00:22:30.542 "data_offset": 2048, 00:22:30.542 "data_size": 63488 00:22:30.542 }, 00:22:30.542 { 00:22:30.542 "name": "BaseBdev2", 00:22:30.542 "uuid": "f8419620-2633-4522-b2b4-ce65266270ef", 00:22:30.542 "is_configured": true, 00:22:30.542 "data_offset": 2048, 00:22:30.542 "data_size": 63488 00:22:30.542 }, 00:22:30.542 { 00:22:30.542 "name": "BaseBdev3", 00:22:30.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:30.542 "is_configured": false, 00:22:30.542 "data_offset": 0, 00:22:30.542 "data_size": 0 00:22:30.542 }, 00:22:30.542 { 00:22:30.542 "name": "BaseBdev4", 00:22:30.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:30.542 "is_configured": false, 00:22:30.542 "data_offset": 0, 00:22:30.542 "data_size": 0 00:22:30.542 } 00:22:30.542 ] 00:22:30.542 }' 00:22:30.542 11:04:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:30.542 11:04:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:31.476 11:04:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:31.476 [2024-07-25 11:04:38.500359] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:31.476 BaseBdev3 00:22:31.476 11:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:22:31.476 11:04:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:22:31.476 11:04:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:31.476 11:04:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:31.476 11:04:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:31.476 11:04:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:31.476 11:04:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:31.734 11:04:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:31.992 [ 00:22:31.992 { 00:22:31.992 "name": "BaseBdev3", 00:22:31.992 "aliases": [ 00:22:31.992 "508d159b-1b0a-451f-b5ca-99fa284a678d" 00:22:31.992 ], 00:22:31.992 "product_name": "Malloc disk", 00:22:31.993 "block_size": 512, 00:22:31.993 "num_blocks": 65536, 00:22:31.993 "uuid": "508d159b-1b0a-451f-b5ca-99fa284a678d", 00:22:31.993 "assigned_rate_limits": { 00:22:31.993 "rw_ios_per_sec": 0, 00:22:31.993 "rw_mbytes_per_sec": 0, 00:22:31.993 "r_mbytes_per_sec": 0, 00:22:31.993 "w_mbytes_per_sec": 0 00:22:31.993 }, 
00:22:31.993 "claimed": true, 00:22:31.993 "claim_type": "exclusive_write", 00:22:31.993 "zoned": false, 00:22:31.993 "supported_io_types": { 00:22:31.993 "read": true, 00:22:31.993 "write": true, 00:22:31.993 "unmap": true, 00:22:31.993 "flush": true, 00:22:31.993 "reset": true, 00:22:31.993 "nvme_admin": false, 00:22:31.993 "nvme_io": false, 00:22:31.993 "nvme_io_md": false, 00:22:31.993 "write_zeroes": true, 00:22:31.993 "zcopy": true, 00:22:31.993 "get_zone_info": false, 00:22:31.993 "zone_management": false, 00:22:31.993 "zone_append": false, 00:22:31.993 "compare": false, 00:22:31.993 "compare_and_write": false, 00:22:31.993 "abort": true, 00:22:31.993 "seek_hole": false, 00:22:31.993 "seek_data": false, 00:22:31.993 "copy": true, 00:22:31.993 "nvme_iov_md": false 00:22:31.993 }, 00:22:31.993 "memory_domains": [ 00:22:31.993 { 00:22:31.993 "dma_device_id": "system", 00:22:31.993 "dma_device_type": 1 00:22:31.993 }, 00:22:31.993 { 00:22:31.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:31.993 "dma_device_type": 2 00:22:31.993 } 00:22:31.993 ], 00:22:31.993 "driver_specific": {} 00:22:31.993 } 00:22:31.993 ] 00:22:31.993 11:04:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:31.993 11:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:31.993 11:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:31.993 11:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:31.993 11:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:31.993 11:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:31.993 11:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:31.993 11:04:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:31.993 11:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:31.993 11:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:31.993 11:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:31.993 11:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:31.993 11:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:31.993 11:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.993 11:04:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:32.251 11:04:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:32.251 "name": "Existed_Raid", 00:22:32.251 "uuid": "c681f03c-19ec-4a4c-b205-2ed29aba44b3", 00:22:32.251 "strip_size_kb": 64, 00:22:32.251 "state": "configuring", 00:22:32.251 "raid_level": "raid0", 00:22:32.251 "superblock": true, 00:22:32.251 "num_base_bdevs": 4, 00:22:32.251 "num_base_bdevs_discovered": 3, 00:22:32.251 "num_base_bdevs_operational": 4, 00:22:32.252 "base_bdevs_list": [ 00:22:32.252 { 00:22:32.252 "name": "BaseBdev1", 00:22:32.252 "uuid": "2f74013c-9c6b-43d9-ac6d-ddefdb0d0e95", 00:22:32.252 "is_configured": true, 00:22:32.252 "data_offset": 2048, 00:22:32.252 "data_size": 63488 00:22:32.252 }, 00:22:32.252 { 00:22:32.252 "name": "BaseBdev2", 00:22:32.252 "uuid": "f8419620-2633-4522-b2b4-ce65266270ef", 00:22:32.252 "is_configured": true, 00:22:32.252 "data_offset": 2048, 00:22:32.252 "data_size": 63488 00:22:32.252 }, 00:22:32.252 { 00:22:32.252 "name": 
"BaseBdev3", 00:22:32.252 "uuid": "508d159b-1b0a-451f-b5ca-99fa284a678d", 00:22:32.252 "is_configured": true, 00:22:32.252 "data_offset": 2048, 00:22:32.252 "data_size": 63488 00:22:32.252 }, 00:22:32.252 { 00:22:32.252 "name": "BaseBdev4", 00:22:32.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:32.252 "is_configured": false, 00:22:32.252 "data_offset": 0, 00:22:32.252 "data_size": 0 00:22:32.252 } 00:22:32.252 ] 00:22:32.252 }' 00:22:32.252 11:04:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:32.252 11:04:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:32.818 11:04:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:33.076 [2024-07-25 11:04:39.981079] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:33.076 [2024-07-25 11:04:39.981350] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:33.076 [2024-07-25 11:04:39.981370] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:33.076 [2024-07-25 11:04:39.981701] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:22:33.076 [2024-07-25 11:04:39.981924] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:33.076 [2024-07-25 11:04:39.981942] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:33.076 [2024-07-25 11:04:39.982127] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:33.076 BaseBdev4 00:22:33.076 11:04:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:22:33.076 11:04:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:22:33.076 11:04:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:33.076 11:04:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:33.076 11:04:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:33.076 11:04:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:33.076 11:04:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:33.076 11:04:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:33.334 [ 00:22:33.334 { 00:22:33.334 "name": "BaseBdev4", 00:22:33.334 "aliases": [ 00:22:33.334 "23a4ca4f-d6e7-4c76-8adc-9d75b62df231" 00:22:33.334 ], 00:22:33.334 "product_name": "Malloc disk", 00:22:33.334 "block_size": 512, 00:22:33.334 "num_blocks": 65536, 00:22:33.334 "uuid": "23a4ca4f-d6e7-4c76-8adc-9d75b62df231", 00:22:33.334 "assigned_rate_limits": { 00:22:33.334 "rw_ios_per_sec": 0, 00:22:33.334 "rw_mbytes_per_sec": 0, 00:22:33.334 "r_mbytes_per_sec": 0, 00:22:33.334 "w_mbytes_per_sec": 0 00:22:33.334 }, 00:22:33.334 "claimed": true, 00:22:33.334 "claim_type": "exclusive_write", 00:22:33.334 "zoned": false, 00:22:33.334 "supported_io_types": { 00:22:33.334 "read": true, 00:22:33.334 "write": true, 00:22:33.334 "unmap": true, 00:22:33.334 "flush": true, 00:22:33.334 "reset": true, 00:22:33.334 "nvme_admin": false, 00:22:33.334 "nvme_io": false, 00:22:33.334 "nvme_io_md": false, 00:22:33.334 "write_zeroes": true, 00:22:33.334 "zcopy": true, 00:22:33.334 "get_zone_info": false, 00:22:33.334 "zone_management": false, 
00:22:33.334 "zone_append": false, 00:22:33.334 "compare": false, 00:22:33.334 "compare_and_write": false, 00:22:33.334 "abort": true, 00:22:33.334 "seek_hole": false, 00:22:33.334 "seek_data": false, 00:22:33.334 "copy": true, 00:22:33.334 "nvme_iov_md": false 00:22:33.334 }, 00:22:33.334 "memory_domains": [ 00:22:33.334 { 00:22:33.334 "dma_device_id": "system", 00:22:33.334 "dma_device_type": 1 00:22:33.334 }, 00:22:33.334 { 00:22:33.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:33.334 "dma_device_type": 2 00:22:33.334 } 00:22:33.334 ], 00:22:33.334 "driver_specific": {} 00:22:33.334 } 00:22:33.334 ] 00:22:33.334 11:04:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:33.334 11:04:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:33.334 11:04:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:33.334 11:04:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:22:33.335 11:04:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:33.335 11:04:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:33.335 11:04:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:33.335 11:04:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:33.335 11:04:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:33.335 11:04:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:33.335 11:04:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:33.335 11:04:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:22:33.335 11:04:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:33.335 11:04:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.335 11:04:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:33.592 11:04:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:33.592 "name": "Existed_Raid", 00:22:33.592 "uuid": "c681f03c-19ec-4a4c-b205-2ed29aba44b3", 00:22:33.592 "strip_size_kb": 64, 00:22:33.592 "state": "online", 00:22:33.592 "raid_level": "raid0", 00:22:33.592 "superblock": true, 00:22:33.592 "num_base_bdevs": 4, 00:22:33.592 "num_base_bdevs_discovered": 4, 00:22:33.592 "num_base_bdevs_operational": 4, 00:22:33.592 "base_bdevs_list": [ 00:22:33.592 { 00:22:33.592 "name": "BaseBdev1", 00:22:33.592 "uuid": "2f74013c-9c6b-43d9-ac6d-ddefdb0d0e95", 00:22:33.592 "is_configured": true, 00:22:33.592 "data_offset": 2048, 00:22:33.592 "data_size": 63488 00:22:33.592 }, 00:22:33.592 { 00:22:33.592 "name": "BaseBdev2", 00:22:33.592 "uuid": "f8419620-2633-4522-b2b4-ce65266270ef", 00:22:33.592 "is_configured": true, 00:22:33.592 "data_offset": 2048, 00:22:33.592 "data_size": 63488 00:22:33.592 }, 00:22:33.592 { 00:22:33.592 "name": "BaseBdev3", 00:22:33.592 "uuid": "508d159b-1b0a-451f-b5ca-99fa284a678d", 00:22:33.592 "is_configured": true, 00:22:33.592 "data_offset": 2048, 00:22:33.592 "data_size": 63488 00:22:33.592 }, 00:22:33.592 { 00:22:33.592 "name": "BaseBdev4", 00:22:33.592 "uuid": "23a4ca4f-d6e7-4c76-8adc-9d75b62df231", 00:22:33.592 "is_configured": true, 00:22:33.592 "data_offset": 2048, 00:22:33.592 "data_size": 63488 00:22:33.592 } 00:22:33.592 ] 00:22:33.592 }' 00:22:33.592 11:04:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:22:33.592 11:04:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:34.158 11:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:22:34.158 11:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:34.158 11:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:34.158 11:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:34.158 11:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:34.158 11:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:22:34.158 11:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:34.158 11:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:34.418 [2024-07-25 11:04:41.393428] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:34.418 11:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:34.418 "name": "Existed_Raid", 00:22:34.418 "aliases": [ 00:22:34.418 "c681f03c-19ec-4a4c-b205-2ed29aba44b3" 00:22:34.418 ], 00:22:34.418 "product_name": "Raid Volume", 00:22:34.418 "block_size": 512, 00:22:34.418 "num_blocks": 253952, 00:22:34.418 "uuid": "c681f03c-19ec-4a4c-b205-2ed29aba44b3", 00:22:34.418 "assigned_rate_limits": { 00:22:34.418 "rw_ios_per_sec": 0, 00:22:34.418 "rw_mbytes_per_sec": 0, 00:22:34.418 "r_mbytes_per_sec": 0, 00:22:34.418 "w_mbytes_per_sec": 0 00:22:34.418 }, 00:22:34.418 "claimed": false, 00:22:34.418 "zoned": false, 00:22:34.418 "supported_io_types": { 00:22:34.418 "read": true, 00:22:34.418 "write": true, 
00:22:34.418 "unmap": true, 00:22:34.418 "flush": true, 00:22:34.418 "reset": true, 00:22:34.418 "nvme_admin": false, 00:22:34.418 "nvme_io": false, 00:22:34.418 "nvme_io_md": false, 00:22:34.418 "write_zeroes": true, 00:22:34.418 "zcopy": false, 00:22:34.418 "get_zone_info": false, 00:22:34.418 "zone_management": false, 00:22:34.418 "zone_append": false, 00:22:34.418 "compare": false, 00:22:34.418 "compare_and_write": false, 00:22:34.418 "abort": false, 00:22:34.418 "seek_hole": false, 00:22:34.418 "seek_data": false, 00:22:34.418 "copy": false, 00:22:34.418 "nvme_iov_md": false 00:22:34.418 }, 00:22:34.418 "memory_domains": [ 00:22:34.418 { 00:22:34.418 "dma_device_id": "system", 00:22:34.418 "dma_device_type": 1 00:22:34.418 }, 00:22:34.418 { 00:22:34.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:34.418 "dma_device_type": 2 00:22:34.418 }, 00:22:34.418 { 00:22:34.418 "dma_device_id": "system", 00:22:34.418 "dma_device_type": 1 00:22:34.418 }, 00:22:34.418 { 00:22:34.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:34.418 "dma_device_type": 2 00:22:34.418 }, 00:22:34.418 { 00:22:34.418 "dma_device_id": "system", 00:22:34.418 "dma_device_type": 1 00:22:34.418 }, 00:22:34.418 { 00:22:34.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:34.418 "dma_device_type": 2 00:22:34.418 }, 00:22:34.418 { 00:22:34.418 "dma_device_id": "system", 00:22:34.418 "dma_device_type": 1 00:22:34.418 }, 00:22:34.418 { 00:22:34.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:34.418 "dma_device_type": 2 00:22:34.418 } 00:22:34.418 ], 00:22:34.418 "driver_specific": { 00:22:34.418 "raid": { 00:22:34.418 "uuid": "c681f03c-19ec-4a4c-b205-2ed29aba44b3", 00:22:34.418 "strip_size_kb": 64, 00:22:34.418 "state": "online", 00:22:34.418 "raid_level": "raid0", 00:22:34.418 "superblock": true, 00:22:34.418 "num_base_bdevs": 4, 00:22:34.418 "num_base_bdevs_discovered": 4, 00:22:34.418 "num_base_bdevs_operational": 4, 00:22:34.418 "base_bdevs_list": [ 00:22:34.418 { 00:22:34.418 "name": 
"BaseBdev1", 00:22:34.418 "uuid": "2f74013c-9c6b-43d9-ac6d-ddefdb0d0e95", 00:22:34.418 "is_configured": true, 00:22:34.418 "data_offset": 2048, 00:22:34.418 "data_size": 63488 00:22:34.418 }, 00:22:34.418 { 00:22:34.418 "name": "BaseBdev2", 00:22:34.418 "uuid": "f8419620-2633-4522-b2b4-ce65266270ef", 00:22:34.418 "is_configured": true, 00:22:34.418 "data_offset": 2048, 00:22:34.418 "data_size": 63488 00:22:34.418 }, 00:22:34.418 { 00:22:34.418 "name": "BaseBdev3", 00:22:34.418 "uuid": "508d159b-1b0a-451f-b5ca-99fa284a678d", 00:22:34.418 "is_configured": true, 00:22:34.418 "data_offset": 2048, 00:22:34.418 "data_size": 63488 00:22:34.418 }, 00:22:34.418 { 00:22:34.418 "name": "BaseBdev4", 00:22:34.418 "uuid": "23a4ca4f-d6e7-4c76-8adc-9d75b62df231", 00:22:34.418 "is_configured": true, 00:22:34.418 "data_offset": 2048, 00:22:34.418 "data_size": 63488 00:22:34.418 } 00:22:34.418 ] 00:22:34.418 } 00:22:34.418 } 00:22:34.418 }' 00:22:34.418 11:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:34.418 11:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:22:34.418 BaseBdev2 00:22:34.418 BaseBdev3 00:22:34.418 BaseBdev4' 00:22:34.418 11:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:34.418 11:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:22:34.418 11:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:34.677 11:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:34.677 "name": "BaseBdev1", 00:22:34.677 "aliases": [ 00:22:34.677 "2f74013c-9c6b-43d9-ac6d-ddefdb0d0e95" 00:22:34.677 ], 00:22:34.677 "product_name": "Malloc 
disk", 00:22:34.677 "block_size": 512, 00:22:34.677 "num_blocks": 65536, 00:22:34.677 "uuid": "2f74013c-9c6b-43d9-ac6d-ddefdb0d0e95", 00:22:34.677 "assigned_rate_limits": { 00:22:34.677 "rw_ios_per_sec": 0, 00:22:34.677 "rw_mbytes_per_sec": 0, 00:22:34.677 "r_mbytes_per_sec": 0, 00:22:34.677 "w_mbytes_per_sec": 0 00:22:34.677 }, 00:22:34.677 "claimed": true, 00:22:34.677 "claim_type": "exclusive_write", 00:22:34.678 "zoned": false, 00:22:34.678 "supported_io_types": { 00:22:34.678 "read": true, 00:22:34.678 "write": true, 00:22:34.678 "unmap": true, 00:22:34.678 "flush": true, 00:22:34.678 "reset": true, 00:22:34.678 "nvme_admin": false, 00:22:34.678 "nvme_io": false, 00:22:34.678 "nvme_io_md": false, 00:22:34.678 "write_zeroes": true, 00:22:34.678 "zcopy": true, 00:22:34.678 "get_zone_info": false, 00:22:34.678 "zone_management": false, 00:22:34.678 "zone_append": false, 00:22:34.678 "compare": false, 00:22:34.678 "compare_and_write": false, 00:22:34.678 "abort": true, 00:22:34.678 "seek_hole": false, 00:22:34.678 "seek_data": false, 00:22:34.678 "copy": true, 00:22:34.678 "nvme_iov_md": false 00:22:34.678 }, 00:22:34.678 "memory_domains": [ 00:22:34.678 { 00:22:34.678 "dma_device_id": "system", 00:22:34.678 "dma_device_type": 1 00:22:34.678 }, 00:22:34.678 { 00:22:34.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:34.678 "dma_device_type": 2 00:22:34.678 } 00:22:34.678 ], 00:22:34.678 "driver_specific": {} 00:22:34.678 }' 00:22:34.678 11:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:34.678 11:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:34.678 11:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:34.678 11:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:34.678 11:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:34.937 11:04:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:34.937 11:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:34.937 11:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:34.937 11:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:34.937 11:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:34.937 11:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:34.937 11:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:34.937 11:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:34.937 11:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:34.937 11:04:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:35.197 11:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:35.197 "name": "BaseBdev2", 00:22:35.197 "aliases": [ 00:22:35.197 "f8419620-2633-4522-b2b4-ce65266270ef" 00:22:35.197 ], 00:22:35.197 "product_name": "Malloc disk", 00:22:35.197 "block_size": 512, 00:22:35.197 "num_blocks": 65536, 00:22:35.197 "uuid": "f8419620-2633-4522-b2b4-ce65266270ef", 00:22:35.197 "assigned_rate_limits": { 00:22:35.197 "rw_ios_per_sec": 0, 00:22:35.197 "rw_mbytes_per_sec": 0, 00:22:35.197 "r_mbytes_per_sec": 0, 00:22:35.197 "w_mbytes_per_sec": 0 00:22:35.197 }, 00:22:35.197 "claimed": true, 00:22:35.197 "claim_type": "exclusive_write", 00:22:35.197 "zoned": false, 00:22:35.197 "supported_io_types": { 00:22:35.197 "read": true, 00:22:35.197 "write": true, 00:22:35.197 "unmap": true, 00:22:35.197 
"flush": true, 00:22:35.197 "reset": true, 00:22:35.197 "nvme_admin": false, 00:22:35.197 "nvme_io": false, 00:22:35.197 "nvme_io_md": false, 00:22:35.197 "write_zeroes": true, 00:22:35.197 "zcopy": true, 00:22:35.197 "get_zone_info": false, 00:22:35.197 "zone_management": false, 00:22:35.197 "zone_append": false, 00:22:35.197 "compare": false, 00:22:35.197 "compare_and_write": false, 00:22:35.197 "abort": true, 00:22:35.197 "seek_hole": false, 00:22:35.197 "seek_data": false, 00:22:35.197 "copy": true, 00:22:35.197 "nvme_iov_md": false 00:22:35.197 }, 00:22:35.197 "memory_domains": [ 00:22:35.197 { 00:22:35.197 "dma_device_id": "system", 00:22:35.197 "dma_device_type": 1 00:22:35.197 }, 00:22:35.197 { 00:22:35.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:35.197 "dma_device_type": 2 00:22:35.197 } 00:22:35.197 ], 00:22:35.197 "driver_specific": {} 00:22:35.197 }' 00:22:35.197 11:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:35.197 11:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:35.197 11:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:35.197 11:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:35.457 11:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:35.457 11:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:35.457 11:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:35.457 11:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:35.457 11:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:35.457 11:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:35.457 11:04:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:35.457 11:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:35.457 11:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:35.457 11:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:35.457 11:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:35.716 11:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:35.716 "name": "BaseBdev3", 00:22:35.716 "aliases": [ 00:22:35.716 "508d159b-1b0a-451f-b5ca-99fa284a678d" 00:22:35.716 ], 00:22:35.716 "product_name": "Malloc disk", 00:22:35.716 "block_size": 512, 00:22:35.716 "num_blocks": 65536, 00:22:35.716 "uuid": "508d159b-1b0a-451f-b5ca-99fa284a678d", 00:22:35.716 "assigned_rate_limits": { 00:22:35.716 "rw_ios_per_sec": 0, 00:22:35.716 "rw_mbytes_per_sec": 0, 00:22:35.716 "r_mbytes_per_sec": 0, 00:22:35.716 "w_mbytes_per_sec": 0 00:22:35.716 }, 00:22:35.716 "claimed": true, 00:22:35.716 "claim_type": "exclusive_write", 00:22:35.716 "zoned": false, 00:22:35.716 "supported_io_types": { 00:22:35.716 "read": true, 00:22:35.716 "write": true, 00:22:35.716 "unmap": true, 00:22:35.716 "flush": true, 00:22:35.716 "reset": true, 00:22:35.716 "nvme_admin": false, 00:22:35.716 "nvme_io": false, 00:22:35.716 "nvme_io_md": false, 00:22:35.717 "write_zeroes": true, 00:22:35.717 "zcopy": true, 00:22:35.717 "get_zone_info": false, 00:22:35.717 "zone_management": false, 00:22:35.717 "zone_append": false, 00:22:35.717 "compare": false, 00:22:35.717 "compare_and_write": false, 00:22:35.717 "abort": true, 00:22:35.717 "seek_hole": false, 00:22:35.717 "seek_data": false, 00:22:35.717 "copy": true, 00:22:35.717 "nvme_iov_md": 
false 00:22:35.717 }, 00:22:35.717 "memory_domains": [ 00:22:35.717 { 00:22:35.717 "dma_device_id": "system", 00:22:35.717 "dma_device_type": 1 00:22:35.717 }, 00:22:35.717 { 00:22:35.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:35.717 "dma_device_type": 2 00:22:35.717 } 00:22:35.717 ], 00:22:35.717 "driver_specific": {} 00:22:35.717 }' 00:22:35.717 11:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:35.717 11:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:35.717 11:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:35.717 11:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:35.976 11:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:35.976 11:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:35.976 11:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:35.976 11:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:35.976 11:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:35.976 11:04:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:35.976 11:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:35.976 11:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:35.976 11:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:35.976 11:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:35.976 11:04:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:36.544 11:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:36.544 "name": "BaseBdev4", 00:22:36.544 "aliases": [ 00:22:36.544 "23a4ca4f-d6e7-4c76-8adc-9d75b62df231" 00:22:36.544 ], 00:22:36.544 "product_name": "Malloc disk", 00:22:36.544 "block_size": 512, 00:22:36.544 "num_blocks": 65536, 00:22:36.544 "uuid": "23a4ca4f-d6e7-4c76-8adc-9d75b62df231", 00:22:36.544 "assigned_rate_limits": { 00:22:36.544 "rw_ios_per_sec": 0, 00:22:36.544 "rw_mbytes_per_sec": 0, 00:22:36.544 "r_mbytes_per_sec": 0, 00:22:36.544 "w_mbytes_per_sec": 0 00:22:36.544 }, 00:22:36.544 "claimed": true, 00:22:36.544 "claim_type": "exclusive_write", 00:22:36.544 "zoned": false, 00:22:36.544 "supported_io_types": { 00:22:36.544 "read": true, 00:22:36.544 "write": true, 00:22:36.544 "unmap": true, 00:22:36.544 "flush": true, 00:22:36.544 "reset": true, 00:22:36.544 "nvme_admin": false, 00:22:36.544 "nvme_io": false, 00:22:36.544 "nvme_io_md": false, 00:22:36.544 "write_zeroes": true, 00:22:36.544 "zcopy": true, 00:22:36.544 "get_zone_info": false, 00:22:36.544 "zone_management": false, 00:22:36.544 "zone_append": false, 00:22:36.544 "compare": false, 00:22:36.544 "compare_and_write": false, 00:22:36.544 "abort": true, 00:22:36.544 "seek_hole": false, 00:22:36.544 "seek_data": false, 00:22:36.544 "copy": true, 00:22:36.544 "nvme_iov_md": false 00:22:36.544 }, 00:22:36.544 "memory_domains": [ 00:22:36.544 { 00:22:36.544 "dma_device_id": "system", 00:22:36.544 "dma_device_type": 1 00:22:36.544 }, 00:22:36.544 { 00:22:36.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:36.544 "dma_device_type": 2 00:22:36.544 } 00:22:36.544 ], 00:22:36.544 "driver_specific": {} 00:22:36.544 }' 00:22:36.544 11:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:36.544 11:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:22:36.544 11:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:36.544 11:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:36.802 11:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:36.802 11:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:36.802 11:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:36.802 11:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:36.802 11:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:36.802 11:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:36.802 11:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:36.802 11:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:36.803 11:04:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:37.062 [2024-07-25 11:04:44.076332] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:37.062 [2024-07-25 11:04:44.076372] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:37.062 [2024-07-25 11:04:44.076430] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:37.062 11:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:22:37.062 11:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:22:37.062 11:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:37.062 11:04:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:22:37.062 11:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:22:37.062 11:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:22:37.062 11:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:37.062 11:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:22:37.062 11:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:37.062 11:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:37.062 11:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:37.062 11:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:37.062 11:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:37.062 11:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:37.062 11:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:37.062 11:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:37.062 11:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:37.322 11:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:37.322 "name": "Existed_Raid", 00:22:37.322 "uuid": "c681f03c-19ec-4a4c-b205-2ed29aba44b3", 00:22:37.322 "strip_size_kb": 64, 00:22:37.322 "state": "offline", 00:22:37.322 
"raid_level": "raid0", 00:22:37.322 "superblock": true, 00:22:37.322 "num_base_bdevs": 4, 00:22:37.322 "num_base_bdevs_discovered": 3, 00:22:37.322 "num_base_bdevs_operational": 3, 00:22:37.322 "base_bdevs_list": [ 00:22:37.322 { 00:22:37.322 "name": null, 00:22:37.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:37.322 "is_configured": false, 00:22:37.322 "data_offset": 2048, 00:22:37.322 "data_size": 63488 00:22:37.322 }, 00:22:37.322 { 00:22:37.322 "name": "BaseBdev2", 00:22:37.322 "uuid": "f8419620-2633-4522-b2b4-ce65266270ef", 00:22:37.322 "is_configured": true, 00:22:37.322 "data_offset": 2048, 00:22:37.322 "data_size": 63488 00:22:37.322 }, 00:22:37.322 { 00:22:37.322 "name": "BaseBdev3", 00:22:37.322 "uuid": "508d159b-1b0a-451f-b5ca-99fa284a678d", 00:22:37.322 "is_configured": true, 00:22:37.322 "data_offset": 2048, 00:22:37.322 "data_size": 63488 00:22:37.322 }, 00:22:37.322 { 00:22:37.322 "name": "BaseBdev4", 00:22:37.322 "uuid": "23a4ca4f-d6e7-4c76-8adc-9d75b62df231", 00:22:37.322 "is_configured": true, 00:22:37.322 "data_offset": 2048, 00:22:37.322 "data_size": 63488 00:22:37.322 } 00:22:37.322 ] 00:22:37.322 }' 00:22:37.322 11:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:37.322 11:04:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:37.891 11:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:22:37.891 11:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:37.891 11:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:37.891 11:04:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:38.150 11:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # 
raid_bdev=Existed_Raid 00:22:38.150 11:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:38.150 11:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:38.410 [2024-07-25 11:04:45.346234] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:38.410 11:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:38.410 11:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:38.410 11:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:38.410 11:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:38.669 11:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:38.669 11:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:38.669 11:04:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:38.928 [2024-07-25 11:04:45.922765] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:39.188 11:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:39.188 11:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:39.188 11:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.188 11:04:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:39.188 11:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:39.188 11:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:39.188 11:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:22:39.447 [2024-07-25 11:04:46.487538] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:39.447 [2024-07-25 11:04:46.487591] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:39.706 11:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:39.706 11:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:39.706 11:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.706 11:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:22:39.966 11:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:22:39.966 11:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:22:39.966 11:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:22:39.966 11:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:22:39.966 11:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:39.966 11:04:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:39.966 BaseBdev2 00:22:40.225 11:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:22:40.225 11:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:22:40.225 11:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:40.225 11:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:40.225 11:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:40.225 11:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:40.225 11:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:40.225 11:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:40.485 [ 00:22:40.485 { 00:22:40.485 "name": "BaseBdev2", 00:22:40.485 "aliases": [ 00:22:40.485 "0be9cb5f-49ea-4c16-b744-8d840b24ad31" 00:22:40.485 ], 00:22:40.485 "product_name": "Malloc disk", 00:22:40.485 "block_size": 512, 00:22:40.485 "num_blocks": 65536, 00:22:40.485 "uuid": "0be9cb5f-49ea-4c16-b744-8d840b24ad31", 00:22:40.485 "assigned_rate_limits": { 00:22:40.485 "rw_ios_per_sec": 0, 00:22:40.485 "rw_mbytes_per_sec": 0, 00:22:40.485 "r_mbytes_per_sec": 0, 00:22:40.485 "w_mbytes_per_sec": 0 00:22:40.485 }, 00:22:40.485 "claimed": false, 00:22:40.485 "zoned": false, 00:22:40.485 "supported_io_types": { 00:22:40.485 "read": true, 00:22:40.485 "write": true, 00:22:40.485 "unmap": true, 00:22:40.485 "flush": 
true, 00:22:40.485 "reset": true, 00:22:40.485 "nvme_admin": false, 00:22:40.485 "nvme_io": false, 00:22:40.485 "nvme_io_md": false, 00:22:40.485 "write_zeroes": true, 00:22:40.485 "zcopy": true, 00:22:40.485 "get_zone_info": false, 00:22:40.485 "zone_management": false, 00:22:40.485 "zone_append": false, 00:22:40.485 "compare": false, 00:22:40.485 "compare_and_write": false, 00:22:40.485 "abort": true, 00:22:40.485 "seek_hole": false, 00:22:40.485 "seek_data": false, 00:22:40.485 "copy": true, 00:22:40.485 "nvme_iov_md": false 00:22:40.485 }, 00:22:40.485 "memory_domains": [ 00:22:40.485 { 00:22:40.485 "dma_device_id": "system", 00:22:40.485 "dma_device_type": 1 00:22:40.485 }, 00:22:40.485 { 00:22:40.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:40.485 "dma_device_type": 2 00:22:40.485 } 00:22:40.485 ], 00:22:40.485 "driver_specific": {} 00:22:40.485 } 00:22:40.485 ] 00:22:40.485 11:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:40.485 11:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:40.485 11:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:40.485 11:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:40.745 BaseBdev3 00:22:40.745 11:04:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:22:40.745 11:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:22:40.745 11:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:40.745 11:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:40.745 11:04:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:40.745 11:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:40.745 11:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:41.004 11:04:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:41.305 [ 00:22:41.305 { 00:22:41.305 "name": "BaseBdev3", 00:22:41.305 "aliases": [ 00:22:41.305 "aec187f2-3ca0-4448-8f87-0db71a7576ff" 00:22:41.305 ], 00:22:41.305 "product_name": "Malloc disk", 00:22:41.305 "block_size": 512, 00:22:41.305 "num_blocks": 65536, 00:22:41.305 "uuid": "aec187f2-3ca0-4448-8f87-0db71a7576ff", 00:22:41.305 "assigned_rate_limits": { 00:22:41.305 "rw_ios_per_sec": 0, 00:22:41.305 "rw_mbytes_per_sec": 0, 00:22:41.305 "r_mbytes_per_sec": 0, 00:22:41.305 "w_mbytes_per_sec": 0 00:22:41.305 }, 00:22:41.305 "claimed": false, 00:22:41.305 "zoned": false, 00:22:41.305 "supported_io_types": { 00:22:41.305 "read": true, 00:22:41.305 "write": true, 00:22:41.305 "unmap": true, 00:22:41.305 "flush": true, 00:22:41.305 "reset": true, 00:22:41.305 "nvme_admin": false, 00:22:41.305 "nvme_io": false, 00:22:41.305 "nvme_io_md": false, 00:22:41.305 "write_zeroes": true, 00:22:41.305 "zcopy": true, 00:22:41.305 "get_zone_info": false, 00:22:41.305 "zone_management": false, 00:22:41.305 "zone_append": false, 00:22:41.305 "compare": false, 00:22:41.305 "compare_and_write": false, 00:22:41.305 "abort": true, 00:22:41.305 "seek_hole": false, 00:22:41.305 "seek_data": false, 00:22:41.305 "copy": true, 00:22:41.305 "nvme_iov_md": false 00:22:41.305 }, 00:22:41.305 "memory_domains": [ 00:22:41.305 { 00:22:41.305 "dma_device_id": "system", 00:22:41.305 "dma_device_type": 1 
00:22:41.305 }, 00:22:41.305 { 00:22:41.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:41.305 "dma_device_type": 2 00:22:41.305 } 00:22:41.305 ], 00:22:41.305 "driver_specific": {} 00:22:41.305 } 00:22:41.305 ] 00:22:41.305 11:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:41.305 11:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:41.305 11:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:41.305 11:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:41.305 BaseBdev4 00:22:41.578 11:04:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:22:41.578 11:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:22:41.578 11:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:41.578 11:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:41.578 11:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:41.578 11:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:41.578 11:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:41.836 11:04:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:42.096 [ 00:22:42.096 { 00:22:42.096 "name": "BaseBdev4", 00:22:42.096 "aliases": [ 
00:22:42.096 "5bfa67ed-6781-4dc0-a817-1da4cf86f76d" 00:22:42.096 ], 00:22:42.096 "product_name": "Malloc disk", 00:22:42.096 "block_size": 512, 00:22:42.096 "num_blocks": 65536, 00:22:42.096 "uuid": "5bfa67ed-6781-4dc0-a817-1da4cf86f76d", 00:22:42.096 "assigned_rate_limits": { 00:22:42.096 "rw_ios_per_sec": 0, 00:22:42.096 "rw_mbytes_per_sec": 0, 00:22:42.096 "r_mbytes_per_sec": 0, 00:22:42.096 "w_mbytes_per_sec": 0 00:22:42.096 }, 00:22:42.096 "claimed": false, 00:22:42.096 "zoned": false, 00:22:42.096 "supported_io_types": { 00:22:42.096 "read": true, 00:22:42.096 "write": true, 00:22:42.096 "unmap": true, 00:22:42.096 "flush": true, 00:22:42.096 "reset": true, 00:22:42.096 "nvme_admin": false, 00:22:42.096 "nvme_io": false, 00:22:42.096 "nvme_io_md": false, 00:22:42.096 "write_zeroes": true, 00:22:42.096 "zcopy": true, 00:22:42.096 "get_zone_info": false, 00:22:42.096 "zone_management": false, 00:22:42.096 "zone_append": false, 00:22:42.096 "compare": false, 00:22:42.096 "compare_and_write": false, 00:22:42.096 "abort": true, 00:22:42.096 "seek_hole": false, 00:22:42.096 "seek_data": false, 00:22:42.096 "copy": true, 00:22:42.096 "nvme_iov_md": false 00:22:42.096 }, 00:22:42.096 "memory_domains": [ 00:22:42.096 { 00:22:42.096 "dma_device_id": "system", 00:22:42.096 "dma_device_type": 1 00:22:42.096 }, 00:22:42.096 { 00:22:42.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:42.096 "dma_device_type": 2 00:22:42.096 } 00:22:42.096 ], 00:22:42.096 "driver_specific": {} 00:22:42.096 } 00:22:42.096 ] 00:22:42.096 11:04:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:42.096 11:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:42.096 11:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:42.096 11:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:42.355 [2024-07-25 11:04:49.358073] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:42.355 [2024-07-25 11:04:49.358120] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:42.355 [2024-07-25 11:04:49.358162] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:42.355 [2024-07-25 11:04:49.360471] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:42.355 [2024-07-25 11:04:49.360538] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:42.355 11:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:42.355 11:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:42.356 11:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:42.356 11:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:42.356 11:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:42.356 11:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:42.356 11:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:42.356 11:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:42.356 11:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:42.356 11:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- 
# local tmp 00:22:42.356 11:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.356 11:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:42.615 11:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:42.615 "name": "Existed_Raid", 00:22:42.615 "uuid": "3de0f5e5-b6fc-4f00-a5db-b3501d5b32e2", 00:22:42.615 "strip_size_kb": 64, 00:22:42.615 "state": "configuring", 00:22:42.615 "raid_level": "raid0", 00:22:42.615 "superblock": true, 00:22:42.615 "num_base_bdevs": 4, 00:22:42.615 "num_base_bdevs_discovered": 3, 00:22:42.615 "num_base_bdevs_operational": 4, 00:22:42.615 "base_bdevs_list": [ 00:22:42.615 { 00:22:42.615 "name": "BaseBdev1", 00:22:42.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:42.615 "is_configured": false, 00:22:42.615 "data_offset": 0, 00:22:42.615 "data_size": 0 00:22:42.615 }, 00:22:42.615 { 00:22:42.615 "name": "BaseBdev2", 00:22:42.615 "uuid": "0be9cb5f-49ea-4c16-b744-8d840b24ad31", 00:22:42.615 "is_configured": true, 00:22:42.615 "data_offset": 2048, 00:22:42.615 "data_size": 63488 00:22:42.615 }, 00:22:42.615 { 00:22:42.615 "name": "BaseBdev3", 00:22:42.615 "uuid": "aec187f2-3ca0-4448-8f87-0db71a7576ff", 00:22:42.615 "is_configured": true, 00:22:42.615 "data_offset": 2048, 00:22:42.615 "data_size": 63488 00:22:42.615 }, 00:22:42.615 { 00:22:42.615 "name": "BaseBdev4", 00:22:42.615 "uuid": "5bfa67ed-6781-4dc0-a817-1da4cf86f76d", 00:22:42.615 "is_configured": true, 00:22:42.615 "data_offset": 2048, 00:22:42.615 "data_size": 63488 00:22:42.615 } 00:22:42.615 ] 00:22:42.615 }' 00:22:42.615 11:04:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:42.615 11:04:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:22:43.185 11:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:43.444 [2024-07-25 11:04:50.368758] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:43.445 11:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:43.445 11:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:43.445 11:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:43.445 11:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:43.445 11:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:43.445 11:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:43.445 11:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:43.445 11:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:43.445 11:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:43.445 11:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:43.445 11:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.445 11:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:43.704 11:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:43.704 "name": "Existed_Raid", 
00:22:43.704 "uuid": "3de0f5e5-b6fc-4f00-a5db-b3501d5b32e2", 00:22:43.704 "strip_size_kb": 64, 00:22:43.704 "state": "configuring", 00:22:43.704 "raid_level": "raid0", 00:22:43.704 "superblock": true, 00:22:43.704 "num_base_bdevs": 4, 00:22:43.704 "num_base_bdevs_discovered": 2, 00:22:43.704 "num_base_bdevs_operational": 4, 00:22:43.704 "base_bdevs_list": [ 00:22:43.704 { 00:22:43.704 "name": "BaseBdev1", 00:22:43.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:43.704 "is_configured": false, 00:22:43.704 "data_offset": 0, 00:22:43.704 "data_size": 0 00:22:43.704 }, 00:22:43.704 { 00:22:43.704 "name": null, 00:22:43.704 "uuid": "0be9cb5f-49ea-4c16-b744-8d840b24ad31", 00:22:43.704 "is_configured": false, 00:22:43.704 "data_offset": 2048, 00:22:43.704 "data_size": 63488 00:22:43.704 }, 00:22:43.704 { 00:22:43.704 "name": "BaseBdev3", 00:22:43.704 "uuid": "aec187f2-3ca0-4448-8f87-0db71a7576ff", 00:22:43.704 "is_configured": true, 00:22:43.704 "data_offset": 2048, 00:22:43.704 "data_size": 63488 00:22:43.704 }, 00:22:43.704 { 00:22:43.704 "name": "BaseBdev4", 00:22:43.704 "uuid": "5bfa67ed-6781-4dc0-a817-1da4cf86f76d", 00:22:43.704 "is_configured": true, 00:22:43.704 "data_offset": 2048, 00:22:43.704 "data_size": 63488 00:22:43.704 } 00:22:43.704 ] 00:22:43.704 }' 00:22:43.704 11:04:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:43.704 11:04:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:44.272 11:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:44.272 11:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.272 11:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:22:44.273 11:04:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:44.532 [2024-07-25 11:04:51.630553] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:44.532 BaseBdev1 00:22:44.532 11:04:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:22:44.532 11:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:22:44.532 11:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:44.532 11:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:44.532 11:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:44.532 11:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:44.532 11:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:44.791 11:04:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:45.051 [ 00:22:45.051 { 00:22:45.051 "name": "BaseBdev1", 00:22:45.051 "aliases": [ 00:22:45.051 "4209e508-837a-448b-b70a-5d228f6f71a3" 00:22:45.051 ], 00:22:45.051 "product_name": "Malloc disk", 00:22:45.051 "block_size": 512, 00:22:45.051 "num_blocks": 65536, 00:22:45.051 "uuid": "4209e508-837a-448b-b70a-5d228f6f71a3", 00:22:45.051 "assigned_rate_limits": { 00:22:45.051 "rw_ios_per_sec": 0, 00:22:45.051 "rw_mbytes_per_sec": 0, 00:22:45.051 "r_mbytes_per_sec": 0, 00:22:45.051 "w_mbytes_per_sec": 0 00:22:45.051 }, 
00:22:45.051 "claimed": true, 00:22:45.051 "claim_type": "exclusive_write", 00:22:45.051 "zoned": false, 00:22:45.051 "supported_io_types": { 00:22:45.051 "read": true, 00:22:45.051 "write": true, 00:22:45.051 "unmap": true, 00:22:45.051 "flush": true, 00:22:45.051 "reset": true, 00:22:45.051 "nvme_admin": false, 00:22:45.051 "nvme_io": false, 00:22:45.051 "nvme_io_md": false, 00:22:45.051 "write_zeroes": true, 00:22:45.051 "zcopy": true, 00:22:45.051 "get_zone_info": false, 00:22:45.051 "zone_management": false, 00:22:45.051 "zone_append": false, 00:22:45.051 "compare": false, 00:22:45.051 "compare_and_write": false, 00:22:45.051 "abort": true, 00:22:45.051 "seek_hole": false, 00:22:45.051 "seek_data": false, 00:22:45.051 "copy": true, 00:22:45.051 "nvme_iov_md": false 00:22:45.051 }, 00:22:45.051 "memory_domains": [ 00:22:45.051 { 00:22:45.051 "dma_device_id": "system", 00:22:45.051 "dma_device_type": 1 00:22:45.051 }, 00:22:45.051 { 00:22:45.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:45.051 "dma_device_type": 2 00:22:45.051 } 00:22:45.051 ], 00:22:45.051 "driver_specific": {} 00:22:45.051 } 00:22:45.051 ] 00:22:45.051 11:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:45.051 11:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:45.051 11:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:45.051 11:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:45.051 11:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:45.051 11:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:45.051 11:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:45.051 
11:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:45.051 11:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:45.051 11:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:45.051 11:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:45.051 11:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.051 11:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:45.310 11:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:45.310 "name": "Existed_Raid", 00:22:45.310 "uuid": "3de0f5e5-b6fc-4f00-a5db-b3501d5b32e2", 00:22:45.310 "strip_size_kb": 64, 00:22:45.310 "state": "configuring", 00:22:45.310 "raid_level": "raid0", 00:22:45.311 "superblock": true, 00:22:45.311 "num_base_bdevs": 4, 00:22:45.311 "num_base_bdevs_discovered": 3, 00:22:45.311 "num_base_bdevs_operational": 4, 00:22:45.311 "base_bdevs_list": [ 00:22:45.311 { 00:22:45.311 "name": "BaseBdev1", 00:22:45.311 "uuid": "4209e508-837a-448b-b70a-5d228f6f71a3", 00:22:45.311 "is_configured": true, 00:22:45.311 "data_offset": 2048, 00:22:45.311 "data_size": 63488 00:22:45.311 }, 00:22:45.311 { 00:22:45.311 "name": null, 00:22:45.311 "uuid": "0be9cb5f-49ea-4c16-b744-8d840b24ad31", 00:22:45.311 "is_configured": false, 00:22:45.311 "data_offset": 2048, 00:22:45.311 "data_size": 63488 00:22:45.311 }, 00:22:45.311 { 00:22:45.311 "name": "BaseBdev3", 00:22:45.311 "uuid": "aec187f2-3ca0-4448-8f87-0db71a7576ff", 00:22:45.311 "is_configured": true, 00:22:45.311 "data_offset": 2048, 00:22:45.311 "data_size": 63488 00:22:45.311 }, 00:22:45.311 { 00:22:45.311 "name": 
"BaseBdev4", 00:22:45.311 "uuid": "5bfa67ed-6781-4dc0-a817-1da4cf86f76d", 00:22:45.311 "is_configured": true, 00:22:45.311 "data_offset": 2048, 00:22:45.311 "data_size": 63488 00:22:45.311 } 00:22:45.311 ] 00:22:45.311 }' 00:22:45.311 11:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:45.311 11:04:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:45.880 11:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.880 11:04:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:46.139 11:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:22:46.139 11:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:22:46.403 [2024-07-25 11:04:53.267102] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:46.403 11:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:46.403 11:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:46.403 11:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:46.403 11:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:46.403 11:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:46.403 11:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:46.403 11:04:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:46.403 11:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:46.403 11:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:46.403 11:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:46.403 11:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.403 11:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:46.403 11:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:46.403 "name": "Existed_Raid", 00:22:46.403 "uuid": "3de0f5e5-b6fc-4f00-a5db-b3501d5b32e2", 00:22:46.403 "strip_size_kb": 64, 00:22:46.403 "state": "configuring", 00:22:46.403 "raid_level": "raid0", 00:22:46.403 "superblock": true, 00:22:46.403 "num_base_bdevs": 4, 00:22:46.403 "num_base_bdevs_discovered": 2, 00:22:46.403 "num_base_bdevs_operational": 4, 00:22:46.403 "base_bdevs_list": [ 00:22:46.403 { 00:22:46.403 "name": "BaseBdev1", 00:22:46.403 "uuid": "4209e508-837a-448b-b70a-5d228f6f71a3", 00:22:46.403 "is_configured": true, 00:22:46.403 "data_offset": 2048, 00:22:46.403 "data_size": 63488 00:22:46.403 }, 00:22:46.403 { 00:22:46.403 "name": null, 00:22:46.403 "uuid": "0be9cb5f-49ea-4c16-b744-8d840b24ad31", 00:22:46.403 "is_configured": false, 00:22:46.403 "data_offset": 2048, 00:22:46.403 "data_size": 63488 00:22:46.403 }, 00:22:46.403 { 00:22:46.403 "name": null, 00:22:46.403 "uuid": "aec187f2-3ca0-4448-8f87-0db71a7576ff", 00:22:46.403 "is_configured": false, 00:22:46.403 "data_offset": 2048, 00:22:46.403 "data_size": 63488 00:22:46.403 }, 00:22:46.403 { 00:22:46.403 "name": "BaseBdev4", 
00:22:46.403 "uuid": "5bfa67ed-6781-4dc0-a817-1da4cf86f76d", 00:22:46.403 "is_configured": true, 00:22:46.403 "data_offset": 2048, 00:22:46.403 "data_size": 63488 00:22:46.403 } 00:22:46.403 ] 00:22:46.403 }' 00:22:46.403 11:04:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:46.403 11:04:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:46.972 11:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.972 11:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:47.231 11:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:22:47.231 11:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:47.491 [2024-07-25 11:04:54.410259] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:47.491 11:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:47.491 11:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:47.491 11:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:47.491 11:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:47.491 11:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:47.491 11:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:47.491 11:04:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:47.491 11:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:47.491 11:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:47.491 11:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:47.491 11:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:47.491 11:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:47.491 11:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:47.491 "name": "Existed_Raid", 00:22:47.491 "uuid": "3de0f5e5-b6fc-4f00-a5db-b3501d5b32e2", 00:22:47.491 "strip_size_kb": 64, 00:22:47.491 "state": "configuring", 00:22:47.491 "raid_level": "raid0", 00:22:47.491 "superblock": true, 00:22:47.491 "num_base_bdevs": 4, 00:22:47.491 "num_base_bdevs_discovered": 3, 00:22:47.491 "num_base_bdevs_operational": 4, 00:22:47.491 "base_bdevs_list": [ 00:22:47.491 { 00:22:47.491 "name": "BaseBdev1", 00:22:47.491 "uuid": "4209e508-837a-448b-b70a-5d228f6f71a3", 00:22:47.491 "is_configured": true, 00:22:47.491 "data_offset": 2048, 00:22:47.491 "data_size": 63488 00:22:47.491 }, 00:22:47.491 { 00:22:47.491 "name": null, 00:22:47.491 "uuid": "0be9cb5f-49ea-4c16-b744-8d840b24ad31", 00:22:47.491 "is_configured": false, 00:22:47.491 "data_offset": 2048, 00:22:47.491 "data_size": 63488 00:22:47.491 }, 00:22:47.491 { 00:22:47.491 "name": "BaseBdev3", 00:22:47.491 "uuid": "aec187f2-3ca0-4448-8f87-0db71a7576ff", 00:22:47.491 "is_configured": true, 00:22:47.491 "data_offset": 2048, 00:22:47.491 "data_size": 63488 00:22:47.491 }, 00:22:47.491 { 00:22:47.491 "name": "BaseBdev4", 
00:22:47.491 "uuid": "5bfa67ed-6781-4dc0-a817-1da4cf86f76d", 00:22:47.491 "is_configured": true, 00:22:47.491 "data_offset": 2048, 00:22:47.491 "data_size": 63488 00:22:47.491 } 00:22:47.491 ] 00:22:47.491 }' 00:22:47.491 11:04:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:47.491 11:04:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.428 11:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.428 11:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:48.687 11:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:22:48.687 11:04:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:48.946 [2024-07-25 11:04:55.882282] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:48.946 11:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:48.946 11:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:48.946 11:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:48.946 11:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:48.946 11:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:48.946 11:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:48.946 11:04:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:48.946 11:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:48.946 11:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:48.946 11:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:48.946 11:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.946 11:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:49.205 11:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:49.205 "name": "Existed_Raid", 00:22:49.205 "uuid": "3de0f5e5-b6fc-4f00-a5db-b3501d5b32e2", 00:22:49.205 "strip_size_kb": 64, 00:22:49.205 "state": "configuring", 00:22:49.205 "raid_level": "raid0", 00:22:49.205 "superblock": true, 00:22:49.205 "num_base_bdevs": 4, 00:22:49.205 "num_base_bdevs_discovered": 2, 00:22:49.205 "num_base_bdevs_operational": 4, 00:22:49.205 "base_bdevs_list": [ 00:22:49.205 { 00:22:49.205 "name": null, 00:22:49.205 "uuid": "4209e508-837a-448b-b70a-5d228f6f71a3", 00:22:49.205 "is_configured": false, 00:22:49.205 "data_offset": 2048, 00:22:49.205 "data_size": 63488 00:22:49.205 }, 00:22:49.205 { 00:22:49.205 "name": null, 00:22:49.205 "uuid": "0be9cb5f-49ea-4c16-b744-8d840b24ad31", 00:22:49.205 "is_configured": false, 00:22:49.205 "data_offset": 2048, 00:22:49.205 "data_size": 63488 00:22:49.205 }, 00:22:49.205 { 00:22:49.205 "name": "BaseBdev3", 00:22:49.206 "uuid": "aec187f2-3ca0-4448-8f87-0db71a7576ff", 00:22:49.206 "is_configured": true, 00:22:49.206 "data_offset": 2048, 00:22:49.206 "data_size": 63488 00:22:49.206 }, 00:22:49.206 { 00:22:49.206 "name": "BaseBdev4", 00:22:49.206 "uuid": 
"5bfa67ed-6781-4dc0-a817-1da4cf86f76d", 00:22:49.206 "is_configured": true, 00:22:49.206 "data_offset": 2048, 00:22:49.206 "data_size": 63488 00:22:49.206 } 00:22:49.206 ] 00:22:49.206 }' 00:22:49.206 11:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:49.206 11:04:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:49.774 11:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.774 11:04:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:50.033 11:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:22:50.033 11:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:50.291 [2024-07-25 11:04:57.292870] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:50.291 11:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:50.291 11:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:50.291 11:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:50.291 11:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:50.291 11:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:50.291 11:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:50.291 11:04:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:50.291 11:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:50.291 11:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:50.291 11:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:50.291 11:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:50.291 11:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:50.551 11:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:50.551 "name": "Existed_Raid", 00:22:50.551 "uuid": "3de0f5e5-b6fc-4f00-a5db-b3501d5b32e2", 00:22:50.551 "strip_size_kb": 64, 00:22:50.551 "state": "configuring", 00:22:50.551 "raid_level": "raid0", 00:22:50.551 "superblock": true, 00:22:50.551 "num_base_bdevs": 4, 00:22:50.551 "num_base_bdevs_discovered": 3, 00:22:50.551 "num_base_bdevs_operational": 4, 00:22:50.551 "base_bdevs_list": [ 00:22:50.551 { 00:22:50.551 "name": null, 00:22:50.551 "uuid": "4209e508-837a-448b-b70a-5d228f6f71a3", 00:22:50.551 "is_configured": false, 00:22:50.551 "data_offset": 2048, 00:22:50.551 "data_size": 63488 00:22:50.551 }, 00:22:50.551 { 00:22:50.551 "name": "BaseBdev2", 00:22:50.551 "uuid": "0be9cb5f-49ea-4c16-b744-8d840b24ad31", 00:22:50.551 "is_configured": true, 00:22:50.551 "data_offset": 2048, 00:22:50.551 "data_size": 63488 00:22:50.551 }, 00:22:50.551 { 00:22:50.551 "name": "BaseBdev3", 00:22:50.551 "uuid": "aec187f2-3ca0-4448-8f87-0db71a7576ff", 00:22:50.551 "is_configured": true, 00:22:50.551 "data_offset": 2048, 00:22:50.551 "data_size": 63488 00:22:50.551 }, 00:22:50.551 { 00:22:50.551 "name": "BaseBdev4", 
00:22:50.551 "uuid": "5bfa67ed-6781-4dc0-a817-1da4cf86f76d", 00:22:50.551 "is_configured": true, 00:22:50.551 "data_offset": 2048, 00:22:50.551 "data_size": 63488 00:22:50.551 } 00:22:50.551 ] 00:22:50.551 }' 00:22:50.551 11:04:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:50.551 11:04:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:51.121 11:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.121 11:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:51.380 11:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:22:51.380 11:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:51.380 11:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.639 11:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 4209e508-837a-448b-b70a-5d228f6f71a3 00:22:51.899 [2024-07-25 11:04:58.777428] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:51.899 [2024-07-25 11:04:58.777674] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:51.899 [2024-07-25 11:04:58.777694] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:51.899 [2024-07-25 11:04:58.778048] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010b20 00:22:51.899 [2024-07-25 
11:04:58.778278] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:51.899 [2024-07-25 11:04:58.778296] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:22:51.899 NewBaseBdev 00:22:51.899 [2024-07-25 11:04:58.778458] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:51.899 11:04:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:22:51.899 11:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:22:51.899 11:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:22:51.899 11:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:22:51.899 11:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:22:51.899 11:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:22:51.899 11:04:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:51.899 11:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:52.159 [ 00:22:52.159 { 00:22:52.159 "name": "NewBaseBdev", 00:22:52.159 "aliases": [ 00:22:52.159 "4209e508-837a-448b-b70a-5d228f6f71a3" 00:22:52.159 ], 00:22:52.159 "product_name": "Malloc disk", 00:22:52.159 "block_size": 512, 00:22:52.159 "num_blocks": 65536, 00:22:52.159 "uuid": "4209e508-837a-448b-b70a-5d228f6f71a3", 00:22:52.159 "assigned_rate_limits": { 00:22:52.159 "rw_ios_per_sec": 0, 00:22:52.159 "rw_mbytes_per_sec": 0, 00:22:52.159 
"r_mbytes_per_sec": 0, 00:22:52.159 "w_mbytes_per_sec": 0 00:22:52.159 }, 00:22:52.159 "claimed": true, 00:22:52.159 "claim_type": "exclusive_write", 00:22:52.159 "zoned": false, 00:22:52.159 "supported_io_types": { 00:22:52.159 "read": true, 00:22:52.159 "write": true, 00:22:52.159 "unmap": true, 00:22:52.159 "flush": true, 00:22:52.159 "reset": true, 00:22:52.159 "nvme_admin": false, 00:22:52.159 "nvme_io": false, 00:22:52.159 "nvme_io_md": false, 00:22:52.159 "write_zeroes": true, 00:22:52.159 "zcopy": true, 00:22:52.159 "get_zone_info": false, 00:22:52.159 "zone_management": false, 00:22:52.159 "zone_append": false, 00:22:52.159 "compare": false, 00:22:52.159 "compare_and_write": false, 00:22:52.159 "abort": true, 00:22:52.159 "seek_hole": false, 00:22:52.159 "seek_data": false, 00:22:52.159 "copy": true, 00:22:52.159 "nvme_iov_md": false 00:22:52.159 }, 00:22:52.159 "memory_domains": [ 00:22:52.159 { 00:22:52.159 "dma_device_id": "system", 00:22:52.159 "dma_device_type": 1 00:22:52.159 }, 00:22:52.159 { 00:22:52.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:52.159 "dma_device_type": 2 00:22:52.159 } 00:22:52.159 ], 00:22:52.159 "driver_specific": {} 00:22:52.159 } 00:22:52.159 ] 00:22:52.159 11:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:22:52.159 11:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:22:52.159 11:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:52.159 11:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:52.159 11:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:52.159 11:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:52.159 11:04:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:52.159 11:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:52.159 11:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:52.159 11:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:52.159 11:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:52.159 11:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.159 11:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:52.423 11:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:52.423 "name": "Existed_Raid", 00:22:52.423 "uuid": "3de0f5e5-b6fc-4f00-a5db-b3501d5b32e2", 00:22:52.423 "strip_size_kb": 64, 00:22:52.423 "state": "online", 00:22:52.423 "raid_level": "raid0", 00:22:52.423 "superblock": true, 00:22:52.423 "num_base_bdevs": 4, 00:22:52.423 "num_base_bdevs_discovered": 4, 00:22:52.423 "num_base_bdevs_operational": 4, 00:22:52.423 "base_bdevs_list": [ 00:22:52.423 { 00:22:52.423 "name": "NewBaseBdev", 00:22:52.423 "uuid": "4209e508-837a-448b-b70a-5d228f6f71a3", 00:22:52.423 "is_configured": true, 00:22:52.423 "data_offset": 2048, 00:22:52.423 "data_size": 63488 00:22:52.423 }, 00:22:52.423 { 00:22:52.423 "name": "BaseBdev2", 00:22:52.423 "uuid": "0be9cb5f-49ea-4c16-b744-8d840b24ad31", 00:22:52.423 "is_configured": true, 00:22:52.423 "data_offset": 2048, 00:22:52.423 "data_size": 63488 00:22:52.423 }, 00:22:52.423 { 00:22:52.423 "name": "BaseBdev3", 00:22:52.423 "uuid": "aec187f2-3ca0-4448-8f87-0db71a7576ff", 00:22:52.423 "is_configured": true, 00:22:52.423 "data_offset": 2048, 00:22:52.423 
"data_size": 63488 00:22:52.423 }, 00:22:52.423 { 00:22:52.423 "name": "BaseBdev4", 00:22:52.423 "uuid": "5bfa67ed-6781-4dc0-a817-1da4cf86f76d", 00:22:52.423 "is_configured": true, 00:22:52.423 "data_offset": 2048, 00:22:52.423 "data_size": 63488 00:22:52.423 } 00:22:52.423 ] 00:22:52.423 }' 00:22:52.423 11:04:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:52.423 11:04:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:52.994 11:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:22:52.995 11:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:52.995 11:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:52.995 11:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:52.995 11:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:52.995 11:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:22:52.995 11:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:52.995 11:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:53.266 [2024-07-25 11:05:00.273963] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:53.266 11:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:53.266 "name": "Existed_Raid", 00:22:53.266 "aliases": [ 00:22:53.266 "3de0f5e5-b6fc-4f00-a5db-b3501d5b32e2" 00:22:53.266 ], 00:22:53.266 "product_name": "Raid Volume", 00:22:53.266 "block_size": 512, 00:22:53.266 "num_blocks": 253952, 00:22:53.266 "uuid": 
"3de0f5e5-b6fc-4f00-a5db-b3501d5b32e2", 00:22:53.266 "assigned_rate_limits": { 00:22:53.266 "rw_ios_per_sec": 0, 00:22:53.266 "rw_mbytes_per_sec": 0, 00:22:53.266 "r_mbytes_per_sec": 0, 00:22:53.266 "w_mbytes_per_sec": 0 00:22:53.266 }, 00:22:53.266 "claimed": false, 00:22:53.266 "zoned": false, 00:22:53.266 "supported_io_types": { 00:22:53.266 "read": true, 00:22:53.266 "write": true, 00:22:53.266 "unmap": true, 00:22:53.266 "flush": true, 00:22:53.266 "reset": true, 00:22:53.266 "nvme_admin": false, 00:22:53.266 "nvme_io": false, 00:22:53.266 "nvme_io_md": false, 00:22:53.266 "write_zeroes": true, 00:22:53.266 "zcopy": false, 00:22:53.266 "get_zone_info": false, 00:22:53.266 "zone_management": false, 00:22:53.266 "zone_append": false, 00:22:53.266 "compare": false, 00:22:53.266 "compare_and_write": false, 00:22:53.266 "abort": false, 00:22:53.266 "seek_hole": false, 00:22:53.266 "seek_data": false, 00:22:53.266 "copy": false, 00:22:53.266 "nvme_iov_md": false 00:22:53.266 }, 00:22:53.266 "memory_domains": [ 00:22:53.266 { 00:22:53.266 "dma_device_id": "system", 00:22:53.266 "dma_device_type": 1 00:22:53.266 }, 00:22:53.266 { 00:22:53.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:53.266 "dma_device_type": 2 00:22:53.266 }, 00:22:53.266 { 00:22:53.266 "dma_device_id": "system", 00:22:53.266 "dma_device_type": 1 00:22:53.266 }, 00:22:53.267 { 00:22:53.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:53.267 "dma_device_type": 2 00:22:53.267 }, 00:22:53.267 { 00:22:53.267 "dma_device_id": "system", 00:22:53.267 "dma_device_type": 1 00:22:53.267 }, 00:22:53.267 { 00:22:53.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:53.267 "dma_device_type": 2 00:22:53.267 }, 00:22:53.267 { 00:22:53.267 "dma_device_id": "system", 00:22:53.267 "dma_device_type": 1 00:22:53.267 }, 00:22:53.267 { 00:22:53.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:53.267 "dma_device_type": 2 00:22:53.267 } 00:22:53.267 ], 00:22:53.267 "driver_specific": { 00:22:53.267 "raid": { 
00:22:53.267 "uuid": "3de0f5e5-b6fc-4f00-a5db-b3501d5b32e2", 00:22:53.267 "strip_size_kb": 64, 00:22:53.267 "state": "online", 00:22:53.267 "raid_level": "raid0", 00:22:53.267 "superblock": true, 00:22:53.267 "num_base_bdevs": 4, 00:22:53.267 "num_base_bdevs_discovered": 4, 00:22:53.267 "num_base_bdevs_operational": 4, 00:22:53.267 "base_bdevs_list": [ 00:22:53.267 { 00:22:53.267 "name": "NewBaseBdev", 00:22:53.267 "uuid": "4209e508-837a-448b-b70a-5d228f6f71a3", 00:22:53.267 "is_configured": true, 00:22:53.267 "data_offset": 2048, 00:22:53.267 "data_size": 63488 00:22:53.267 }, 00:22:53.267 { 00:22:53.267 "name": "BaseBdev2", 00:22:53.267 "uuid": "0be9cb5f-49ea-4c16-b744-8d840b24ad31", 00:22:53.267 "is_configured": true, 00:22:53.267 "data_offset": 2048, 00:22:53.267 "data_size": 63488 00:22:53.267 }, 00:22:53.267 { 00:22:53.267 "name": "BaseBdev3", 00:22:53.267 "uuid": "aec187f2-3ca0-4448-8f87-0db71a7576ff", 00:22:53.267 "is_configured": true, 00:22:53.267 "data_offset": 2048, 00:22:53.267 "data_size": 63488 00:22:53.267 }, 00:22:53.267 { 00:22:53.267 "name": "BaseBdev4", 00:22:53.267 "uuid": "5bfa67ed-6781-4dc0-a817-1da4cf86f76d", 00:22:53.267 "is_configured": true, 00:22:53.267 "data_offset": 2048, 00:22:53.267 "data_size": 63488 00:22:53.267 } 00:22:53.267 ] 00:22:53.267 } 00:22:53.267 } 00:22:53.267 }' 00:22:53.267 11:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:53.267 11:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:22:53.267 BaseBdev2 00:22:53.267 BaseBdev3 00:22:53.267 BaseBdev4' 00:22:53.267 11:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:53.267 11:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b NewBaseBdev 00:22:53.267 11:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:53.527 11:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:53.527 "name": "NewBaseBdev", 00:22:53.527 "aliases": [ 00:22:53.527 "4209e508-837a-448b-b70a-5d228f6f71a3" 00:22:53.527 ], 00:22:53.527 "product_name": "Malloc disk", 00:22:53.527 "block_size": 512, 00:22:53.527 "num_blocks": 65536, 00:22:53.527 "uuid": "4209e508-837a-448b-b70a-5d228f6f71a3", 00:22:53.527 "assigned_rate_limits": { 00:22:53.527 "rw_ios_per_sec": 0, 00:22:53.527 "rw_mbytes_per_sec": 0, 00:22:53.527 "r_mbytes_per_sec": 0, 00:22:53.527 "w_mbytes_per_sec": 0 00:22:53.527 }, 00:22:53.527 "claimed": true, 00:22:53.527 "claim_type": "exclusive_write", 00:22:53.527 "zoned": false, 00:22:53.527 "supported_io_types": { 00:22:53.527 "read": true, 00:22:53.527 "write": true, 00:22:53.527 "unmap": true, 00:22:53.527 "flush": true, 00:22:53.527 "reset": true, 00:22:53.527 "nvme_admin": false, 00:22:53.527 "nvme_io": false, 00:22:53.527 "nvme_io_md": false, 00:22:53.527 "write_zeroes": true, 00:22:53.527 "zcopy": true, 00:22:53.527 "get_zone_info": false, 00:22:53.527 "zone_management": false, 00:22:53.527 "zone_append": false, 00:22:53.527 "compare": false, 00:22:53.527 "compare_and_write": false, 00:22:53.527 "abort": true, 00:22:53.527 "seek_hole": false, 00:22:53.527 "seek_data": false, 00:22:53.527 "copy": true, 00:22:53.527 "nvme_iov_md": false 00:22:53.527 }, 00:22:53.527 "memory_domains": [ 00:22:53.527 { 00:22:53.527 "dma_device_id": "system", 00:22:53.527 "dma_device_type": 1 00:22:53.527 }, 00:22:53.527 { 00:22:53.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:53.527 "dma_device_type": 2 00:22:53.528 } 00:22:53.528 ], 00:22:53.528 "driver_specific": {} 00:22:53.528 }' 00:22:53.528 11:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:53.528 11:05:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:53.528 11:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:53.528 11:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:53.528 11:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:53.786 11:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:53.786 11:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:53.786 11:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:53.786 11:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:53.786 11:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:53.786 11:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:53.786 11:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:53.786 11:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:53.786 11:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:53.786 11:05:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:54.045 11:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:54.045 "name": "BaseBdev2", 00:22:54.045 "aliases": [ 00:22:54.045 "0be9cb5f-49ea-4c16-b744-8d840b24ad31" 00:22:54.045 ], 00:22:54.045 "product_name": "Malloc disk", 00:22:54.045 "block_size": 512, 00:22:54.045 "num_blocks": 65536, 00:22:54.045 "uuid": "0be9cb5f-49ea-4c16-b744-8d840b24ad31", 00:22:54.045 
"assigned_rate_limits": { 00:22:54.045 "rw_ios_per_sec": 0, 00:22:54.045 "rw_mbytes_per_sec": 0, 00:22:54.045 "r_mbytes_per_sec": 0, 00:22:54.045 "w_mbytes_per_sec": 0 00:22:54.045 }, 00:22:54.045 "claimed": true, 00:22:54.045 "claim_type": "exclusive_write", 00:22:54.045 "zoned": false, 00:22:54.045 "supported_io_types": { 00:22:54.045 "read": true, 00:22:54.045 "write": true, 00:22:54.045 "unmap": true, 00:22:54.045 "flush": true, 00:22:54.045 "reset": true, 00:22:54.045 "nvme_admin": false, 00:22:54.045 "nvme_io": false, 00:22:54.045 "nvme_io_md": false, 00:22:54.045 "write_zeroes": true, 00:22:54.045 "zcopy": true, 00:22:54.045 "get_zone_info": false, 00:22:54.045 "zone_management": false, 00:22:54.045 "zone_append": false, 00:22:54.045 "compare": false, 00:22:54.045 "compare_and_write": false, 00:22:54.045 "abort": true, 00:22:54.045 "seek_hole": false, 00:22:54.045 "seek_data": false, 00:22:54.045 "copy": true, 00:22:54.045 "nvme_iov_md": false 00:22:54.045 }, 00:22:54.045 "memory_domains": [ 00:22:54.045 { 00:22:54.045 "dma_device_id": "system", 00:22:54.045 "dma_device_type": 1 00:22:54.045 }, 00:22:54.045 { 00:22:54.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.045 "dma_device_type": 2 00:22:54.045 } 00:22:54.045 ], 00:22:54.045 "driver_specific": {} 00:22:54.045 }' 00:22:54.045 11:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:54.045 11:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:54.305 11:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:54.305 11:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:54.305 11:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:54.305 11:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:54.305 11:05:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:54.305 11:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:54.305 11:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:54.305 11:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:54.305 11:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:54.305 11:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:54.305 11:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:54.305 11:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:54.305 11:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:54.605 11:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:54.605 "name": "BaseBdev3", 00:22:54.605 "aliases": [ 00:22:54.605 "aec187f2-3ca0-4448-8f87-0db71a7576ff" 00:22:54.605 ], 00:22:54.605 "product_name": "Malloc disk", 00:22:54.605 "block_size": 512, 00:22:54.605 "num_blocks": 65536, 00:22:54.605 "uuid": "aec187f2-3ca0-4448-8f87-0db71a7576ff", 00:22:54.605 "assigned_rate_limits": { 00:22:54.605 "rw_ios_per_sec": 0, 00:22:54.605 "rw_mbytes_per_sec": 0, 00:22:54.605 "r_mbytes_per_sec": 0, 00:22:54.605 "w_mbytes_per_sec": 0 00:22:54.605 }, 00:22:54.605 "claimed": true, 00:22:54.605 "claim_type": "exclusive_write", 00:22:54.605 "zoned": false, 00:22:54.605 "supported_io_types": { 00:22:54.605 "read": true, 00:22:54.605 "write": true, 00:22:54.605 "unmap": true, 00:22:54.605 "flush": true, 00:22:54.605 "reset": true, 00:22:54.605 "nvme_admin": false, 00:22:54.605 "nvme_io": false, 00:22:54.605 "nvme_io_md": false, 00:22:54.605 
"write_zeroes": true, 00:22:54.605 "zcopy": true, 00:22:54.605 "get_zone_info": false, 00:22:54.605 "zone_management": false, 00:22:54.605 "zone_append": false, 00:22:54.605 "compare": false, 00:22:54.605 "compare_and_write": false, 00:22:54.605 "abort": true, 00:22:54.605 "seek_hole": false, 00:22:54.605 "seek_data": false, 00:22:54.605 "copy": true, 00:22:54.605 "nvme_iov_md": false 00:22:54.605 }, 00:22:54.605 "memory_domains": [ 00:22:54.605 { 00:22:54.605 "dma_device_id": "system", 00:22:54.605 "dma_device_type": 1 00:22:54.605 }, 00:22:54.605 { 00:22:54.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.605 "dma_device_type": 2 00:22:54.605 } 00:22:54.605 ], 00:22:54.605 "driver_specific": {} 00:22:54.605 }' 00:22:54.605 11:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:54.605 11:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:54.863 11:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:54.863 11:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:54.863 11:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:54.863 11:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:54.863 11:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:54.863 11:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:54.863 11:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:54.863 11:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:54.863 11:05:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:55.122 11:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
00:22:55.122 11:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:55.122 11:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:55.122 11:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:55.380 11:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:55.380 "name": "BaseBdev4", 00:22:55.380 "aliases": [ 00:22:55.380 "5bfa67ed-6781-4dc0-a817-1da4cf86f76d" 00:22:55.380 ], 00:22:55.380 "product_name": "Malloc disk", 00:22:55.380 "block_size": 512, 00:22:55.380 "num_blocks": 65536, 00:22:55.380 "uuid": "5bfa67ed-6781-4dc0-a817-1da4cf86f76d", 00:22:55.380 "assigned_rate_limits": { 00:22:55.380 "rw_ios_per_sec": 0, 00:22:55.380 "rw_mbytes_per_sec": 0, 00:22:55.380 "r_mbytes_per_sec": 0, 00:22:55.380 "w_mbytes_per_sec": 0 00:22:55.380 }, 00:22:55.380 "claimed": true, 00:22:55.380 "claim_type": "exclusive_write", 00:22:55.380 "zoned": false, 00:22:55.380 "supported_io_types": { 00:22:55.380 "read": true, 00:22:55.380 "write": true, 00:22:55.380 "unmap": true, 00:22:55.380 "flush": true, 00:22:55.380 "reset": true, 00:22:55.380 "nvme_admin": false, 00:22:55.380 "nvme_io": false, 00:22:55.380 "nvme_io_md": false, 00:22:55.380 "write_zeroes": true, 00:22:55.380 "zcopy": true, 00:22:55.380 "get_zone_info": false, 00:22:55.380 "zone_management": false, 00:22:55.380 "zone_append": false, 00:22:55.380 "compare": false, 00:22:55.380 "compare_and_write": false, 00:22:55.380 "abort": true, 00:22:55.380 "seek_hole": false, 00:22:55.380 "seek_data": false, 00:22:55.380 "copy": true, 00:22:55.380 "nvme_iov_md": false 00:22:55.380 }, 00:22:55.380 "memory_domains": [ 00:22:55.380 { 00:22:55.380 "dma_device_id": "system", 00:22:55.380 "dma_device_type": 1 00:22:55.380 }, 00:22:55.380 { 00:22:55.380 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:55.380 "dma_device_type": 2 00:22:55.380 } 00:22:55.380 ], 00:22:55.380 "driver_specific": {} 00:22:55.380 }' 00:22:55.380 11:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:55.380 11:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:55.380 11:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:55.380 11:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:55.380 11:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:55.380 11:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:55.380 11:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:55.380 11:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:55.638 11:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:55.638 11:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:55.638 11:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:55.638 11:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:55.638 11:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:55.897 [2024-07-25 11:05:02.812446] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:55.897 [2024-07-25 11:05:02.812481] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:55.897 [2024-07-25 11:05:02.812562] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:22:55.897 [2024-07-25 11:05:02.812645] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:55.897 [2024-07-25 11:05:02.812661] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:22:55.897 11:05:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 3641290 00:22:55.897 11:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 3641290 ']' 00:22:55.897 11:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 3641290 00:22:55.897 11:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:22:55.897 11:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:55.897 11:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3641290 00:22:55.897 11:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:55.897 11:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:55.897 11:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3641290' 00:22:55.897 killing process with pid 3641290 00:22:55.897 11:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 3641290 00:22:55.897 [2024-07-25 11:05:02.887505] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:55.897 11:05:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 3641290 00:22:56.464 [2024-07-25 11:05:03.346357] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:58.364 11:05:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 
00:22:58.364 00:22:58.364 real 0m33.194s 00:22:58.364 user 0m58.260s 00:22:58.364 sys 0m5.501s 00:22:58.364 11:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:58.364 11:05:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:58.364 ************************************ 00:22:58.364 END TEST raid_state_function_test_sb 00:22:58.364 ************************************ 00:22:58.364 11:05:05 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:22:58.364 11:05:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:22:58.364 11:05:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:58.364 11:05:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:58.364 ************************************ 00:22:58.364 START TEST raid_superblock_test 00:22:58.364 ************************************ 00:22:58.364 11:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:22:58.364 11:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid0 00:22:58.364 11:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=4 00:22:58.364 11:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:22:58.364 11:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:22:58.364 11:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:22:58.365 11:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:22:58.365 11:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:22:58.365 11:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:22:58.365 11:05:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:22:58.365 11:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:22:58.365 11:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:22:58.365 11:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:22:58.365 11:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:22:58.365 11:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid0 '!=' raid1 ']' 00:22:58.365 11:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:22:58.365 11:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:22:58.365 11:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=3647449 00:22:58.365 11:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 3647449 /var/tmp/spdk-raid.sock 00:22:58.365 11:05:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:22:58.365 11:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 3647449 ']' 00:22:58.365 11:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:58.365 11:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:58.365 11:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:58.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:22:58.365 11:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:58.365 11:05:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.365 [2024-07-25 11:05:05.264969] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:58.365 [2024-07-25 11:05:05.265090] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3647449 ] 00:22:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.365 EAL: Requested device 0000:3d:01.0 cannot be used 00:22:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.365 EAL: Requested device 0000:3d:01.1 cannot be used 00:22:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.365 EAL: Requested device 0000:3d:01.2 cannot be used 00:22:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.365 EAL: Requested device 0000:3d:01.3 cannot be used 00:22:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.365 EAL: Requested device 0000:3d:01.4 cannot be used 00:22:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.365 EAL: Requested device 0000:3d:01.5 cannot be used 00:22:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.365 EAL: Requested device 0000:3d:01.6 cannot be used 00:22:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.365 EAL: Requested device 0000:3d:01.7 cannot be used 00:22:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.365 EAL: Requested device 0000:3d:02.0 cannot be used 00:22:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.365 EAL: Requested 
device 0000:3d:02.1 cannot be used 00:22:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.365 EAL: Requested device 0000:3d:02.2 cannot be used 00:22:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.365 EAL: Requested device 0000:3d:02.3 cannot be used 00:22:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.365 EAL: Requested device 0000:3d:02.4 cannot be used 00:22:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.365 EAL: Requested device 0000:3d:02.5 cannot be used 00:22:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.365 EAL: Requested device 0000:3d:02.6 cannot be used 00:22:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.365 EAL: Requested device 0000:3d:02.7 cannot be used 00:22:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.365 EAL: Requested device 0000:3f:01.0 cannot be used 00:22:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.365 EAL: Requested device 0000:3f:01.1 cannot be used 00:22:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.365 EAL: Requested device 0000:3f:01.2 cannot be used 00:22:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.365 EAL: Requested device 0000:3f:01.3 cannot be used 00:22:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.365 EAL: Requested device 0000:3f:01.4 cannot be used 00:22:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.365 EAL: Requested device 0000:3f:01.5 cannot be used 00:22:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.365 EAL: Requested device 0000:3f:01.6 cannot be used 00:22:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.365 EAL: Requested device 0000:3f:01.7 
cannot be used 00:22:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.365 EAL: Requested device 0000:3f:02.0 cannot be used 00:22:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.365 EAL: Requested device 0000:3f:02.1 cannot be used 00:22:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.365 EAL: Requested device 0000:3f:02.2 cannot be used 00:22:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.365 EAL: Requested device 0000:3f:02.3 cannot be used 00:22:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.365 EAL: Requested device 0000:3f:02.4 cannot be used 00:22:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.365 EAL: Requested device 0000:3f:02.5 cannot be used 00:22:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.365 EAL: Requested device 0000:3f:02.6 cannot be used 00:22:58.365 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:22:58.365 EAL: Requested device 0000:3f:02.7 cannot be used 00:22:58.624 [2024-07-25 11:05:05.491656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.883 [2024-07-25 11:05:05.768799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.141 [2024-07-25 11:05:06.116580] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:59.141 [2024-07-25 11:05:06.116611] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:59.400 11:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:59.400 11:05:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:22:59.400 11:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:22:59.400 11:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 
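The loop that follows (`bdev_raid.sh@431`–`@441`) runs four times, each pass creating a 32 MiB malloc bdev and wrapping it in a passthru bdev with a fixed UUID, before assembling the raid. A dry-run sketch of that RPC sequence — the commands are only printed here, since executing them needs the live `bdev_svc` daemon from the log, and the `rpc.py` path is shortened to a relative one:

```shell
# Dry-run sketch of the RPC sequence the trace below performs.
RPC="./scripts/rpc.py -s /var/tmp/spdk-raid.sock"  # path shortened from the log

cmds=()
for i in 1 2 3 4; do
    # 32 MiB malloc bdev with 512-byte blocks, wrapped in a passthru bdev
    cmds+=("$RPC bdev_malloc_create 32 512 -b malloc$i")
    cmds+=("$RPC bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i")
done
# raid0, 64 KiB strip size (-z 64), superblock enabled (-s)
cmds+=("$RPC bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s")

printf '%s\n' "${cmds[@]}"
```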
00:22:59.400 11:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:22:59.400 11:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:22:59.400 11:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:59.400 11:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:59.400 11:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:22:59.400 11:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:59.400 11:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:22:59.659 malloc1 00:22:59.659 11:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:59.917 [2024-07-25 11:05:06.795160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:59.917 [2024-07-25 11:05:06.795217] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:59.917 [2024-07-25 11:05:06.795248] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:22:59.917 [2024-07-25 11:05:06.795264] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:59.917 [2024-07-25 11:05:06.798018] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:59.917 [2024-07-25 11:05:06.798051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:59.917 pt1 00:22:59.917 11:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- 
# (( i++ )) 00:22:59.917 11:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:22:59.917 11:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:22:59.917 11:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:22:59.917 11:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:59.917 11:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:59.918 11:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:22:59.918 11:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:59.918 11:05:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:23:00.176 malloc2 00:23:00.176 11:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:00.435 [2024-07-25 11:05:07.298149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:00.435 [2024-07-25 11:05:07.298207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:00.435 [2024-07-25 11:05:07.298234] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:23:00.435 [2024-07-25 11:05:07.298249] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:00.435 [2024-07-25 11:05:07.301004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:00.435 [2024-07-25 11:05:07.301043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: pt2 00:23:00.435 pt2 00:23:00.435 11:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:23:00.435 11:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:23:00.435 11:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:23:00.435 11:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:23:00.435 11:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:23:00.435 11:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:00.435 11:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:23:00.435 11:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:00.435 11:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:23:00.694 malloc3 00:23:00.694 11:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:00.694 [2024-07-25 11:05:07.803159] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:00.694 [2024-07-25 11:05:07.803219] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:00.694 [2024-07-25 11:05:07.803250] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:23:00.694 [2024-07-25 11:05:07.803266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:00.694 [2024-07-25 11:05:07.806008] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:23:00.694 [2024-07-25 11:05:07.806041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:00.694 pt3 00:23:00.953 11:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:23:00.953 11:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:23:00.953 11:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc4 00:23:00.953 11:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt4 00:23:00.953 11:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:23:00.953 11:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:00.953 11:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:23:00.953 11:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:00.953 11:05:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:23:01.212 malloc4 00:23:01.212 11:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:01.212 [2024-07-25 11:05:08.316672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:01.212 [2024-07-25 11:05:08.316737] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:01.212 [2024-07-25 11:05:08.316767] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041a80 00:23:01.212 [2024-07-25 11:05:08.316783] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:01.212 [2024-07-25 11:05:08.319566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:01.212 [2024-07-25 11:05:08.319601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:01.212 pt4 00:23:01.471 11:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:23:01.471 11:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:23:01.471 11:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:23:01.471 [2024-07-25 11:05:08.545372] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:01.471 [2024-07-25 11:05:08.547758] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:01.471 [2024-07-25 11:05:08.547849] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:01.471 [2024-07-25 11:05:08.547905] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:01.471 [2024-07-25 11:05:08.548125] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:23:01.471 [2024-07-25 11:05:08.548150] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:23:01.471 [2024-07-25 11:05:08.548514] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:23:01.471 [2024-07-25 11:05:08.548758] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:23:01.471 [2024-07-25 11:05:08.548777] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:23:01.471 [2024-07-25 11:05:08.548990] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:23:01.471 11:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:23:01.471 11:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:01.471 11:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:01.471 11:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:01.471 11:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:01.471 11:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:01.471 11:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:01.471 11:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:01.471 11:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:01.471 11:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:01.471 11:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.471 11:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.729 11:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:01.729 "name": "raid_bdev1", 00:23:01.729 "uuid": "a3ccb77a-cd8c-4751-98c9-2d931a5e6872", 00:23:01.729 "strip_size_kb": 64, 00:23:01.729 "state": "online", 00:23:01.729 "raid_level": "raid0", 00:23:01.729 "superblock": true, 00:23:01.729 "num_base_bdevs": 4, 00:23:01.729 "num_base_bdevs_discovered": 4, 00:23:01.729 "num_base_bdevs_operational": 4, 00:23:01.729 "base_bdevs_list": [ 00:23:01.729 { 00:23:01.729 "name": "pt1", 00:23:01.729 
"uuid": "00000000-0000-0000-0000-000000000001", 00:23:01.729 "is_configured": true, 00:23:01.729 "data_offset": 2048, 00:23:01.729 "data_size": 63488 00:23:01.729 }, 00:23:01.729 { 00:23:01.729 "name": "pt2", 00:23:01.729 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:01.729 "is_configured": true, 00:23:01.729 "data_offset": 2048, 00:23:01.729 "data_size": 63488 00:23:01.729 }, 00:23:01.729 { 00:23:01.729 "name": "pt3", 00:23:01.729 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:01.729 "is_configured": true, 00:23:01.729 "data_offset": 2048, 00:23:01.729 "data_size": 63488 00:23:01.729 }, 00:23:01.729 { 00:23:01.729 "name": "pt4", 00:23:01.729 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:01.729 "is_configured": true, 00:23:01.729 "data_offset": 2048, 00:23:01.729 "data_size": 63488 00:23:01.729 } 00:23:01.729 ] 00:23:01.729 }' 00:23:01.729 11:05:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:01.729 11:05:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:02.295 11:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:23:02.295 11:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:23:02.295 11:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:02.295 11:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:02.295 11:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:02.295 11:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:02.295 11:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:02.295 11:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 
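The sizes reported in this JSON are internally consistent: each malloc bdev is 32 MiB at 512-byte blocks (65536 blocks); with `-s`, the raid superblock occupies the first `data_offset` = 2048 blocks of every base bdev, leaving `data_size` = 63488, and the raid0 volume's `num_blocks` of 253952 is four times that. A quick arithmetic check:

```shell
# Recompute the sizes from the trace: 32 MiB malloc bdevs at 512 B
# blocks, superblock reserving data_offset blocks on each base bdev,
# raid0 capacity = sum of the per-bdev data regions.
block_size=512
base_blocks=$(( 32 * 1024 * 1024 / block_size ))  # 65536
data_offset=2048                                  # from base_bdevs_list
data_size=$(( base_blocks - data_offset ))        # 63488
raid_blocks=$(( 4 * data_size ))                  # raid0 over 4 base bdevs
echo "$base_blocks $data_size $raid_blocks"
```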
00:23:02.554 [2024-07-25 11:05:09.572486] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:02.554 11:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:02.554 "name": "raid_bdev1", 00:23:02.554 "aliases": [ 00:23:02.554 "a3ccb77a-cd8c-4751-98c9-2d931a5e6872" 00:23:02.554 ], 00:23:02.554 "product_name": "Raid Volume", 00:23:02.554 "block_size": 512, 00:23:02.554 "num_blocks": 253952, 00:23:02.554 "uuid": "a3ccb77a-cd8c-4751-98c9-2d931a5e6872", 00:23:02.554 "assigned_rate_limits": { 00:23:02.554 "rw_ios_per_sec": 0, 00:23:02.554 "rw_mbytes_per_sec": 0, 00:23:02.554 "r_mbytes_per_sec": 0, 00:23:02.554 "w_mbytes_per_sec": 0 00:23:02.554 }, 00:23:02.554 "claimed": false, 00:23:02.554 "zoned": false, 00:23:02.554 "supported_io_types": { 00:23:02.554 "read": true, 00:23:02.554 "write": true, 00:23:02.554 "unmap": true, 00:23:02.554 "flush": true, 00:23:02.554 "reset": true, 00:23:02.554 "nvme_admin": false, 00:23:02.554 "nvme_io": false, 00:23:02.554 "nvme_io_md": false, 00:23:02.554 "write_zeroes": true, 00:23:02.554 "zcopy": false, 00:23:02.554 "get_zone_info": false, 00:23:02.554 "zone_management": false, 00:23:02.554 "zone_append": false, 00:23:02.554 "compare": false, 00:23:02.554 "compare_and_write": false, 00:23:02.554 "abort": false, 00:23:02.554 "seek_hole": false, 00:23:02.554 "seek_data": false, 00:23:02.554 "copy": false, 00:23:02.554 "nvme_iov_md": false 00:23:02.554 }, 00:23:02.554 "memory_domains": [ 00:23:02.554 { 00:23:02.554 "dma_device_id": "system", 00:23:02.554 "dma_device_type": 1 00:23:02.554 }, 00:23:02.554 { 00:23:02.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:02.554 "dma_device_type": 2 00:23:02.554 }, 00:23:02.554 { 00:23:02.554 "dma_device_id": "system", 00:23:02.554 "dma_device_type": 1 00:23:02.554 }, 00:23:02.554 { 00:23:02.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:02.554 "dma_device_type": 2 00:23:02.554 }, 00:23:02.554 { 00:23:02.554 
"dma_device_id": "system", 00:23:02.554 "dma_device_type": 1 00:23:02.554 }, 00:23:02.554 { 00:23:02.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:02.554 "dma_device_type": 2 00:23:02.554 }, 00:23:02.554 { 00:23:02.554 "dma_device_id": "system", 00:23:02.554 "dma_device_type": 1 00:23:02.554 }, 00:23:02.554 { 00:23:02.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:02.554 "dma_device_type": 2 00:23:02.554 } 00:23:02.554 ], 00:23:02.554 "driver_specific": { 00:23:02.554 "raid": { 00:23:02.554 "uuid": "a3ccb77a-cd8c-4751-98c9-2d931a5e6872", 00:23:02.554 "strip_size_kb": 64, 00:23:02.554 "state": "online", 00:23:02.554 "raid_level": "raid0", 00:23:02.554 "superblock": true, 00:23:02.554 "num_base_bdevs": 4, 00:23:02.554 "num_base_bdevs_discovered": 4, 00:23:02.554 "num_base_bdevs_operational": 4, 00:23:02.554 "base_bdevs_list": [ 00:23:02.554 { 00:23:02.554 "name": "pt1", 00:23:02.554 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:02.554 "is_configured": true, 00:23:02.554 "data_offset": 2048, 00:23:02.554 "data_size": 63488 00:23:02.554 }, 00:23:02.554 { 00:23:02.554 "name": "pt2", 00:23:02.554 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:02.554 "is_configured": true, 00:23:02.554 "data_offset": 2048, 00:23:02.554 "data_size": 63488 00:23:02.554 }, 00:23:02.554 { 00:23:02.554 "name": "pt3", 00:23:02.554 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:02.554 "is_configured": true, 00:23:02.554 "data_offset": 2048, 00:23:02.554 "data_size": 63488 00:23:02.554 }, 00:23:02.554 { 00:23:02.554 "name": "pt4", 00:23:02.554 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:02.554 "is_configured": true, 00:23:02.554 "data_offset": 2048, 00:23:02.554 "data_size": 63488 00:23:02.554 } 00:23:02.554 ] 00:23:02.554 } 00:23:02.554 } 00:23:02.554 }' 00:23:02.554 11:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:02.554 11:05:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:23:02.554 pt2 00:23:02.554 pt3 00:23:02.554 pt4' 00:23:02.554 11:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:02.554 11:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:02.554 11:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:02.813 11:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:02.813 "name": "pt1", 00:23:02.813 "aliases": [ 00:23:02.813 "00000000-0000-0000-0000-000000000001" 00:23:02.813 ], 00:23:02.813 "product_name": "passthru", 00:23:02.813 "block_size": 512, 00:23:02.813 "num_blocks": 65536, 00:23:02.813 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:02.813 "assigned_rate_limits": { 00:23:02.813 "rw_ios_per_sec": 0, 00:23:02.813 "rw_mbytes_per_sec": 0, 00:23:02.813 "r_mbytes_per_sec": 0, 00:23:02.813 "w_mbytes_per_sec": 0 00:23:02.813 }, 00:23:02.813 "claimed": true, 00:23:02.813 "claim_type": "exclusive_write", 00:23:02.813 "zoned": false, 00:23:02.813 "supported_io_types": { 00:23:02.813 "read": true, 00:23:02.813 "write": true, 00:23:02.813 "unmap": true, 00:23:02.813 "flush": true, 00:23:02.813 "reset": true, 00:23:02.813 "nvme_admin": false, 00:23:02.813 "nvme_io": false, 00:23:02.813 "nvme_io_md": false, 00:23:02.813 "write_zeroes": true, 00:23:02.813 "zcopy": true, 00:23:02.813 "get_zone_info": false, 00:23:02.813 "zone_management": false, 00:23:02.813 "zone_append": false, 00:23:02.813 "compare": false, 00:23:02.813 "compare_and_write": false, 00:23:02.813 "abort": true, 00:23:02.813 "seek_hole": false, 00:23:02.813 "seek_data": false, 00:23:02.813 "copy": true, 00:23:02.813 "nvme_iov_md": false 00:23:02.813 }, 00:23:02.813 "memory_domains": [ 00:23:02.813 { 00:23:02.813 "dma_device_id": 
"system", 00:23:02.813 "dma_device_type": 1 00:23:02.813 }, 00:23:02.813 { 00:23:02.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:02.813 "dma_device_type": 2 00:23:02.813 } 00:23:02.813 ], 00:23:02.813 "driver_specific": { 00:23:02.813 "passthru": { 00:23:02.813 "name": "pt1", 00:23:02.813 "base_bdev_name": "malloc1" 00:23:02.813 } 00:23:02.813 } 00:23:02.813 }' 00:23:02.813 11:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:02.813 11:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:03.071 11:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:03.071 11:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:03.071 11:05:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:03.071 11:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:03.071 11:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:03.071 11:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:03.071 11:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:03.071 11:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:03.071 11:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:03.328 11:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:03.328 11:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:03.328 11:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:23:03.328 11:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:03.328 11:05:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:03.328 "name": "pt2", 00:23:03.328 "aliases": [ 00:23:03.328 "00000000-0000-0000-0000-000000000002" 00:23:03.328 ], 00:23:03.328 "product_name": "passthru", 00:23:03.328 "block_size": 512, 00:23:03.328 "num_blocks": 65536, 00:23:03.328 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:03.328 "assigned_rate_limits": { 00:23:03.328 "rw_ios_per_sec": 0, 00:23:03.328 "rw_mbytes_per_sec": 0, 00:23:03.328 "r_mbytes_per_sec": 0, 00:23:03.328 "w_mbytes_per_sec": 0 00:23:03.328 }, 00:23:03.328 "claimed": true, 00:23:03.328 "claim_type": "exclusive_write", 00:23:03.328 "zoned": false, 00:23:03.328 "supported_io_types": { 00:23:03.328 "read": true, 00:23:03.328 "write": true, 00:23:03.328 "unmap": true, 00:23:03.328 "flush": true, 00:23:03.328 "reset": true, 00:23:03.328 "nvme_admin": false, 00:23:03.328 "nvme_io": false, 00:23:03.328 "nvme_io_md": false, 00:23:03.328 "write_zeroes": true, 00:23:03.328 "zcopy": true, 00:23:03.328 "get_zone_info": false, 00:23:03.328 "zone_management": false, 00:23:03.328 "zone_append": false, 00:23:03.328 "compare": false, 00:23:03.328 "compare_and_write": false, 00:23:03.328 "abort": true, 00:23:03.328 "seek_hole": false, 00:23:03.328 "seek_data": false, 00:23:03.328 "copy": true, 00:23:03.328 "nvme_iov_md": false 00:23:03.328 }, 00:23:03.328 "memory_domains": [ 00:23:03.328 { 00:23:03.328 "dma_device_id": "system", 00:23:03.328 "dma_device_type": 1 00:23:03.328 }, 00:23:03.328 { 00:23:03.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:03.328 "dma_device_type": 2 00:23:03.328 } 00:23:03.328 ], 00:23:03.328 "driver_specific": { 00:23:03.328 "passthru": { 00:23:03.328 "name": "pt2", 00:23:03.328 "base_bdev_name": "malloc2" 00:23:03.328 } 00:23:03.328 } 00:23:03.328 }' 00:23:03.328 11:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:03.585 11:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:23:03.585 11:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:03.585 11:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:03.585 11:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:03.585 11:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:03.585 11:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:03.585 11:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:03.585 11:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:03.585 11:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:03.843 11:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:03.843 11:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:03.843 11:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:03.843 11:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:23:03.843 11:05:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:04.101 11:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:04.101 "name": "pt3", 00:23:04.101 "aliases": [ 00:23:04.101 "00000000-0000-0000-0000-000000000003" 00:23:04.101 ], 00:23:04.101 "product_name": "passthru", 00:23:04.101 "block_size": 512, 00:23:04.101 "num_blocks": 65536, 00:23:04.101 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:04.101 "assigned_rate_limits": { 00:23:04.101 "rw_ios_per_sec": 0, 00:23:04.101 "rw_mbytes_per_sec": 0, 00:23:04.101 "r_mbytes_per_sec": 0, 00:23:04.101 "w_mbytes_per_sec": 0 00:23:04.101 }, 
00:23:04.101 "claimed": true, 00:23:04.101 "claim_type": "exclusive_write", 00:23:04.101 "zoned": false, 00:23:04.101 "supported_io_types": { 00:23:04.101 "read": true, 00:23:04.101 "write": true, 00:23:04.101 "unmap": true, 00:23:04.101 "flush": true, 00:23:04.101 "reset": true, 00:23:04.101 "nvme_admin": false, 00:23:04.101 "nvme_io": false, 00:23:04.101 "nvme_io_md": false, 00:23:04.101 "write_zeroes": true, 00:23:04.101 "zcopy": true, 00:23:04.101 "get_zone_info": false, 00:23:04.101 "zone_management": false, 00:23:04.101 "zone_append": false, 00:23:04.101 "compare": false, 00:23:04.101 "compare_and_write": false, 00:23:04.101 "abort": true, 00:23:04.101 "seek_hole": false, 00:23:04.101 "seek_data": false, 00:23:04.101 "copy": true, 00:23:04.101 "nvme_iov_md": false 00:23:04.101 }, 00:23:04.101 "memory_domains": [ 00:23:04.101 { 00:23:04.101 "dma_device_id": "system", 00:23:04.101 "dma_device_type": 1 00:23:04.101 }, 00:23:04.101 { 00:23:04.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:04.101 "dma_device_type": 2 00:23:04.101 } 00:23:04.101 ], 00:23:04.101 "driver_specific": { 00:23:04.101 "passthru": { 00:23:04.101 "name": "pt3", 00:23:04.101 "base_bdev_name": "malloc3" 00:23:04.101 } 00:23:04.101 } 00:23:04.101 }' 00:23:04.101 11:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:04.101 11:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:04.101 11:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:04.101 11:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:04.101 11:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:04.101 11:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:04.101 11:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:04.101 11:05:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:04.360 11:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:04.360 11:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:04.360 11:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:04.360 11:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:04.360 11:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:04.360 11:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:23:04.360 11:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:04.619 11:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:04.619 "name": "pt4", 00:23:04.619 "aliases": [ 00:23:04.619 "00000000-0000-0000-0000-000000000004" 00:23:04.619 ], 00:23:04.619 "product_name": "passthru", 00:23:04.619 "block_size": 512, 00:23:04.619 "num_blocks": 65536, 00:23:04.619 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:04.619 "assigned_rate_limits": { 00:23:04.619 "rw_ios_per_sec": 0, 00:23:04.619 "rw_mbytes_per_sec": 0, 00:23:04.619 "r_mbytes_per_sec": 0, 00:23:04.619 "w_mbytes_per_sec": 0 00:23:04.619 }, 00:23:04.619 "claimed": true, 00:23:04.619 "claim_type": "exclusive_write", 00:23:04.619 "zoned": false, 00:23:04.619 "supported_io_types": { 00:23:04.619 "read": true, 00:23:04.619 "write": true, 00:23:04.619 "unmap": true, 00:23:04.619 "flush": true, 00:23:04.619 "reset": true, 00:23:04.619 "nvme_admin": false, 00:23:04.619 "nvme_io": false, 00:23:04.619 "nvme_io_md": false, 00:23:04.619 "write_zeroes": true, 00:23:04.619 "zcopy": true, 00:23:04.619 "get_zone_info": false, 00:23:04.619 "zone_management": false, 00:23:04.619 "zone_append": false, 00:23:04.619 
"compare": false, 00:23:04.619 "compare_and_write": false, 00:23:04.619 "abort": true, 00:23:04.619 "seek_hole": false, 00:23:04.619 "seek_data": false, 00:23:04.619 "copy": true, 00:23:04.619 "nvme_iov_md": false 00:23:04.619 }, 00:23:04.619 "memory_domains": [ 00:23:04.619 { 00:23:04.619 "dma_device_id": "system", 00:23:04.619 "dma_device_type": 1 00:23:04.619 }, 00:23:04.619 { 00:23:04.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:04.619 "dma_device_type": 2 00:23:04.619 } 00:23:04.619 ], 00:23:04.619 "driver_specific": { 00:23:04.619 "passthru": { 00:23:04.619 "name": "pt4", 00:23:04.619 "base_bdev_name": "malloc4" 00:23:04.619 } 00:23:04.619 } 00:23:04.619 }' 00:23:04.619 11:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:04.619 11:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:04.619 11:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:04.619 11:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:04.619 11:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:04.619 11:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:04.619 11:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:04.877 11:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:04.878 11:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:04.878 11:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:04.878 11:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:04.878 11:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:04.878 11:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:04.878 11:05:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:23:05.137 [2024-07-25 11:05:12.103474] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:05.137 11:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=a3ccb77a-cd8c-4751-98c9-2d931a5e6872 00:23:05.137 11:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z a3ccb77a-cd8c-4751-98c9-2d931a5e6872 ']' 00:23:05.137 11:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:05.396 [2024-07-25 11:05:12.331782] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:05.396 [2024-07-25 11:05:12.331811] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:05.396 [2024-07-25 11:05:12.331899] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:05.396 [2024-07-25 11:05:12.331981] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:05.396 [2024-07-25 11:05:12.332000] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:05.396 11:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.396 11:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:23:05.655 11:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:23:05.655 11:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:23:05.655 11:05:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:23:05.655 11:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:05.913 11:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:23:05.913 11:05:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:05.913 11:05:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:23:05.913 11:05:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:06.172 11:05:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:23:06.172 11:05:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:23:06.432 11:05:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:23:06.432 11:05:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:06.690 11:05:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:23:06.690 11:05:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:23:06.690 11:05:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@650 -- # local es=0 00:23:06.690 11:05:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:23:06.690 11:05:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:06.690 11:05:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:06.690 11:05:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:06.690 11:05:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:06.690 11:05:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:06.690 11:05:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:06.690 11:05:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:23:06.690 11:05:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:23:06.690 11:05:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:23:06.948 [2024-07-25 11:05:13.887904] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:06.948 [2024-07-25 11:05:13.890228] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is 
claimed 00:23:06.948 [2024-07-25 11:05:13.890285] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:23:06.948 [2024-07-25 11:05:13.890331] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:23:06.948 [2024-07-25 11:05:13.890384] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:23:06.948 [2024-07-25 11:05:13.890439] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:23:06.948 [2024-07-25 11:05:13.890468] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:23:06.948 [2024-07-25 11:05:13.890502] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:23:06.948 [2024-07-25 11:05:13.890524] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:06.948 [2024-07-25 11:05:13.890542] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:23:06.948 request: 00:23:06.948 { 00:23:06.948 "name": "raid_bdev1", 00:23:06.948 "raid_level": "raid0", 00:23:06.948 "base_bdevs": [ 00:23:06.948 "malloc1", 00:23:06.948 "malloc2", 00:23:06.948 "malloc3", 00:23:06.948 "malloc4" 00:23:06.948 ], 00:23:06.948 "strip_size_kb": 64, 00:23:06.948 "superblock": false, 00:23:06.948 "method": "bdev_raid_create", 00:23:06.948 "req_id": 1 00:23:06.948 } 00:23:06.948 Got JSON-RPC error response 00:23:06.948 response: 00:23:06.948 { 00:23:06.948 "code": -17, 00:23:06.948 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:06.948 } 00:23:06.948 11:05:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:23:06.948 11:05:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 
00:23:06.948 11:05:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:06.948 11:05:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:06.948 11:05:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:06.948 11:05:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:23:07.207 11:05:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:23:07.207 11:05:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:23:07.207 11:05:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:07.465 [2024-07-25 11:05:14.345059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:07.465 [2024-07-25 11:05:14.345129] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:07.465 [2024-07-25 11:05:14.345161] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042680 00:23:07.465 [2024-07-25 11:05:14.345180] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:07.465 [2024-07-25 11:05:14.347934] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:07.465 [2024-07-25 11:05:14.347971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:07.465 [2024-07-25 11:05:14.348059] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:07.465 [2024-07-25 11:05:14.348128] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:07.465 pt1 00:23:07.465 11:05:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:23:07.465 11:05:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:07.465 11:05:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:07.465 11:05:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:07.465 11:05:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:07.465 11:05:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:07.465 11:05:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:07.465 11:05:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:07.465 11:05:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:07.465 11:05:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:07.465 11:05:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.465 11:05:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:07.736 11:05:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:07.736 "name": "raid_bdev1", 00:23:07.736 "uuid": "a3ccb77a-cd8c-4751-98c9-2d931a5e6872", 00:23:07.736 "strip_size_kb": 64, 00:23:07.736 "state": "configuring", 00:23:07.736 "raid_level": "raid0", 00:23:07.736 "superblock": true, 00:23:07.736 "num_base_bdevs": 4, 00:23:07.736 "num_base_bdevs_discovered": 1, 00:23:07.736 "num_base_bdevs_operational": 4, 00:23:07.736 "base_bdevs_list": [ 00:23:07.736 { 00:23:07.736 "name": "pt1", 00:23:07.736 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:07.736 
"is_configured": true, 00:23:07.736 "data_offset": 2048, 00:23:07.736 "data_size": 63488 00:23:07.736 }, 00:23:07.736 { 00:23:07.736 "name": null, 00:23:07.736 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:07.736 "is_configured": false, 00:23:07.736 "data_offset": 2048, 00:23:07.736 "data_size": 63488 00:23:07.736 }, 00:23:07.736 { 00:23:07.736 "name": null, 00:23:07.736 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:07.736 "is_configured": false, 00:23:07.736 "data_offset": 2048, 00:23:07.736 "data_size": 63488 00:23:07.736 }, 00:23:07.736 { 00:23:07.736 "name": null, 00:23:07.736 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:07.736 "is_configured": false, 00:23:07.736 "data_offset": 2048, 00:23:07.736 "data_size": 63488 00:23:07.736 } 00:23:07.736 ] 00:23:07.736 }' 00:23:07.736 11:05:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:07.736 11:05:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.321 11:05:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 4 -gt 2 ']' 00:23:08.321 11:05:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:08.321 [2024-07-25 11:05:15.367848] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:08.321 [2024-07-25 11:05:15.367914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:08.321 [2024-07-25 11:05:15.367939] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80 00:23:08.321 [2024-07-25 11:05:15.367956] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:08.321 [2024-07-25 11:05:15.368526] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:08.321 [2024-07-25 11:05:15.368556] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:08.321 [2024-07-25 11:05:15.368644] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:08.321 [2024-07-25 11:05:15.368677] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:08.321 pt2 00:23:08.321 11:05:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:08.579 [2024-07-25 11:05:15.596506] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:23:08.579 11:05:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:23:08.579 11:05:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:08.579 11:05:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:08.579 11:05:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:08.579 11:05:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:08.579 11:05:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:08.579 11:05:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:08.579 11:05:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:08.579 11:05:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:08.579 11:05:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:08.579 11:05:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:08.579 11:05:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:08.837 11:05:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:08.838 "name": "raid_bdev1", 00:23:08.838 "uuid": "a3ccb77a-cd8c-4751-98c9-2d931a5e6872", 00:23:08.838 "strip_size_kb": 64, 00:23:08.838 "state": "configuring", 00:23:08.838 "raid_level": "raid0", 00:23:08.838 "superblock": true, 00:23:08.838 "num_base_bdevs": 4, 00:23:08.838 "num_base_bdevs_discovered": 1, 00:23:08.838 "num_base_bdevs_operational": 4, 00:23:08.838 "base_bdevs_list": [ 00:23:08.838 { 00:23:08.838 "name": "pt1", 00:23:08.838 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:08.838 "is_configured": true, 00:23:08.838 "data_offset": 2048, 00:23:08.838 "data_size": 63488 00:23:08.838 }, 00:23:08.838 { 00:23:08.838 "name": null, 00:23:08.838 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:08.838 "is_configured": false, 00:23:08.838 "data_offset": 2048, 00:23:08.838 "data_size": 63488 00:23:08.838 }, 00:23:08.838 { 00:23:08.838 "name": null, 00:23:08.838 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:08.838 "is_configured": false, 00:23:08.838 "data_offset": 2048, 00:23:08.838 "data_size": 63488 00:23:08.838 }, 00:23:08.838 { 00:23:08.838 "name": null, 00:23:08.838 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:08.838 "is_configured": false, 00:23:08.838 "data_offset": 2048, 00:23:08.838 "data_size": 63488 00:23:08.838 } 00:23:08.838 ] 00:23:08.838 }' 00:23:08.838 11:05:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:08.838 11:05:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.404 11:05:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:23:09.404 11:05:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:23:09.404 11:05:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:09.663 [2024-07-25 11:05:16.627266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:09.663 [2024-07-25 11:05:16.627331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:09.663 [2024-07-25 11:05:16.627358] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042f80 00:23:09.663 [2024-07-25 11:05:16.627374] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:09.663 [2024-07-25 11:05:16.627926] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:09.663 [2024-07-25 11:05:16.627952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:09.663 [2024-07-25 11:05:16.628046] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:09.663 [2024-07-25 11:05:16.628072] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:09.663 pt2 00:23:09.663 11:05:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:23:09.663 11:05:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:23:09.663 11:05:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:09.921 [2024-07-25 11:05:16.855919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:09.921 [2024-07-25 11:05:16.855978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:09.921 [2024-07-25 11:05:16.856011] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000043280 00:23:09.921 [2024-07-25 11:05:16.856026] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:09.921 [2024-07-25 11:05:16.856608] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:09.921 [2024-07-25 11:05:16.856634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:09.921 [2024-07-25 11:05:16.856725] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:09.921 [2024-07-25 11:05:16.856752] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:09.921 pt3 00:23:09.921 11:05:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:23:09.921 11:05:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:23:09.921 11:05:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:10.180 [2024-07-25 11:05:17.080545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:10.180 [2024-07-25 11:05:17.080602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:10.180 [2024-07-25 11:05:17.080629] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043580 00:23:10.180 [2024-07-25 11:05:17.080644] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:10.180 [2024-07-25 11:05:17.081186] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:10.180 [2024-07-25 11:05:17.081210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:10.180 [2024-07-25 11:05:17.081306] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:23:10.180 [2024-07-25 11:05:17.081334] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:10.180 [2024-07-25 11:05:17.081542] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:10.180 [2024-07-25 11:05:17.081556] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:23:10.180 [2024-07-25 11:05:17.081895] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:23:10.180 [2024-07-25 11:05:17.082113] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:10.180 [2024-07-25 11:05:17.082132] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:23:10.180 [2024-07-25 11:05:17.082318] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:10.180 pt4 00:23:10.180 11:05:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:23:10.180 11:05:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:23:10.180 11:05:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:23:10.180 11:05:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:10.180 11:05:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:10.180 11:05:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:10.180 11:05:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:10.180 11:05:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:10.180 11:05:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:10.180 11:05:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:10.180 11:05:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:10.180 11:05:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:10.180 11:05:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:10.180 11:05:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:10.439 11:05:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:10.439 "name": "raid_bdev1", 00:23:10.439 "uuid": "a3ccb77a-cd8c-4751-98c9-2d931a5e6872", 00:23:10.439 "strip_size_kb": 64, 00:23:10.439 "state": "online", 00:23:10.439 "raid_level": "raid0", 00:23:10.439 "superblock": true, 00:23:10.439 "num_base_bdevs": 4, 00:23:10.439 "num_base_bdevs_discovered": 4, 00:23:10.439 "num_base_bdevs_operational": 4, 00:23:10.439 "base_bdevs_list": [ 00:23:10.439 { 00:23:10.439 "name": "pt1", 00:23:10.439 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:10.439 "is_configured": true, 00:23:10.439 "data_offset": 2048, 00:23:10.439 "data_size": 63488 00:23:10.439 }, 00:23:10.439 { 00:23:10.439 "name": "pt2", 00:23:10.439 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:10.439 "is_configured": true, 00:23:10.439 "data_offset": 2048, 00:23:10.439 "data_size": 63488 00:23:10.439 }, 00:23:10.439 { 00:23:10.439 "name": "pt3", 00:23:10.439 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:10.439 "is_configured": true, 00:23:10.439 "data_offset": 2048, 00:23:10.439 "data_size": 63488 00:23:10.439 }, 00:23:10.439 { 00:23:10.439 "name": "pt4", 00:23:10.439 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:10.439 "is_configured": true, 00:23:10.439 "data_offset": 2048, 00:23:10.439 "data_size": 63488 00:23:10.439 } 00:23:10.439 ] 00:23:10.439 }' 00:23:10.439 11:05:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- 
# xtrace_disable 00:23:10.439 11:05:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:11.006 11:05:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:23:11.006 11:05:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:23:11.006 11:05:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:11.006 11:05:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:11.006 11:05:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:11.006 11:05:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:11.006 11:05:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:11.006 11:05:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:11.006 [2024-07-25 11:05:18.123766] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:11.265 11:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:11.265 "name": "raid_bdev1", 00:23:11.265 "aliases": [ 00:23:11.265 "a3ccb77a-cd8c-4751-98c9-2d931a5e6872" 00:23:11.265 ], 00:23:11.265 "product_name": "Raid Volume", 00:23:11.265 "block_size": 512, 00:23:11.265 "num_blocks": 253952, 00:23:11.265 "uuid": "a3ccb77a-cd8c-4751-98c9-2d931a5e6872", 00:23:11.265 "assigned_rate_limits": { 00:23:11.265 "rw_ios_per_sec": 0, 00:23:11.265 "rw_mbytes_per_sec": 0, 00:23:11.265 "r_mbytes_per_sec": 0, 00:23:11.265 "w_mbytes_per_sec": 0 00:23:11.265 }, 00:23:11.265 "claimed": false, 00:23:11.265 "zoned": false, 00:23:11.265 "supported_io_types": { 00:23:11.265 "read": true, 00:23:11.265 "write": true, 00:23:11.265 "unmap": true, 00:23:11.265 "flush": true, 00:23:11.265 
"reset": true, 00:23:11.265 "nvme_admin": false, 00:23:11.265 "nvme_io": false, 00:23:11.265 "nvme_io_md": false, 00:23:11.265 "write_zeroes": true, 00:23:11.265 "zcopy": false, 00:23:11.265 "get_zone_info": false, 00:23:11.265 "zone_management": false, 00:23:11.265 "zone_append": false, 00:23:11.265 "compare": false, 00:23:11.265 "compare_and_write": false, 00:23:11.265 "abort": false, 00:23:11.265 "seek_hole": false, 00:23:11.265 "seek_data": false, 00:23:11.265 "copy": false, 00:23:11.265 "nvme_iov_md": false 00:23:11.265 }, 00:23:11.265 "memory_domains": [ 00:23:11.265 { 00:23:11.265 "dma_device_id": "system", 00:23:11.265 "dma_device_type": 1 00:23:11.265 }, 00:23:11.265 { 00:23:11.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:11.265 "dma_device_type": 2 00:23:11.265 }, 00:23:11.265 { 00:23:11.265 "dma_device_id": "system", 00:23:11.265 "dma_device_type": 1 00:23:11.265 }, 00:23:11.265 { 00:23:11.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:11.265 "dma_device_type": 2 00:23:11.265 }, 00:23:11.265 { 00:23:11.265 "dma_device_id": "system", 00:23:11.265 "dma_device_type": 1 00:23:11.265 }, 00:23:11.265 { 00:23:11.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:11.265 "dma_device_type": 2 00:23:11.265 }, 00:23:11.265 { 00:23:11.265 "dma_device_id": "system", 00:23:11.265 "dma_device_type": 1 00:23:11.265 }, 00:23:11.265 { 00:23:11.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:11.265 "dma_device_type": 2 00:23:11.265 } 00:23:11.265 ], 00:23:11.265 "driver_specific": { 00:23:11.265 "raid": { 00:23:11.265 "uuid": "a3ccb77a-cd8c-4751-98c9-2d931a5e6872", 00:23:11.265 "strip_size_kb": 64, 00:23:11.265 "state": "online", 00:23:11.265 "raid_level": "raid0", 00:23:11.265 "superblock": true, 00:23:11.265 "num_base_bdevs": 4, 00:23:11.265 "num_base_bdevs_discovered": 4, 00:23:11.265 "num_base_bdevs_operational": 4, 00:23:11.265 "base_bdevs_list": [ 00:23:11.265 { 00:23:11.265 "name": "pt1", 00:23:11.265 "uuid": "00000000-0000-0000-0000-000000000001", 
00:23:11.265 "is_configured": true, 00:23:11.265 "data_offset": 2048, 00:23:11.265 "data_size": 63488 00:23:11.265 }, 00:23:11.265 { 00:23:11.265 "name": "pt2", 00:23:11.265 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:11.265 "is_configured": true, 00:23:11.265 "data_offset": 2048, 00:23:11.265 "data_size": 63488 00:23:11.265 }, 00:23:11.265 { 00:23:11.265 "name": "pt3", 00:23:11.265 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:11.265 "is_configured": true, 00:23:11.265 "data_offset": 2048, 00:23:11.265 "data_size": 63488 00:23:11.265 }, 00:23:11.265 { 00:23:11.265 "name": "pt4", 00:23:11.265 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:11.265 "is_configured": true, 00:23:11.265 "data_offset": 2048, 00:23:11.265 "data_size": 63488 00:23:11.265 } 00:23:11.265 ] 00:23:11.265 } 00:23:11.265 } 00:23:11.265 }' 00:23:11.265 11:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:11.265 11:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:23:11.265 pt2 00:23:11.265 pt3 00:23:11.265 pt4' 00:23:11.265 11:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:11.265 11:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:11.265 11:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:11.524 11:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:11.524 "name": "pt1", 00:23:11.524 "aliases": [ 00:23:11.524 "00000000-0000-0000-0000-000000000001" 00:23:11.524 ], 00:23:11.524 "product_name": "passthru", 00:23:11.524 "block_size": 512, 00:23:11.524 "num_blocks": 65536, 00:23:11.524 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:11.524 "assigned_rate_limits": { 
00:23:11.524 "rw_ios_per_sec": 0, 00:23:11.524 "rw_mbytes_per_sec": 0, 00:23:11.524 "r_mbytes_per_sec": 0, 00:23:11.524 "w_mbytes_per_sec": 0 00:23:11.524 }, 00:23:11.524 "claimed": true, 00:23:11.524 "claim_type": "exclusive_write", 00:23:11.524 "zoned": false, 00:23:11.524 "supported_io_types": { 00:23:11.524 "read": true, 00:23:11.524 "write": true, 00:23:11.524 "unmap": true, 00:23:11.524 "flush": true, 00:23:11.524 "reset": true, 00:23:11.524 "nvme_admin": false, 00:23:11.524 "nvme_io": false, 00:23:11.524 "nvme_io_md": false, 00:23:11.524 "write_zeroes": true, 00:23:11.524 "zcopy": true, 00:23:11.524 "get_zone_info": false, 00:23:11.524 "zone_management": false, 00:23:11.524 "zone_append": false, 00:23:11.524 "compare": false, 00:23:11.524 "compare_and_write": false, 00:23:11.524 "abort": true, 00:23:11.524 "seek_hole": false, 00:23:11.524 "seek_data": false, 00:23:11.524 "copy": true, 00:23:11.524 "nvme_iov_md": false 00:23:11.524 }, 00:23:11.524 "memory_domains": [ 00:23:11.524 { 00:23:11.524 "dma_device_id": "system", 00:23:11.524 "dma_device_type": 1 00:23:11.524 }, 00:23:11.524 { 00:23:11.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:11.524 "dma_device_type": 2 00:23:11.524 } 00:23:11.524 ], 00:23:11.524 "driver_specific": { 00:23:11.524 "passthru": { 00:23:11.524 "name": "pt1", 00:23:11.524 "base_bdev_name": "malloc1" 00:23:11.524 } 00:23:11.524 } 00:23:11.524 }' 00:23:11.524 11:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:11.524 11:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:11.524 11:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:11.524 11:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:11.524 11:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:11.524 11:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 
00:23:11.524 11:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:11.524 11:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:11.525 11:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:11.782 11:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:11.782 11:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:11.782 11:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:11.782 11:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:11.782 11:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:23:11.782 11:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:12.041 11:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:12.041 "name": "pt2", 00:23:12.041 "aliases": [ 00:23:12.041 "00000000-0000-0000-0000-000000000002" 00:23:12.041 ], 00:23:12.041 "product_name": "passthru", 00:23:12.041 "block_size": 512, 00:23:12.041 "num_blocks": 65536, 00:23:12.041 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:12.041 "assigned_rate_limits": { 00:23:12.041 "rw_ios_per_sec": 0, 00:23:12.041 "rw_mbytes_per_sec": 0, 00:23:12.041 "r_mbytes_per_sec": 0, 00:23:12.041 "w_mbytes_per_sec": 0 00:23:12.041 }, 00:23:12.041 "claimed": true, 00:23:12.041 "claim_type": "exclusive_write", 00:23:12.041 "zoned": false, 00:23:12.041 "supported_io_types": { 00:23:12.041 "read": true, 00:23:12.041 "write": true, 00:23:12.041 "unmap": true, 00:23:12.041 "flush": true, 00:23:12.041 "reset": true, 00:23:12.041 "nvme_admin": false, 00:23:12.041 "nvme_io": false, 00:23:12.041 "nvme_io_md": false, 00:23:12.041 "write_zeroes": true, 
00:23:12.041 "zcopy": true, 00:23:12.041 "get_zone_info": false, 00:23:12.041 "zone_management": false, 00:23:12.041 "zone_append": false, 00:23:12.041 "compare": false, 00:23:12.041 "compare_and_write": false, 00:23:12.041 "abort": true, 00:23:12.041 "seek_hole": false, 00:23:12.041 "seek_data": false, 00:23:12.041 "copy": true, 00:23:12.041 "nvme_iov_md": false 00:23:12.041 }, 00:23:12.041 "memory_domains": [ 00:23:12.041 { 00:23:12.041 "dma_device_id": "system", 00:23:12.041 "dma_device_type": 1 00:23:12.041 }, 00:23:12.041 { 00:23:12.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:12.041 "dma_device_type": 2 00:23:12.041 } 00:23:12.041 ], 00:23:12.041 "driver_specific": { 00:23:12.041 "passthru": { 00:23:12.041 "name": "pt2", 00:23:12.041 "base_bdev_name": "malloc2" 00:23:12.041 } 00:23:12.041 } 00:23:12.041 }' 00:23:12.041 11:05:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:12.041 11:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:12.041 11:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:12.041 11:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:12.041 11:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:12.041 11:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:12.041 11:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:12.299 11:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:12.299 11:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:12.299 11:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:12.300 11:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:12.300 11:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # 
[[ null == null ]] 00:23:12.300 11:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:12.300 11:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:23:12.300 11:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:12.558 11:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:12.558 "name": "pt3", 00:23:12.558 "aliases": [ 00:23:12.558 "00000000-0000-0000-0000-000000000003" 00:23:12.558 ], 00:23:12.558 "product_name": "passthru", 00:23:12.558 "block_size": 512, 00:23:12.558 "num_blocks": 65536, 00:23:12.558 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:12.558 "assigned_rate_limits": { 00:23:12.558 "rw_ios_per_sec": 0, 00:23:12.558 "rw_mbytes_per_sec": 0, 00:23:12.558 "r_mbytes_per_sec": 0, 00:23:12.558 "w_mbytes_per_sec": 0 00:23:12.558 }, 00:23:12.558 "claimed": true, 00:23:12.558 "claim_type": "exclusive_write", 00:23:12.558 "zoned": false, 00:23:12.558 "supported_io_types": { 00:23:12.558 "read": true, 00:23:12.558 "write": true, 00:23:12.558 "unmap": true, 00:23:12.558 "flush": true, 00:23:12.558 "reset": true, 00:23:12.558 "nvme_admin": false, 00:23:12.558 "nvme_io": false, 00:23:12.558 "nvme_io_md": false, 00:23:12.558 "write_zeroes": true, 00:23:12.558 "zcopy": true, 00:23:12.558 "get_zone_info": false, 00:23:12.558 "zone_management": false, 00:23:12.558 "zone_append": false, 00:23:12.558 "compare": false, 00:23:12.558 "compare_and_write": false, 00:23:12.558 "abort": true, 00:23:12.558 "seek_hole": false, 00:23:12.558 "seek_data": false, 00:23:12.558 "copy": true, 00:23:12.558 "nvme_iov_md": false 00:23:12.558 }, 00:23:12.558 "memory_domains": [ 00:23:12.558 { 00:23:12.558 "dma_device_id": "system", 00:23:12.558 "dma_device_type": 1 00:23:12.558 }, 00:23:12.558 { 00:23:12.558 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:23:12.558 "dma_device_type": 2 00:23:12.558 } 00:23:12.558 ], 00:23:12.558 "driver_specific": { 00:23:12.558 "passthru": { 00:23:12.558 "name": "pt3", 00:23:12.558 "base_bdev_name": "malloc3" 00:23:12.558 } 00:23:12.558 } 00:23:12.558 }' 00:23:12.558 11:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:12.558 11:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:12.558 11:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:12.558 11:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:12.817 11:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:12.817 11:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:12.817 11:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:12.817 11:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:12.817 11:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:12.817 11:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:12.817 11:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:12.817 11:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:12.817 11:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:12.817 11:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:23:12.817 11:05:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:13.075 11:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:13.075 "name": "pt4", 00:23:13.075 
"aliases": [ 00:23:13.075 "00000000-0000-0000-0000-000000000004" 00:23:13.075 ], 00:23:13.075 "product_name": "passthru", 00:23:13.075 "block_size": 512, 00:23:13.075 "num_blocks": 65536, 00:23:13.075 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:13.075 "assigned_rate_limits": { 00:23:13.075 "rw_ios_per_sec": 0, 00:23:13.075 "rw_mbytes_per_sec": 0, 00:23:13.076 "r_mbytes_per_sec": 0, 00:23:13.076 "w_mbytes_per_sec": 0 00:23:13.076 }, 00:23:13.076 "claimed": true, 00:23:13.076 "claim_type": "exclusive_write", 00:23:13.076 "zoned": false, 00:23:13.076 "supported_io_types": { 00:23:13.076 "read": true, 00:23:13.076 "write": true, 00:23:13.076 "unmap": true, 00:23:13.076 "flush": true, 00:23:13.076 "reset": true, 00:23:13.076 "nvme_admin": false, 00:23:13.076 "nvme_io": false, 00:23:13.076 "nvme_io_md": false, 00:23:13.076 "write_zeroes": true, 00:23:13.076 "zcopy": true, 00:23:13.076 "get_zone_info": false, 00:23:13.076 "zone_management": false, 00:23:13.076 "zone_append": false, 00:23:13.076 "compare": false, 00:23:13.076 "compare_and_write": false, 00:23:13.076 "abort": true, 00:23:13.076 "seek_hole": false, 00:23:13.076 "seek_data": false, 00:23:13.076 "copy": true, 00:23:13.076 "nvme_iov_md": false 00:23:13.076 }, 00:23:13.076 "memory_domains": [ 00:23:13.076 { 00:23:13.076 "dma_device_id": "system", 00:23:13.076 "dma_device_type": 1 00:23:13.076 }, 00:23:13.076 { 00:23:13.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:13.076 "dma_device_type": 2 00:23:13.076 } 00:23:13.076 ], 00:23:13.076 "driver_specific": { 00:23:13.076 "passthru": { 00:23:13.076 "name": "pt4", 00:23:13.076 "base_bdev_name": "malloc4" 00:23:13.076 } 00:23:13.076 } 00:23:13.076 }' 00:23:13.076 11:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:13.076 11:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:13.334 11:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
00:23:13.334 11:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:13.334 11:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:13.334 11:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:13.334 11:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:13.334 11:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:13.334 11:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:13.334 11:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:13.334 11:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:13.592 11:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:13.592 11:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:13.592 11:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:23:13.592 [2024-07-25 11:05:20.690722] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:13.592 11:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' a3ccb77a-cd8c-4751-98c9-2d931a5e6872 '!=' a3ccb77a-cd8c-4751-98c9-2d931a5e6872 ']' 00:23:13.592 11:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid0 00:23:13.592 11:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:13.592 11:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:23:13.592 11:05:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 3647449 00:23:13.851 11:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 3647449 ']' 
00:23:13.851 11:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 3647449 00:23:13.851 11:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:23:13.851 11:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:13.851 11:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3647449 00:23:13.851 11:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:13.851 11:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:13.851 11:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3647449' 00:23:13.851 killing process with pid 3647449 00:23:13.851 11:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 3647449 00:23:13.851 [2024-07-25 11:05:20.767483] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:13.851 [2024-07-25 11:05:20.767574] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:13.851 11:05:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 3647449 00:23:13.851 [2024-07-25 11:05:20.767657] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:13.851 [2024-07-25 11:05:20.767673] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:23:14.110 [2024-07-25 11:05:21.212742] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:16.012 11:05:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:23:16.012 00:23:16.012 real 0m17.722s 00:23:16.012 user 0m29.958s 00:23:16.012 sys 0m2.977s 00:23:16.012 11:05:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:16.012 
11:05:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:16.012 ************************************ 00:23:16.012 END TEST raid_superblock_test 00:23:16.012 ************************************ 00:23:16.012 11:05:22 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:23:16.012 11:05:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:23:16.012 11:05:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:16.012 11:05:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:16.012 ************************************ 00:23:16.012 START TEST raid_read_error_test 00:23:16.012 ************************************ 00:23:16.012 11:05:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:23:16.012 11:05:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:23:16.012 11:05:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:23:16.012 11:05:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:23:16.012 11:05:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:23:16.012 11:05:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:23:16.012 11:05:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:23:16.012 11:05:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:23:16.012 11:05:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:23:16.012 11:05:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:23:16.012 11:05:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:23:16.012 11:05:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 
00:23:16.012 11:05:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:23:16.012 11:05:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:23:16.012 11:05:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:23:16.012 11:05:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:23:16.012 11:05:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:23:16.012 11:05:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:23:16.012 11:05:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:16.012 11:05:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:23:16.012 11:05:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:23:16.012 11:05:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:23:16.012 11:05:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:23:16.012 11:05:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:23:16.012 11:05:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:23:16.012 11:05:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:23:16.012 11:05:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:23:16.012 11:05:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:23:16.012 11:05:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:23:16.012 11:05:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.5ZMiwhaITv 00:23:16.012 11:05:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=3650742 
00:23:16.012 11:05:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 3650742 /var/tmp/spdk-raid.sock 00:23:16.012 11:05:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:23:16.012 11:05:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 3650742 ']' 00:23:16.012 11:05:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:16.012 11:05:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:16.012 11:05:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:16.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:16.012 11:05:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:16.012 11:05:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:16.012 [2024-07-25 11:05:23.079437] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:23:16.012 [2024-07-25 11:05:23.079560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3650742 ] 00:23:16.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.271 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:16.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.271 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:16.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.271 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:16.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.271 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:16.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.271 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:16.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.271 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:16.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.271 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:16.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.271 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:16.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.271 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:16.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.271 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:16.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.271 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:16.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.271 EAL: Requested device 0000:3d:02.3 cannot be used 
00:23:16.271 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.271 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:16.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.272 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:16.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.272 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:16.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.272 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:16.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.272 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:16.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.272 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:16.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.272 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:16.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.272 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:16.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.272 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:16.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.272 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:16.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.272 EAL: Requested device 0000:3f:01.6 cannot be used 00:23:16.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.272 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:16.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.272 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:16.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.272 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:16.272 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.272 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:16.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.272 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:16.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.272 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:16.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.272 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:16.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.272 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:16.272 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:16.272 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:16.272 [2024-07-25 11:05:23.305388] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.531 [2024-07-25 11:05:23.569031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.099 [2024-07-25 11:05:23.912470] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:17.099 [2024-07-25 11:05:23.912507] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:17.099 11:05:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:17.099 11:05:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:23:17.099 11:05:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:23:17.099 11:05:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:17.357 BaseBdev1_malloc 00:23:17.357 11:05:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:23:17.615 true 00:23:17.615 11:05:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:23:17.874 [2024-07-25 11:05:24.813261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:23:17.874 [2024-07-25 11:05:24.813322] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:17.874 [2024-07-25 11:05:24.813349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:23:17.874 [2024-07-25 11:05:24.813371] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:17.874 [2024-07-25 11:05:24.816158] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:17.874 [2024-07-25 11:05:24.816196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:17.874 BaseBdev1 00:23:17.874 11:05:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:23:17.874 11:05:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:18.132 BaseBdev2_malloc 00:23:18.132 11:05:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:23:18.391 true 00:23:18.391 11:05:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:23:18.649 [2024-07-25 11:05:25.529258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:23:18.649 [2024-07-25 11:05:25.529319] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:18.649 [2024-07-25 11:05:25.529346] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:23:18.649 [2024-07-25 11:05:25.529368] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:18.649 [2024-07-25 11:05:25.532180] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:18.650 [2024-07-25 11:05:25.532219] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:18.650 BaseBdev2 00:23:18.650 11:05:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:23:18.650 11:05:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:18.908 BaseBdev3_malloc 00:23:18.908 11:05:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:23:19.167 true 00:23:19.167 11:05:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:23:19.167 [2024-07-25 11:05:26.261627] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:23:19.167 [2024-07-25 11:05:26.261687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:19.167 [2024-07-25 11:05:26.261715] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:23:19.167 [2024-07-25 11:05:26.261733] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:19.167 [2024-07-25 
11:05:26.264530] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:19.167 [2024-07-25 11:05:26.264567] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:19.167 BaseBdev3 00:23:19.167 11:05:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:23:19.167 11:05:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:19.425 BaseBdev4_malloc 00:23:19.684 11:05:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:23:19.684 true 00:23:19.684 11:05:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:23:19.943 [2024-07-25 11:05:26.981148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:23:19.943 [2024-07-25 11:05:26.981212] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:19.943 [2024-07-25 11:05:26.981240] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042680 00:23:19.943 [2024-07-25 11:05:26.981258] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:19.943 [2024-07-25 11:05:26.984053] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:19.943 [2024-07-25 11:05:26.984092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:19.943 BaseBdev4 00:23:19.943 11:05:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:23:20.201 [2024-07-25 11:05:27.197758] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:20.201 [2024-07-25 11:05:27.200118] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:20.201 [2024-07-25 11:05:27.200225] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:20.201 [2024-07-25 11:05:27.200305] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:20.201 [2024-07-25 11:05:27.200568] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:23:20.201 [2024-07-25 11:05:27.200587] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:23:20.201 [2024-07-25 11:05:27.200957] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:23:20.201 [2024-07-25 11:05:27.201229] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:23:20.201 [2024-07-25 11:05:27.201245] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:23:20.201 [2024-07-25 11:05:27.201479] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:20.201 11:05:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:23:20.202 11:05:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:20.202 11:05:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:20.202 11:05:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:20.202 11:05:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:20.202 11:05:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:20.202 11:05:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:20.202 11:05:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:20.202 11:05:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:20.202 11:05:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:20.202 11:05:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:20.202 11:05:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:20.460 11:05:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:20.460 "name": "raid_bdev1", 00:23:20.460 "uuid": "a3dd0f7f-5660-44bc-ad74-080ef01e3ea7", 00:23:20.460 "strip_size_kb": 64, 00:23:20.460 "state": "online", 00:23:20.460 "raid_level": "raid0", 00:23:20.460 "superblock": true, 00:23:20.460 "num_base_bdevs": 4, 00:23:20.460 "num_base_bdevs_discovered": 4, 00:23:20.460 "num_base_bdevs_operational": 4, 00:23:20.460 "base_bdevs_list": [ 00:23:20.460 { 00:23:20.460 "name": "BaseBdev1", 00:23:20.460 "uuid": "897ae291-bbd3-5ff4-990b-a571df34f97b", 00:23:20.460 "is_configured": true, 00:23:20.460 "data_offset": 2048, 00:23:20.460 "data_size": 63488 00:23:20.460 }, 00:23:20.460 { 00:23:20.460 "name": "BaseBdev2", 00:23:20.460 "uuid": "76a94bdc-8803-5827-82f0-2187b6fd2501", 00:23:20.460 "is_configured": true, 00:23:20.460 "data_offset": 2048, 00:23:20.460 "data_size": 63488 00:23:20.460 }, 00:23:20.460 { 00:23:20.460 "name": "BaseBdev3", 00:23:20.460 "uuid": "c5397b6f-e299-5d7e-bc93-76497d413b11", 00:23:20.460 "is_configured": true, 00:23:20.460 "data_offset": 2048, 00:23:20.460 "data_size": 63488 
00:23:20.460 }, 00:23:20.460 { 00:23:20.460 "name": "BaseBdev4", 00:23:20.460 "uuid": "ea92e857-ad87-5bcc-9fba-467ab2fec334", 00:23:20.460 "is_configured": true, 00:23:20.460 "data_offset": 2048, 00:23:20.460 "data_size": 63488 00:23:20.460 } 00:23:20.460 ] 00:23:20.460 }' 00:23:20.460 11:05:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:20.460 11:05:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:21.026 11:05:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:23:21.026 11:05:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:21.026 [2024-07-25 11:05:28.126127] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:23:22.004 11:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:23:22.263 11:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:23:22.263 11:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:23:22.263 11:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:23:22.263 11:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:23:22.263 11:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:22.263 11:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:22.263 11:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:22.263 11:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- 
# local strip_size=64 00:23:22.263 11:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:22.263 11:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:22.263 11:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:22.263 11:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:22.263 11:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:22.263 11:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.263 11:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:22.521 11:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:22.521 "name": "raid_bdev1", 00:23:22.521 "uuid": "a3dd0f7f-5660-44bc-ad74-080ef01e3ea7", 00:23:22.521 "strip_size_kb": 64, 00:23:22.521 "state": "online", 00:23:22.521 "raid_level": "raid0", 00:23:22.521 "superblock": true, 00:23:22.521 "num_base_bdevs": 4, 00:23:22.521 "num_base_bdevs_discovered": 4, 00:23:22.521 "num_base_bdevs_operational": 4, 00:23:22.521 "base_bdevs_list": [ 00:23:22.521 { 00:23:22.522 "name": "BaseBdev1", 00:23:22.522 "uuid": "897ae291-bbd3-5ff4-990b-a571df34f97b", 00:23:22.522 "is_configured": true, 00:23:22.522 "data_offset": 2048, 00:23:22.522 "data_size": 63488 00:23:22.522 }, 00:23:22.522 { 00:23:22.522 "name": "BaseBdev2", 00:23:22.522 "uuid": "76a94bdc-8803-5827-82f0-2187b6fd2501", 00:23:22.522 "is_configured": true, 00:23:22.522 "data_offset": 2048, 00:23:22.522 "data_size": 63488 00:23:22.522 }, 00:23:22.522 { 00:23:22.522 "name": "BaseBdev3", 00:23:22.522 "uuid": "c5397b6f-e299-5d7e-bc93-76497d413b11", 00:23:22.522 "is_configured": true, 00:23:22.522 
"data_offset": 2048, 00:23:22.522 "data_size": 63488 00:23:22.522 }, 00:23:22.522 { 00:23:22.522 "name": "BaseBdev4", 00:23:22.522 "uuid": "ea92e857-ad87-5bcc-9fba-467ab2fec334", 00:23:22.522 "is_configured": true, 00:23:22.522 "data_offset": 2048, 00:23:22.522 "data_size": 63488 00:23:22.522 } 00:23:22.522 ] 00:23:22.522 }' 00:23:22.522 11:05:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:22.522 11:05:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:23.088 11:05:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:23.347 [2024-07-25 11:05:30.274249] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:23.347 [2024-07-25 11:05:30.274292] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:23.347 [2024-07-25 11:05:30.277533] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:23.347 [2024-07-25 11:05:30.277591] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:23.347 [2024-07-25 11:05:30.277643] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:23.347 [2024-07-25 11:05:30.277668] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:23:23.347 0 00:23:23.347 11:05:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 3650742 00:23:23.347 11:05:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 3650742 ']' 00:23:23.347 11:05:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 3650742 00:23:23.347 11:05:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:23:23.347 11:05:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:23.347 11:05:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3650742 00:23:23.347 11:05:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:23.347 11:05:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:23.347 11:05:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3650742' 00:23:23.347 killing process with pid 3650742 00:23:23.347 11:05:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 3650742 00:23:23.347 [2024-07-25 11:05:30.353158] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:23.347 11:05:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 3650742 00:23:23.606 [2024-07-25 11:05:30.701840] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:25.508 11:05:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:23:25.508 11:05:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.5ZMiwhaITv 00:23:25.508 11:05:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:23:25.508 11:05:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.47 00:23:25.508 11:05:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:23:25.508 11:05:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:25.508 11:05:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:23:25.508 11:05:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.47 != \0\.\0\0 ]] 00:23:25.508 00:23:25.508 real 0m9.492s 00:23:25.508 user 0m13.630s 00:23:25.508 sys 0m1.461s 00:23:25.508 11:05:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:23:25.508 11:05:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:25.508 ************************************ 00:23:25.508 END TEST raid_read_error_test 00:23:25.508 ************************************ 00:23:25.508 11:05:32 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:23:25.508 11:05:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:23:25.508 11:05:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:25.508 11:05:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:25.508 ************************************ 00:23:25.508 START TEST raid_write_error_test 00:23:25.508 ************************************ 00:23:25.508 11:05:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:23:25.508 11:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid0 00:23:25.508 11:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:23:25.508 11:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:23:25.508 11:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:23:25.508 11:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:23:25.508 11:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:23:25.508 11:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:23:25.508 11:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:23:25.508 11:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:23:25.508 11:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:23:25.508 11:05:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:23:25.508 11:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:23:25.508 11:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:23:25.508 11:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:23:25.508 11:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:23:25.508 11:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:23:25.508 11:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:23:25.508 11:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:25.508 11:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:23:25.509 11:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:23:25.509 11:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:23:25.509 11:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:23:25.509 11:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:23:25.509 11:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:23:25.509 11:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid0 '!=' raid1 ']' 00:23:25.509 11:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:23:25.509 11:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:23:25.509 11:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:23:25.509 11:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.pjmE3A8W0l 
00:23:25.509 11:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=3652429 00:23:25.509 11:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 3652429 /var/tmp/spdk-raid.sock 00:23:25.509 11:05:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:23:25.509 11:05:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 3652429 ']' 00:23:25.509 11:05:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:25.509 11:05:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:25.509 11:05:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:25.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:25.509 11:05:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:25.509 11:05:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:25.768 [2024-07-25 11:05:32.758291] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:23:25.768 [2024-07-25 11:05:32.758551] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3652429 ] 00:23:26.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:26.026 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:26.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:26.026 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:26.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:26.026 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:26.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:26.026 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:26.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:26.026 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:26.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:26.026 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:26.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:26.027 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:26.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:26.027 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:26.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:26.027 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:26.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:26.027 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:26.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:26.027 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:26.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:26.027 EAL: Requested device 0000:3d:02.3 cannot be used 
00:23:26.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:26.027 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:26.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:26.027 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:26.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:26.027 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:26.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:26.027 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:26.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:26.027 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:26.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:26.027 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:26.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:26.027 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:26.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:26.027 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:26.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:26.027 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:26.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:26.027 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:26.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:26.027 EAL: Requested device 0000:3f:01.6 cannot be used 00:23:26.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:26.027 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:26.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:26.027 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:26.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:26.027 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:26.027 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:26.027 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:26.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:26.027 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:26.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:26.027 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:26.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:26.027 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:26.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:26.027 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:26.027 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:26.027 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:26.027 [2024-07-25 11:05:33.124985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.595 [2024-07-25 11:05:33.414527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.853 [2024-07-25 11:05:33.747459] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:26.853 [2024-07-25 11:05:33.747496] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:26.853 11:05:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:26.853 11:05:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:23:26.853 11:05:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:23:26.853 11:05:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:27.111 BaseBdev1_malloc 00:23:27.111 11:05:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:23:27.369 true 00:23:27.369 11:05:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:23:27.627 [2024-07-25 11:05:34.640838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:23:27.627 [2024-07-25 11:05:34.640899] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:27.627 [2024-07-25 11:05:34.640926] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:23:27.627 [2024-07-25 11:05:34.640952] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:27.627 [2024-07-25 11:05:34.643742] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:27.627 [2024-07-25 11:05:34.643782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:27.627 BaseBdev1 00:23:27.627 11:05:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:23:27.627 11:05:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:27.885 BaseBdev2_malloc 00:23:27.885 11:05:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:23:28.144 true 00:23:28.144 11:05:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:23:28.402 [2024-07-25 11:05:35.378622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on EE_BaseBdev2_malloc 00:23:28.402 [2024-07-25 11:05:35.378684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:28.402 [2024-07-25 11:05:35.378711] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:23:28.402 [2024-07-25 11:05:35.378731] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:28.402 [2024-07-25 11:05:35.381505] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:28.402 [2024-07-25 11:05:35.381544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:28.402 BaseBdev2 00:23:28.402 11:05:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:23:28.402 11:05:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:28.660 BaseBdev3_malloc 00:23:28.660 11:05:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:23:28.918 true 00:23:28.918 11:05:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:23:29.177 [2024-07-25 11:05:36.111314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:23:29.177 [2024-07-25 11:05:36.111374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:29.177 [2024-07-25 11:05:36.111402] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:23:29.177 [2024-07-25 11:05:36.111420] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:29.177 
[2024-07-25 11:05:36.114220] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:29.177 [2024-07-25 11:05:36.114258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:29.177 BaseBdev3 00:23:29.177 11:05:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:23:29.177 11:05:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:29.435 BaseBdev4_malloc 00:23:29.435 11:05:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:23:29.693 true 00:23:29.693 11:05:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:23:29.951 [2024-07-25 11:05:36.828260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:23:29.951 [2024-07-25 11:05:36.828330] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:29.951 [2024-07-25 11:05:36.828358] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042680 00:23:29.951 [2024-07-25 11:05:36.828376] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:29.951 [2024-07-25 11:05:36.831184] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:29.951 [2024-07-25 11:05:36.831222] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:29.951 BaseBdev4 00:23:29.951 11:05:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:23:29.951 [2024-07-25 11:05:37.052905] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:29.951 [2024-07-25 11:05:37.055277] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:29.951 [2024-07-25 11:05:37.055373] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:29.951 [2024-07-25 11:05:37.055453] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:29.951 [2024-07-25 11:05:37.055721] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:23:29.952 [2024-07-25 11:05:37.055743] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:23:29.952 [2024-07-25 11:05:37.056113] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:23:29.952 [2024-07-25 11:05:37.056391] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:23:29.952 [2024-07-25 11:05:37.056408] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:23:29.952 [2024-07-25 11:05:37.056634] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:30.210 11:05:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:23:30.210 11:05:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:30.210 11:05:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:30.210 11:05:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:30.210 11:05:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:30.210 11:05:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:30.210 11:05:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:30.210 11:05:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:30.210 11:05:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:30.210 11:05:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:30.210 11:05:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:30.210 11:05:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:30.210 11:05:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:30.210 "name": "raid_bdev1", 00:23:30.210 "uuid": "36ba48b5-86c6-4e1f-9b74-f8c2291e9e2d", 00:23:30.210 "strip_size_kb": 64, 00:23:30.210 "state": "online", 00:23:30.210 "raid_level": "raid0", 00:23:30.210 "superblock": true, 00:23:30.210 "num_base_bdevs": 4, 00:23:30.210 "num_base_bdevs_discovered": 4, 00:23:30.210 "num_base_bdevs_operational": 4, 00:23:30.210 "base_bdevs_list": [ 00:23:30.210 { 00:23:30.210 "name": "BaseBdev1", 00:23:30.210 "uuid": "644687ac-51d5-5703-ab29-4e48ef69a6f6", 00:23:30.210 "is_configured": true, 00:23:30.210 "data_offset": 2048, 00:23:30.210 "data_size": 63488 00:23:30.210 }, 00:23:30.210 { 00:23:30.210 "name": "BaseBdev2", 00:23:30.210 "uuid": "5a8342e5-b9e6-5255-aa34-5fb1e729b5c2", 00:23:30.210 "is_configured": true, 00:23:30.210 "data_offset": 2048, 00:23:30.210 "data_size": 63488 00:23:30.210 }, 00:23:30.210 { 00:23:30.210 "name": "BaseBdev3", 00:23:30.210 "uuid": "42b9fae6-8e5d-589e-a122-fbbd180521e2", 00:23:30.210 "is_configured": true, 00:23:30.210 "data_offset": 2048, 00:23:30.210 "data_size": 
63488 00:23:30.210 }, 00:23:30.210 { 00:23:30.210 "name": "BaseBdev4", 00:23:30.210 "uuid": "45a24445-2538-5087-aa8d-7c4a9a68a988", 00:23:30.210 "is_configured": true, 00:23:30.210 "data_offset": 2048, 00:23:30.210 "data_size": 63488 00:23:30.210 } 00:23:30.210 ] 00:23:30.210 }' 00:23:30.210 11:05:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:30.210 11:05:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:30.778 11:05:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:23:30.778 11:05:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:31.036 [2024-07-25 11:05:37.973160] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:23:31.975 11:05:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:23:32.233 11:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:23:32.233 11:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid0 = \r\a\i\d\1 ]] 00:23:32.233 11:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:23:32.233 11:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:23:32.233 11:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:32.233 11:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:32.233 11:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:32.233 11:05:39 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:32.233 11:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:32.233 11:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:32.233 11:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:32.233 11:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:32.233 11:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:32.233 11:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.233 11:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:32.233 11:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:32.233 "name": "raid_bdev1", 00:23:32.233 "uuid": "36ba48b5-86c6-4e1f-9b74-f8c2291e9e2d", 00:23:32.233 "strip_size_kb": 64, 00:23:32.233 "state": "online", 00:23:32.234 "raid_level": "raid0", 00:23:32.234 "superblock": true, 00:23:32.234 "num_base_bdevs": 4, 00:23:32.234 "num_base_bdevs_discovered": 4, 00:23:32.234 "num_base_bdevs_operational": 4, 00:23:32.234 "base_bdevs_list": [ 00:23:32.234 { 00:23:32.234 "name": "BaseBdev1", 00:23:32.234 "uuid": "644687ac-51d5-5703-ab29-4e48ef69a6f6", 00:23:32.234 "is_configured": true, 00:23:32.234 "data_offset": 2048, 00:23:32.234 "data_size": 63488 00:23:32.234 }, 00:23:32.234 { 00:23:32.234 "name": "BaseBdev2", 00:23:32.234 "uuid": "5a8342e5-b9e6-5255-aa34-5fb1e729b5c2", 00:23:32.234 "is_configured": true, 00:23:32.234 "data_offset": 2048, 00:23:32.234 "data_size": 63488 00:23:32.234 }, 00:23:32.234 { 00:23:32.234 "name": "BaseBdev3", 00:23:32.234 "uuid": "42b9fae6-8e5d-589e-a122-fbbd180521e2", 00:23:32.234 "is_configured": 
true, 00:23:32.234 "data_offset": 2048, 00:23:32.234 "data_size": 63488 00:23:32.234 }, 00:23:32.234 { 00:23:32.234 "name": "BaseBdev4", 00:23:32.234 "uuid": "45a24445-2538-5087-aa8d-7c4a9a68a988", 00:23:32.234 "is_configured": true, 00:23:32.234 "data_offset": 2048, 00:23:32.234 "data_size": 63488 00:23:32.234 } 00:23:32.234 ] 00:23:32.234 }' 00:23:32.234 11:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:32.234 11:05:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:32.799 11:05:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:33.058 [2024-07-25 11:05:40.117202] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:33.058 [2024-07-25 11:05:40.117244] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:33.058 [2024-07-25 11:05:40.120563] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:33.058 [2024-07-25 11:05:40.120625] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:33.058 [2024-07-25 11:05:40.120682] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:33.058 [2024-07-25 11:05:40.120706] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:23:33.058 0 00:23:33.058 11:05:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 3652429 00:23:33.058 11:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 3652429 ']' 00:23:33.058 11:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 3652429 00:23:33.058 11:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:23:33.058 11:05:40 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:33.058 11:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3652429 00:23:33.317 11:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:33.317 11:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:33.317 11:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3652429' 00:23:33.317 killing process with pid 3652429 00:23:33.317 11:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 3652429 00:23:33.317 [2024-07-25 11:05:40.196495] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:33.317 11:05:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 3652429 00:23:33.575 [2024-07-25 11:05:40.553077] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:35.482 11:05:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.pjmE3A8W0l 00:23:35.482 11:05:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:23:35.482 11:05:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:23:35.482 11:05:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.47 00:23:35.482 11:05:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid0 00:23:35.482 11:05:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:35.482 11:05:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:23:35.482 11:05:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.47 != \0\.\0\0 ]] 00:23:35.482 00:23:35.482 real 0m9.802s 00:23:35.482 user 0m13.808s 00:23:35.482 sys 0m1.595s 00:23:35.482 11:05:42 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:35.482 11:05:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.482 ************************************ 00:23:35.482 END TEST raid_write_error_test 00:23:35.482 ************************************ 00:23:35.482 11:05:42 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:23:35.482 11:05:42 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:23:35.482 11:05:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:23:35.482 11:05:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:35.482 11:05:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:35.482 ************************************ 00:23:35.482 START TEST raid_state_function_test 00:23:35.482 ************************************ 00:23:35.482 11:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:23:35.482 11:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:23:35.482 11:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:23:35.482 11:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:23:35.482 11:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:23:35.482 11:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:23:35.482 11:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:35.482 11:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:23:35.482 11:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:35.482 11:05:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:35.482 11:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:23:35.482 11:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:35.482 11:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:35.482 11:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:23:35.482 11:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:35.482 11:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:35.482 11:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:23:35.482 11:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:35.482 11:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:35.482 11:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:35.482 11:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:23:35.482 11:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:23:35.482 11:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:23:35.482 11:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:23:35.482 11:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:23:35.482 11:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:23:35.482 11:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:23:35.482 11:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # 
strip_size_create_arg='-z 64' 00:23:35.482 11:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:23:35.482 11:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:23:35.482 11:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=3654112 00:23:35.482 11:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3654112' 00:23:35.482 Process raid pid: 3654112 00:23:35.482 11:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:35.482 11:05:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 3654112 /var/tmp/spdk-raid.sock 00:23:35.483 11:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 3654112 ']' 00:23:35.483 11:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:35.483 11:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:35.483 11:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:35.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:35.483 11:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:35.483 11:05:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.483 [2024-07-25 11:05:42.528729] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:23:35.483 [2024-07-25 11:05:42.528847] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:35.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:35.794 EAL: Requested device 0000:3d:01.0 cannot be used 00:23:35.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:35.794 EAL: Requested device 0000:3d:01.1 cannot be used 00:23:35.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:35.794 EAL: Requested device 0000:3d:01.2 cannot be used 00:23:35.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:35.794 EAL: Requested device 0000:3d:01.3 cannot be used 00:23:35.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:35.794 EAL: Requested device 0000:3d:01.4 cannot be used 00:23:35.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:35.794 EAL: Requested device 0000:3d:01.5 cannot be used 00:23:35.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:35.794 EAL: Requested device 0000:3d:01.6 cannot be used 00:23:35.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:35.794 EAL: Requested device 0000:3d:01.7 cannot be used 00:23:35.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:35.794 EAL: Requested device 0000:3d:02.0 cannot be used 00:23:35.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:35.794 EAL: Requested device 0000:3d:02.1 cannot be used 00:23:35.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:35.794 EAL: Requested device 0000:3d:02.2 cannot be used 00:23:35.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:35.794 EAL: Requested device 0000:3d:02.3 cannot be used 00:23:35.794 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:35.794 EAL: Requested device 0000:3d:02.4 cannot be used 00:23:35.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:35.794 EAL: Requested device 0000:3d:02.5 cannot be used 00:23:35.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:35.794 EAL: Requested device 0000:3d:02.6 cannot be used 00:23:35.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:35.794 EAL: Requested device 0000:3d:02.7 cannot be used 00:23:35.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:35.794 EAL: Requested device 0000:3f:01.0 cannot be used 00:23:35.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:35.794 EAL: Requested device 0000:3f:01.1 cannot be used 00:23:35.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:35.794 EAL: Requested device 0000:3f:01.2 cannot be used 00:23:35.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:35.794 EAL: Requested device 0000:3f:01.3 cannot be used 00:23:35.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:35.794 EAL: Requested device 0000:3f:01.4 cannot be used 00:23:35.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:35.794 EAL: Requested device 0000:3f:01.5 cannot be used 00:23:35.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:35.794 EAL: Requested device 0000:3f:01.6 cannot be used 00:23:35.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:35.794 EAL: Requested device 0000:3f:01.7 cannot be used 00:23:35.794 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:35.795 EAL: Requested device 0000:3f:02.0 cannot be used 00:23:35.795 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:35.795 EAL: Requested device 0000:3f:02.1 cannot be used 00:23:35.795 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:35.795 EAL: Requested device 0000:3f:02.2 cannot be used 00:23:35.795 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:35.795 EAL: Requested device 0000:3f:02.3 cannot be used 00:23:35.795 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:35.795 EAL: Requested device 0000:3f:02.4 cannot be used 00:23:35.795 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:35.795 EAL: Requested device 0000:3f:02.5 cannot be used 00:23:35.795 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:35.795 EAL: Requested device 0000:3f:02.6 cannot be used 00:23:35.795 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:23:35.795 EAL: Requested device 0000:3f:02.7 cannot be used 00:23:35.795 [2024-07-25 11:05:42.753111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.054 [2024-07-25 11:05:43.037503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.313 [2024-07-25 11:05:43.387043] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:36.313 [2024-07-25 11:05:43.387078] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:36.572 11:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:36.572 11:05:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:23:36.572 11:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:36.831 [2024-07-25 11:05:43.788352] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:36.831 [2024-07-25 11:05:43.788407] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 
doesn't exist now 00:23:36.831 [2024-07-25 11:05:43.788421] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:36.831 [2024-07-25 11:05:43.788437] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:36.831 [2024-07-25 11:05:43.788449] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:36.831 [2024-07-25 11:05:43.788465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:36.831 [2024-07-25 11:05:43.788476] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:36.831 [2024-07-25 11:05:43.788491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:36.831 11:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:36.831 11:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:36.831 11:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:36.831 11:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:36.831 11:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:36.831 11:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:36.831 11:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:36.831 11:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:36.831 11:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:36.831 11:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:36.831 11:05:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.831 11:05:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:37.090 11:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:37.090 "name": "Existed_Raid", 00:23:37.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:37.090 "strip_size_kb": 64, 00:23:37.090 "state": "configuring", 00:23:37.090 "raid_level": "concat", 00:23:37.090 "superblock": false, 00:23:37.090 "num_base_bdevs": 4, 00:23:37.090 "num_base_bdevs_discovered": 0, 00:23:37.090 "num_base_bdevs_operational": 4, 00:23:37.090 "base_bdevs_list": [ 00:23:37.090 { 00:23:37.090 "name": "BaseBdev1", 00:23:37.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:37.090 "is_configured": false, 00:23:37.090 "data_offset": 0, 00:23:37.090 "data_size": 0 00:23:37.090 }, 00:23:37.090 { 00:23:37.090 "name": "BaseBdev2", 00:23:37.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:37.090 "is_configured": false, 00:23:37.090 "data_offset": 0, 00:23:37.090 "data_size": 0 00:23:37.090 }, 00:23:37.090 { 00:23:37.090 "name": "BaseBdev3", 00:23:37.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:37.090 "is_configured": false, 00:23:37.090 "data_offset": 0, 00:23:37.090 "data_size": 0 00:23:37.090 }, 00:23:37.090 { 00:23:37.090 "name": "BaseBdev4", 00:23:37.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:37.090 "is_configured": false, 00:23:37.090 "data_offset": 0, 00:23:37.090 "data_size": 0 00:23:37.090 } 00:23:37.090 ] 00:23:37.090 }' 00:23:37.090 11:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:37.090 11:05:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:37.658 11:05:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:37.917 [2024-07-25 11:05:44.839017] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:37.917 [2024-07-25 11:05:44.839060] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:23:37.917 11:05:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:38.177 [2024-07-25 11:05:45.059675] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:38.177 [2024-07-25 11:05:45.059726] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:38.177 [2024-07-25 11:05:45.059741] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:38.177 [2024-07-25 11:05:45.059764] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:38.177 [2024-07-25 11:05:45.059776] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:38.177 [2024-07-25 11:05:45.059792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:38.177 [2024-07-25 11:05:45.059805] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:38.177 [2024-07-25 11:05:45.059820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:38.177 11:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:38.436 [2024-07-25 11:05:45.339195] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:38.436 BaseBdev1 00:23:38.436 11:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:23:38.436 11:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:23:38.436 11:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:38.436 11:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:23:38.436 11:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:38.436 11:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:38.436 11:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:38.696 11:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:38.696 [ 00:23:38.696 { 00:23:38.696 "name": "BaseBdev1", 00:23:38.696 "aliases": [ 00:23:38.696 "969acbf0-df7f-4d95-902f-c0a6e6880356" 00:23:38.696 ], 00:23:38.696 "product_name": "Malloc disk", 00:23:38.696 "block_size": 512, 00:23:38.696 "num_blocks": 65536, 00:23:38.696 "uuid": "969acbf0-df7f-4d95-902f-c0a6e6880356", 00:23:38.696 "assigned_rate_limits": { 00:23:38.696 "rw_ios_per_sec": 0, 00:23:38.696 "rw_mbytes_per_sec": 0, 00:23:38.696 "r_mbytes_per_sec": 0, 00:23:38.696 "w_mbytes_per_sec": 0 00:23:38.696 }, 00:23:38.696 "claimed": true, 00:23:38.696 "claim_type": "exclusive_write", 00:23:38.696 "zoned": false, 00:23:38.696 "supported_io_types": { 00:23:38.696 "read": true, 00:23:38.696 "write": true, 00:23:38.696 "unmap": true, 00:23:38.696 "flush": true, 00:23:38.696 
"reset": true, 00:23:38.696 "nvme_admin": false, 00:23:38.696 "nvme_io": false, 00:23:38.696 "nvme_io_md": false, 00:23:38.696 "write_zeroes": true, 00:23:38.696 "zcopy": true, 00:23:38.696 "get_zone_info": false, 00:23:38.696 "zone_management": false, 00:23:38.696 "zone_append": false, 00:23:38.696 "compare": false, 00:23:38.696 "compare_and_write": false, 00:23:38.696 "abort": true, 00:23:38.696 "seek_hole": false, 00:23:38.696 "seek_data": false, 00:23:38.696 "copy": true, 00:23:38.696 "nvme_iov_md": false 00:23:38.696 }, 00:23:38.696 "memory_domains": [ 00:23:38.696 { 00:23:38.696 "dma_device_id": "system", 00:23:38.696 "dma_device_type": 1 00:23:38.696 }, 00:23:38.696 { 00:23:38.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:38.696 "dma_device_type": 2 00:23:38.696 } 00:23:38.696 ], 00:23:38.696 "driver_specific": {} 00:23:38.696 } 00:23:38.696 ] 00:23:38.955 11:05:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:23:38.955 11:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:38.955 11:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:38.955 11:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:38.955 11:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:38.955 11:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:38.955 11:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:38.955 11:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:38.955 11:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:38.955 11:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 
-- # local num_base_bdevs_discovered 00:23:38.955 11:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:38.955 11:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.955 11:05:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:38.955 11:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:38.955 "name": "Existed_Raid", 00:23:38.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:38.955 "strip_size_kb": 64, 00:23:38.955 "state": "configuring", 00:23:38.955 "raid_level": "concat", 00:23:38.955 "superblock": false, 00:23:38.955 "num_base_bdevs": 4, 00:23:38.955 "num_base_bdevs_discovered": 1, 00:23:38.955 "num_base_bdevs_operational": 4, 00:23:38.956 "base_bdevs_list": [ 00:23:38.956 { 00:23:38.956 "name": "BaseBdev1", 00:23:38.956 "uuid": "969acbf0-df7f-4d95-902f-c0a6e6880356", 00:23:38.956 "is_configured": true, 00:23:38.956 "data_offset": 0, 00:23:38.956 "data_size": 65536 00:23:38.956 }, 00:23:38.956 { 00:23:38.956 "name": "BaseBdev2", 00:23:38.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:38.956 "is_configured": false, 00:23:38.956 "data_offset": 0, 00:23:38.956 "data_size": 0 00:23:38.956 }, 00:23:38.956 { 00:23:38.956 "name": "BaseBdev3", 00:23:38.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:38.956 "is_configured": false, 00:23:38.956 "data_offset": 0, 00:23:38.956 "data_size": 0 00:23:38.956 }, 00:23:38.956 { 00:23:38.956 "name": "BaseBdev4", 00:23:38.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:38.956 "is_configured": false, 00:23:38.956 "data_offset": 0, 00:23:38.956 "data_size": 0 00:23:38.956 } 00:23:38.956 ] 00:23:38.956 }' 00:23:38.956 11:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:23:38.956 11:05:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.521 11:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:39.778 [2024-07-25 11:05:46.775094] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:39.778 [2024-07-25 11:05:46.775157] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:23:39.778 11:05:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:40.037 [2024-07-25 11:05:46.999782] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:40.037 [2024-07-25 11:05:47.002078] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:40.037 [2024-07-25 11:05:47.002121] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:40.037 [2024-07-25 11:05:47.002135] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:40.037 [2024-07-25 11:05:47.002159] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:40.037 [2024-07-25 11:05:47.002172] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:40.037 [2024-07-25 11:05:47.002193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:40.037 11:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:23:40.037 11:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:40.037 11:05:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:40.037 11:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:40.037 11:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:40.037 11:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:40.037 11:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:40.037 11:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:40.037 11:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:40.037 11:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:40.037 11:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:40.037 11:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:40.037 11:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:40.037 11:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:40.296 11:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:40.296 "name": "Existed_Raid", 00:23:40.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:40.296 "strip_size_kb": 64, 00:23:40.296 "state": "configuring", 00:23:40.296 "raid_level": "concat", 00:23:40.296 "superblock": false, 00:23:40.296 "num_base_bdevs": 4, 00:23:40.296 "num_base_bdevs_discovered": 1, 00:23:40.296 "num_base_bdevs_operational": 4, 00:23:40.296 "base_bdevs_list": [ 00:23:40.296 { 
00:23:40.296 "name": "BaseBdev1", 00:23:40.296 "uuid": "969acbf0-df7f-4d95-902f-c0a6e6880356", 00:23:40.296 "is_configured": true, 00:23:40.296 "data_offset": 0, 00:23:40.296 "data_size": 65536 00:23:40.296 }, 00:23:40.296 { 00:23:40.296 "name": "BaseBdev2", 00:23:40.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:40.296 "is_configured": false, 00:23:40.296 "data_offset": 0, 00:23:40.296 "data_size": 0 00:23:40.296 }, 00:23:40.296 { 00:23:40.296 "name": "BaseBdev3", 00:23:40.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:40.296 "is_configured": false, 00:23:40.296 "data_offset": 0, 00:23:40.296 "data_size": 0 00:23:40.296 }, 00:23:40.296 { 00:23:40.296 "name": "BaseBdev4", 00:23:40.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:40.296 "is_configured": false, 00:23:40.296 "data_offset": 0, 00:23:40.296 "data_size": 0 00:23:40.296 } 00:23:40.296 ] 00:23:40.296 }' 00:23:40.296 11:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:40.296 11:05:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:40.863 11:05:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:41.122 [2024-07-25 11:05:48.031374] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:41.122 BaseBdev2 00:23:41.122 11:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:23:41.122 11:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:23:41.122 11:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:41.122 11:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:23:41.122 11:05:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:41.122 11:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:41.122 11:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:41.381 11:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:41.381 [ 00:23:41.381 { 00:23:41.381 "name": "BaseBdev2", 00:23:41.381 "aliases": [ 00:23:41.381 "13e814c3-efef-4d1b-9c55-a0ee3fe9ebc6" 00:23:41.381 ], 00:23:41.381 "product_name": "Malloc disk", 00:23:41.381 "block_size": 512, 00:23:41.381 "num_blocks": 65536, 00:23:41.381 "uuid": "13e814c3-efef-4d1b-9c55-a0ee3fe9ebc6", 00:23:41.381 "assigned_rate_limits": { 00:23:41.381 "rw_ios_per_sec": 0, 00:23:41.381 "rw_mbytes_per_sec": 0, 00:23:41.381 "r_mbytes_per_sec": 0, 00:23:41.381 "w_mbytes_per_sec": 0 00:23:41.381 }, 00:23:41.381 "claimed": true, 00:23:41.381 "claim_type": "exclusive_write", 00:23:41.381 "zoned": false, 00:23:41.381 "supported_io_types": { 00:23:41.381 "read": true, 00:23:41.381 "write": true, 00:23:41.381 "unmap": true, 00:23:41.381 "flush": true, 00:23:41.381 "reset": true, 00:23:41.381 "nvme_admin": false, 00:23:41.381 "nvme_io": false, 00:23:41.381 "nvme_io_md": false, 00:23:41.381 "write_zeroes": true, 00:23:41.381 "zcopy": true, 00:23:41.381 "get_zone_info": false, 00:23:41.381 "zone_management": false, 00:23:41.381 "zone_append": false, 00:23:41.381 "compare": false, 00:23:41.381 "compare_and_write": false, 00:23:41.381 "abort": true, 00:23:41.381 "seek_hole": false, 00:23:41.381 "seek_data": false, 00:23:41.381 "copy": true, 00:23:41.381 "nvme_iov_md": false 00:23:41.381 }, 00:23:41.381 "memory_domains": [ 00:23:41.381 { 00:23:41.381 "dma_device_id": "system", 
00:23:41.381 "dma_device_type": 1 00:23:41.381 }, 00:23:41.381 { 00:23:41.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:41.381 "dma_device_type": 2 00:23:41.381 } 00:23:41.381 ], 00:23:41.381 "driver_specific": {} 00:23:41.381 } 00:23:41.381 ] 00:23:41.381 11:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:23:41.640 11:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:41.640 11:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:41.640 11:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:41.640 11:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:41.640 11:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:41.640 11:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:41.640 11:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:41.640 11:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:41.640 11:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:41.640 11:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:41.640 11:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:41.640 11:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:41.641 11:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.641 11:05:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:41.641 11:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:41.641 "name": "Existed_Raid", 00:23:41.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:41.641 "strip_size_kb": 64, 00:23:41.641 "state": "configuring", 00:23:41.641 "raid_level": "concat", 00:23:41.641 "superblock": false, 00:23:41.641 "num_base_bdevs": 4, 00:23:41.641 "num_base_bdevs_discovered": 2, 00:23:41.641 "num_base_bdevs_operational": 4, 00:23:41.641 "base_bdevs_list": [ 00:23:41.641 { 00:23:41.641 "name": "BaseBdev1", 00:23:41.641 "uuid": "969acbf0-df7f-4d95-902f-c0a6e6880356", 00:23:41.641 "is_configured": true, 00:23:41.641 "data_offset": 0, 00:23:41.641 "data_size": 65536 00:23:41.641 }, 00:23:41.641 { 00:23:41.641 "name": "BaseBdev2", 00:23:41.641 "uuid": "13e814c3-efef-4d1b-9c55-a0ee3fe9ebc6", 00:23:41.641 "is_configured": true, 00:23:41.641 "data_offset": 0, 00:23:41.641 "data_size": 65536 00:23:41.641 }, 00:23:41.641 { 00:23:41.641 "name": "BaseBdev3", 00:23:41.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:41.641 "is_configured": false, 00:23:41.641 "data_offset": 0, 00:23:41.641 "data_size": 0 00:23:41.641 }, 00:23:41.641 { 00:23:41.641 "name": "BaseBdev4", 00:23:41.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:41.641 "is_configured": false, 00:23:41.641 "data_offset": 0, 00:23:41.641 "data_size": 0 00:23:41.641 } 00:23:41.641 ] 00:23:41.641 }' 00:23:41.641 11:05:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:41.641 11:05:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:42.208 11:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:42.466 [2024-07-25 11:05:49.552342] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:42.466 BaseBdev3 00:23:42.467 11:05:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:23:42.467 11:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:23:42.467 11:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:42.467 11:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:23:42.467 11:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:42.467 11:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:42.467 11:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:42.725 11:05:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:42.984 [ 00:23:42.984 { 00:23:42.984 "name": "BaseBdev3", 00:23:42.984 "aliases": [ 00:23:42.984 "36bf1e04-0298-45bd-a5c4-84431e18d29e" 00:23:42.984 ], 00:23:42.984 "product_name": "Malloc disk", 00:23:42.984 "block_size": 512, 00:23:42.984 "num_blocks": 65536, 00:23:42.984 "uuid": "36bf1e04-0298-45bd-a5c4-84431e18d29e", 00:23:42.984 "assigned_rate_limits": { 00:23:42.984 "rw_ios_per_sec": 0, 00:23:42.984 "rw_mbytes_per_sec": 0, 00:23:42.984 "r_mbytes_per_sec": 0, 00:23:42.984 "w_mbytes_per_sec": 0 00:23:42.984 }, 00:23:42.984 "claimed": true, 00:23:42.984 "claim_type": "exclusive_write", 00:23:42.984 "zoned": false, 00:23:42.984 "supported_io_types": { 00:23:42.984 "read": true, 00:23:42.984 "write": true, 00:23:42.984 "unmap": true, 00:23:42.984 "flush": true, 00:23:42.984 
"reset": true, 00:23:42.984 "nvme_admin": false, 00:23:42.984 "nvme_io": false, 00:23:42.984 "nvme_io_md": false, 00:23:42.984 "write_zeroes": true, 00:23:42.984 "zcopy": true, 00:23:42.984 "get_zone_info": false, 00:23:42.984 "zone_management": false, 00:23:42.984 "zone_append": false, 00:23:42.984 "compare": false, 00:23:42.984 "compare_and_write": false, 00:23:42.984 "abort": true, 00:23:42.984 "seek_hole": false, 00:23:42.984 "seek_data": false, 00:23:42.984 "copy": true, 00:23:42.984 "nvme_iov_md": false 00:23:42.984 }, 00:23:42.984 "memory_domains": [ 00:23:42.984 { 00:23:42.984 "dma_device_id": "system", 00:23:42.984 "dma_device_type": 1 00:23:42.984 }, 00:23:42.984 { 00:23:42.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:42.984 "dma_device_type": 2 00:23:42.984 } 00:23:42.984 ], 00:23:42.984 "driver_specific": {} 00:23:42.984 } 00:23:42.984 ] 00:23:42.984 11:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:23:42.984 11:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:42.984 11:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:42.984 11:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:42.984 11:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:42.984 11:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:42.984 11:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:42.984 11:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:42.984 11:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:42.984 11:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:23:42.984 11:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:42.984 11:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:42.984 11:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:42.984 11:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.985 11:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:43.243 11:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:43.243 "name": "Existed_Raid", 00:23:43.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:43.243 "strip_size_kb": 64, 00:23:43.243 "state": "configuring", 00:23:43.243 "raid_level": "concat", 00:23:43.243 "superblock": false, 00:23:43.243 "num_base_bdevs": 4, 00:23:43.243 "num_base_bdevs_discovered": 3, 00:23:43.243 "num_base_bdevs_operational": 4, 00:23:43.243 "base_bdevs_list": [ 00:23:43.243 { 00:23:43.243 "name": "BaseBdev1", 00:23:43.243 "uuid": "969acbf0-df7f-4d95-902f-c0a6e6880356", 00:23:43.243 "is_configured": true, 00:23:43.243 "data_offset": 0, 00:23:43.243 "data_size": 65536 00:23:43.243 }, 00:23:43.243 { 00:23:43.243 "name": "BaseBdev2", 00:23:43.243 "uuid": "13e814c3-efef-4d1b-9c55-a0ee3fe9ebc6", 00:23:43.243 "is_configured": true, 00:23:43.243 "data_offset": 0, 00:23:43.243 "data_size": 65536 00:23:43.243 }, 00:23:43.243 { 00:23:43.243 "name": "BaseBdev3", 00:23:43.243 "uuid": "36bf1e04-0298-45bd-a5c4-84431e18d29e", 00:23:43.243 "is_configured": true, 00:23:43.243 "data_offset": 0, 00:23:43.243 "data_size": 65536 00:23:43.243 }, 00:23:43.243 { 00:23:43.243 "name": "BaseBdev4", 00:23:43.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:43.243 "is_configured": 
false, 00:23:43.243 "data_offset": 0, 00:23:43.243 "data_size": 0 00:23:43.243 } 00:23:43.243 ] 00:23:43.243 }' 00:23:43.243 11:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:43.243 11:05:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:43.810 11:05:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:44.068 [2024-07-25 11:05:51.041169] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:44.068 [2024-07-25 11:05:51.041217] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:23:44.068 [2024-07-25 11:05:51.041229] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:23:44.068 [2024-07-25 11:05:51.041558] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:23:44.068 [2024-07-25 11:05:51.041790] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:23:44.068 [2024-07-25 11:05:51.041808] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:23:44.068 [2024-07-25 11:05:51.042122] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:44.068 BaseBdev4 00:23:44.068 11:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:23:44.068 11:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:23:44.068 11:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:44.068 11:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:23:44.068 11:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- 
# [[ -z '' ]] 00:23:44.068 11:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:44.068 11:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:44.326 11:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:44.584 [ 00:23:44.584 { 00:23:44.584 "name": "BaseBdev4", 00:23:44.584 "aliases": [ 00:23:44.584 "0bf8ea1b-93ab-4f27-b753-78e75d05d363" 00:23:44.584 ], 00:23:44.584 "product_name": "Malloc disk", 00:23:44.584 "block_size": 512, 00:23:44.584 "num_blocks": 65536, 00:23:44.584 "uuid": "0bf8ea1b-93ab-4f27-b753-78e75d05d363", 00:23:44.584 "assigned_rate_limits": { 00:23:44.584 "rw_ios_per_sec": 0, 00:23:44.584 "rw_mbytes_per_sec": 0, 00:23:44.584 "r_mbytes_per_sec": 0, 00:23:44.584 "w_mbytes_per_sec": 0 00:23:44.584 }, 00:23:44.585 "claimed": true, 00:23:44.585 "claim_type": "exclusive_write", 00:23:44.585 "zoned": false, 00:23:44.585 "supported_io_types": { 00:23:44.585 "read": true, 00:23:44.585 "write": true, 00:23:44.585 "unmap": true, 00:23:44.585 "flush": true, 00:23:44.585 "reset": true, 00:23:44.585 "nvme_admin": false, 00:23:44.585 "nvme_io": false, 00:23:44.585 "nvme_io_md": false, 00:23:44.585 "write_zeroes": true, 00:23:44.585 "zcopy": true, 00:23:44.585 "get_zone_info": false, 00:23:44.585 "zone_management": false, 00:23:44.585 "zone_append": false, 00:23:44.585 "compare": false, 00:23:44.585 "compare_and_write": false, 00:23:44.585 "abort": true, 00:23:44.585 "seek_hole": false, 00:23:44.585 "seek_data": false, 00:23:44.585 "copy": true, 00:23:44.585 "nvme_iov_md": false 00:23:44.585 }, 00:23:44.585 "memory_domains": [ 00:23:44.585 { 00:23:44.585 "dma_device_id": "system", 00:23:44.585 "dma_device_type": 1 
00:23:44.585 }, 00:23:44.585 { 00:23:44.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:44.585 "dma_device_type": 2 00:23:44.585 } 00:23:44.585 ], 00:23:44.585 "driver_specific": {} 00:23:44.585 } 00:23:44.585 ] 00:23:44.585 11:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:23:44.585 11:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:44.585 11:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:44.585 11:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:23:44.585 11:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:44.585 11:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:44.585 11:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:44.585 11:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:44.585 11:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:44.585 11:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:44.585 11:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:44.585 11:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:44.585 11:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:44.585 11:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:44.585 11:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:23:44.843 11:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:44.843 "name": "Existed_Raid", 00:23:44.843 "uuid": "c10b7a21-dd25-4a15-9f26-d332b9edd9a1", 00:23:44.843 "strip_size_kb": 64, 00:23:44.843 "state": "online", 00:23:44.843 "raid_level": "concat", 00:23:44.843 "superblock": false, 00:23:44.843 "num_base_bdevs": 4, 00:23:44.843 "num_base_bdevs_discovered": 4, 00:23:44.843 "num_base_bdevs_operational": 4, 00:23:44.843 "base_bdevs_list": [ 00:23:44.843 { 00:23:44.843 "name": "BaseBdev1", 00:23:44.843 "uuid": "969acbf0-df7f-4d95-902f-c0a6e6880356", 00:23:44.843 "is_configured": true, 00:23:44.843 "data_offset": 0, 00:23:44.843 "data_size": 65536 00:23:44.843 }, 00:23:44.843 { 00:23:44.843 "name": "BaseBdev2", 00:23:44.844 "uuid": "13e814c3-efef-4d1b-9c55-a0ee3fe9ebc6", 00:23:44.844 "is_configured": true, 00:23:44.844 "data_offset": 0, 00:23:44.844 "data_size": 65536 00:23:44.844 }, 00:23:44.844 { 00:23:44.844 "name": "BaseBdev3", 00:23:44.844 "uuid": "36bf1e04-0298-45bd-a5c4-84431e18d29e", 00:23:44.844 "is_configured": true, 00:23:44.844 "data_offset": 0, 00:23:44.844 "data_size": 65536 00:23:44.844 }, 00:23:44.844 { 00:23:44.844 "name": "BaseBdev4", 00:23:44.844 "uuid": "0bf8ea1b-93ab-4f27-b753-78e75d05d363", 00:23:44.844 "is_configured": true, 00:23:44.844 "data_offset": 0, 00:23:44.844 "data_size": 65536 00:23:44.844 } 00:23:44.844 ] 00:23:44.844 }' 00:23:44.844 11:05:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:44.844 11:05:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:45.411 11:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:23:45.411 11:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:45.411 11:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local 
raid_bdev_info 00:23:45.411 11:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:45.411 11:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:45.411 11:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:45.411 11:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:45.411 11:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:45.411 [2024-07-25 11:05:52.469509] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:45.411 11:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:45.411 "name": "Existed_Raid", 00:23:45.411 "aliases": [ 00:23:45.411 "c10b7a21-dd25-4a15-9f26-d332b9edd9a1" 00:23:45.411 ], 00:23:45.411 "product_name": "Raid Volume", 00:23:45.411 "block_size": 512, 00:23:45.411 "num_blocks": 262144, 00:23:45.411 "uuid": "c10b7a21-dd25-4a15-9f26-d332b9edd9a1", 00:23:45.411 "assigned_rate_limits": { 00:23:45.411 "rw_ios_per_sec": 0, 00:23:45.411 "rw_mbytes_per_sec": 0, 00:23:45.411 "r_mbytes_per_sec": 0, 00:23:45.411 "w_mbytes_per_sec": 0 00:23:45.411 }, 00:23:45.411 "claimed": false, 00:23:45.411 "zoned": false, 00:23:45.411 "supported_io_types": { 00:23:45.411 "read": true, 00:23:45.411 "write": true, 00:23:45.411 "unmap": true, 00:23:45.411 "flush": true, 00:23:45.411 "reset": true, 00:23:45.411 "nvme_admin": false, 00:23:45.411 "nvme_io": false, 00:23:45.411 "nvme_io_md": false, 00:23:45.411 "write_zeroes": true, 00:23:45.411 "zcopy": false, 00:23:45.411 "get_zone_info": false, 00:23:45.411 "zone_management": false, 00:23:45.411 "zone_append": false, 00:23:45.411 "compare": false, 00:23:45.411 "compare_and_write": false, 00:23:45.411 "abort": false, 00:23:45.411 "seek_hole": 
false, 00:23:45.411 "seek_data": false, 00:23:45.411 "copy": false, 00:23:45.411 "nvme_iov_md": false 00:23:45.411 }, 00:23:45.411 "memory_domains": [ 00:23:45.411 { 00:23:45.411 "dma_device_id": "system", 00:23:45.411 "dma_device_type": 1 00:23:45.412 }, 00:23:45.412 { 00:23:45.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:45.412 "dma_device_type": 2 00:23:45.412 }, 00:23:45.412 { 00:23:45.412 "dma_device_id": "system", 00:23:45.412 "dma_device_type": 1 00:23:45.412 }, 00:23:45.412 { 00:23:45.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:45.412 "dma_device_type": 2 00:23:45.412 }, 00:23:45.412 { 00:23:45.412 "dma_device_id": "system", 00:23:45.412 "dma_device_type": 1 00:23:45.412 }, 00:23:45.412 { 00:23:45.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:45.412 "dma_device_type": 2 00:23:45.412 }, 00:23:45.412 { 00:23:45.412 "dma_device_id": "system", 00:23:45.412 "dma_device_type": 1 00:23:45.412 }, 00:23:45.412 { 00:23:45.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:45.412 "dma_device_type": 2 00:23:45.412 } 00:23:45.412 ], 00:23:45.412 "driver_specific": { 00:23:45.412 "raid": { 00:23:45.412 "uuid": "c10b7a21-dd25-4a15-9f26-d332b9edd9a1", 00:23:45.412 "strip_size_kb": 64, 00:23:45.412 "state": "online", 00:23:45.412 "raid_level": "concat", 00:23:45.412 "superblock": false, 00:23:45.412 "num_base_bdevs": 4, 00:23:45.412 "num_base_bdevs_discovered": 4, 00:23:45.412 "num_base_bdevs_operational": 4, 00:23:45.412 "base_bdevs_list": [ 00:23:45.412 { 00:23:45.412 "name": "BaseBdev1", 00:23:45.412 "uuid": "969acbf0-df7f-4d95-902f-c0a6e6880356", 00:23:45.412 "is_configured": true, 00:23:45.412 "data_offset": 0, 00:23:45.412 "data_size": 65536 00:23:45.412 }, 00:23:45.412 { 00:23:45.412 "name": "BaseBdev2", 00:23:45.412 "uuid": "13e814c3-efef-4d1b-9c55-a0ee3fe9ebc6", 00:23:45.412 "is_configured": true, 00:23:45.412 "data_offset": 0, 00:23:45.412 "data_size": 65536 00:23:45.412 }, 00:23:45.412 { 00:23:45.412 "name": "BaseBdev3", 00:23:45.412 "uuid": 
"36bf1e04-0298-45bd-a5c4-84431e18d29e", 00:23:45.412 "is_configured": true, 00:23:45.412 "data_offset": 0, 00:23:45.412 "data_size": 65536 00:23:45.412 }, 00:23:45.412 { 00:23:45.412 "name": "BaseBdev4", 00:23:45.412 "uuid": "0bf8ea1b-93ab-4f27-b753-78e75d05d363", 00:23:45.412 "is_configured": true, 00:23:45.412 "data_offset": 0, 00:23:45.412 "data_size": 65536 00:23:45.412 } 00:23:45.412 ] 00:23:45.412 } 00:23:45.412 } 00:23:45.412 }' 00:23:45.412 11:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:45.671 11:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:23:45.671 BaseBdev2 00:23:45.671 BaseBdev3 00:23:45.671 BaseBdev4' 00:23:45.671 11:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:45.671 11:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:23:45.671 11:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:45.671 11:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:45.671 "name": "BaseBdev1", 00:23:45.671 "aliases": [ 00:23:45.671 "969acbf0-df7f-4d95-902f-c0a6e6880356" 00:23:45.671 ], 00:23:45.671 "product_name": "Malloc disk", 00:23:45.671 "block_size": 512, 00:23:45.671 "num_blocks": 65536, 00:23:45.671 "uuid": "969acbf0-df7f-4d95-902f-c0a6e6880356", 00:23:45.671 "assigned_rate_limits": { 00:23:45.671 "rw_ios_per_sec": 0, 00:23:45.671 "rw_mbytes_per_sec": 0, 00:23:45.671 "r_mbytes_per_sec": 0, 00:23:45.671 "w_mbytes_per_sec": 0 00:23:45.671 }, 00:23:45.671 "claimed": true, 00:23:45.671 "claim_type": "exclusive_write", 00:23:45.671 "zoned": false, 00:23:45.671 "supported_io_types": { 00:23:45.671 "read": true, 00:23:45.671 
"write": true, 00:23:45.671 "unmap": true, 00:23:45.671 "flush": true, 00:23:45.671 "reset": true, 00:23:45.671 "nvme_admin": false, 00:23:45.671 "nvme_io": false, 00:23:45.671 "nvme_io_md": false, 00:23:45.671 "write_zeroes": true, 00:23:45.671 "zcopy": true, 00:23:45.671 "get_zone_info": false, 00:23:45.671 "zone_management": false, 00:23:45.671 "zone_append": false, 00:23:45.671 "compare": false, 00:23:45.671 "compare_and_write": false, 00:23:45.671 "abort": true, 00:23:45.671 "seek_hole": false, 00:23:45.671 "seek_data": false, 00:23:45.671 "copy": true, 00:23:45.671 "nvme_iov_md": false 00:23:45.671 }, 00:23:45.671 "memory_domains": [ 00:23:45.671 { 00:23:45.671 "dma_device_id": "system", 00:23:45.671 "dma_device_type": 1 00:23:45.671 }, 00:23:45.671 { 00:23:45.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:45.671 "dma_device_type": 2 00:23:45.671 } 00:23:45.671 ], 00:23:45.671 "driver_specific": {} 00:23:45.671 }' 00:23:45.671 11:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:45.671 11:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:45.671 11:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:45.671 11:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:45.671 11:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:45.930 11:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:45.930 11:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:45.930 11:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:45.930 11:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:45.930 11:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:45.930 11:05:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:45.930 11:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:45.930 11:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:45.930 11:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:45.930 11:05:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:46.190 11:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:46.190 "name": "BaseBdev2", 00:23:46.190 "aliases": [ 00:23:46.190 "13e814c3-efef-4d1b-9c55-a0ee3fe9ebc6" 00:23:46.190 ], 00:23:46.190 "product_name": "Malloc disk", 00:23:46.190 "block_size": 512, 00:23:46.190 "num_blocks": 65536, 00:23:46.190 "uuid": "13e814c3-efef-4d1b-9c55-a0ee3fe9ebc6", 00:23:46.190 "assigned_rate_limits": { 00:23:46.190 "rw_ios_per_sec": 0, 00:23:46.190 "rw_mbytes_per_sec": 0, 00:23:46.190 "r_mbytes_per_sec": 0, 00:23:46.190 "w_mbytes_per_sec": 0 00:23:46.190 }, 00:23:46.190 "claimed": true, 00:23:46.190 "claim_type": "exclusive_write", 00:23:46.190 "zoned": false, 00:23:46.190 "supported_io_types": { 00:23:46.190 "read": true, 00:23:46.190 "write": true, 00:23:46.190 "unmap": true, 00:23:46.190 "flush": true, 00:23:46.190 "reset": true, 00:23:46.190 "nvme_admin": false, 00:23:46.190 "nvme_io": false, 00:23:46.190 "nvme_io_md": false, 00:23:46.190 "write_zeroes": true, 00:23:46.190 "zcopy": true, 00:23:46.190 "get_zone_info": false, 00:23:46.190 "zone_management": false, 00:23:46.190 "zone_append": false, 00:23:46.190 "compare": false, 00:23:46.190 "compare_and_write": false, 00:23:46.190 "abort": true, 00:23:46.190 "seek_hole": false, 00:23:46.190 "seek_data": false, 00:23:46.190 "copy": true, 00:23:46.190 "nvme_iov_md": false 00:23:46.190 }, 
00:23:46.190 "memory_domains": [ 00:23:46.190 { 00:23:46.190 "dma_device_id": "system", 00:23:46.190 "dma_device_type": 1 00:23:46.190 }, 00:23:46.190 { 00:23:46.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:46.190 "dma_device_type": 2 00:23:46.190 } 00:23:46.190 ], 00:23:46.190 "driver_specific": {} 00:23:46.190 }' 00:23:46.190 11:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:46.190 11:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:46.190 11:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:46.190 11:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:46.447 11:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:46.447 11:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:46.447 11:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:46.447 11:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:46.447 11:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:46.447 11:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:46.447 11:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:46.447 11:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:46.447 11:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:46.447 11:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:46.447 11:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:46.704 11:05:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:46.704 "name": "BaseBdev3", 00:23:46.704 "aliases": [ 00:23:46.704 "36bf1e04-0298-45bd-a5c4-84431e18d29e" 00:23:46.704 ], 00:23:46.704 "product_name": "Malloc disk", 00:23:46.704 "block_size": 512, 00:23:46.704 "num_blocks": 65536, 00:23:46.704 "uuid": "36bf1e04-0298-45bd-a5c4-84431e18d29e", 00:23:46.704 "assigned_rate_limits": { 00:23:46.704 "rw_ios_per_sec": 0, 00:23:46.704 "rw_mbytes_per_sec": 0, 00:23:46.704 "r_mbytes_per_sec": 0, 00:23:46.704 "w_mbytes_per_sec": 0 00:23:46.704 }, 00:23:46.704 "claimed": true, 00:23:46.704 "claim_type": "exclusive_write", 00:23:46.704 "zoned": false, 00:23:46.704 "supported_io_types": { 00:23:46.704 "read": true, 00:23:46.704 "write": true, 00:23:46.704 "unmap": true, 00:23:46.704 "flush": true, 00:23:46.704 "reset": true, 00:23:46.704 "nvme_admin": false, 00:23:46.704 "nvme_io": false, 00:23:46.704 "nvme_io_md": false, 00:23:46.704 "write_zeroes": true, 00:23:46.704 "zcopy": true, 00:23:46.704 "get_zone_info": false, 00:23:46.704 "zone_management": false, 00:23:46.704 "zone_append": false, 00:23:46.704 "compare": false, 00:23:46.704 "compare_and_write": false, 00:23:46.704 "abort": true, 00:23:46.704 "seek_hole": false, 00:23:46.704 "seek_data": false, 00:23:46.704 "copy": true, 00:23:46.704 "nvme_iov_md": false 00:23:46.704 }, 00:23:46.704 "memory_domains": [ 00:23:46.704 { 00:23:46.704 "dma_device_id": "system", 00:23:46.704 "dma_device_type": 1 00:23:46.704 }, 00:23:46.704 { 00:23:46.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:46.704 "dma_device_type": 2 00:23:46.704 } 00:23:46.704 ], 00:23:46.704 "driver_specific": {} 00:23:46.704 }' 00:23:46.704 11:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:46.704 11:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:46.705 11:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 
== 512 ]] 00:23:46.705 11:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:46.705 11:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:46.705 11:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:46.705 11:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:46.963 11:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:46.963 11:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:46.963 11:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:46.963 11:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:46.963 11:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:46.963 11:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:46.963 11:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:23:46.963 11:05:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:47.223 11:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:47.223 "name": "BaseBdev4", 00:23:47.223 "aliases": [ 00:23:47.223 "0bf8ea1b-93ab-4f27-b753-78e75d05d363" 00:23:47.223 ], 00:23:47.223 "product_name": "Malloc disk", 00:23:47.223 "block_size": 512, 00:23:47.223 "num_blocks": 65536, 00:23:47.223 "uuid": "0bf8ea1b-93ab-4f27-b753-78e75d05d363", 00:23:47.223 "assigned_rate_limits": { 00:23:47.223 "rw_ios_per_sec": 0, 00:23:47.223 "rw_mbytes_per_sec": 0, 00:23:47.223 "r_mbytes_per_sec": 0, 00:23:47.223 "w_mbytes_per_sec": 0 00:23:47.223 }, 00:23:47.223 "claimed": true, 00:23:47.223 
"claim_type": "exclusive_write", 00:23:47.223 "zoned": false, 00:23:47.223 "supported_io_types": { 00:23:47.223 "read": true, 00:23:47.223 "write": true, 00:23:47.223 "unmap": true, 00:23:47.223 "flush": true, 00:23:47.223 "reset": true, 00:23:47.223 "nvme_admin": false, 00:23:47.223 "nvme_io": false, 00:23:47.223 "nvme_io_md": false, 00:23:47.223 "write_zeroes": true, 00:23:47.223 "zcopy": true, 00:23:47.223 "get_zone_info": false, 00:23:47.223 "zone_management": false, 00:23:47.223 "zone_append": false, 00:23:47.223 "compare": false, 00:23:47.223 "compare_and_write": false, 00:23:47.223 "abort": true, 00:23:47.223 "seek_hole": false, 00:23:47.223 "seek_data": false, 00:23:47.223 "copy": true, 00:23:47.223 "nvme_iov_md": false 00:23:47.223 }, 00:23:47.223 "memory_domains": [ 00:23:47.223 { 00:23:47.223 "dma_device_id": "system", 00:23:47.223 "dma_device_type": 1 00:23:47.223 }, 00:23:47.223 { 00:23:47.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:47.223 "dma_device_type": 2 00:23:47.223 } 00:23:47.223 ], 00:23:47.223 "driver_specific": {} 00:23:47.223 }' 00:23:47.223 11:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:47.223 11:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:47.223 11:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:47.223 11:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:47.223 11:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:47.223 11:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:47.223 11:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:47.482 11:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:47.482 11:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == 
null ]] 00:23:47.482 11:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:47.482 11:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:47.482 11:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:47.482 11:05:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:48.050 [2024-07-25 11:05:54.947865] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:48.051 [2024-07-25 11:05:54.947900] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:48.051 [2024-07-25 11:05:54.947958] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:48.051 11:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:23:48.051 11:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:23:48.051 11:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:48.051 11:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:23:48.051 11:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:23:48.051 11:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:23:48.051 11:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:48.051 11:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:23:48.051 11:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:48.051 11:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
00:23:48.051 11:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:48.051 11:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:48.051 11:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:48.051 11:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:48.051 11:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:48.051 11:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:48.051 11:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:48.312 11:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:48.312 "name": "Existed_Raid", 00:23:48.312 "uuid": "c10b7a21-dd25-4a15-9f26-d332b9edd9a1", 00:23:48.312 "strip_size_kb": 64, 00:23:48.312 "state": "offline", 00:23:48.312 "raid_level": "concat", 00:23:48.312 "superblock": false, 00:23:48.312 "num_base_bdevs": 4, 00:23:48.312 "num_base_bdevs_discovered": 3, 00:23:48.312 "num_base_bdevs_operational": 3, 00:23:48.312 "base_bdevs_list": [ 00:23:48.312 { 00:23:48.312 "name": null, 00:23:48.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:48.312 "is_configured": false, 00:23:48.312 "data_offset": 0, 00:23:48.312 "data_size": 65536 00:23:48.312 }, 00:23:48.312 { 00:23:48.312 "name": "BaseBdev2", 00:23:48.312 "uuid": "13e814c3-efef-4d1b-9c55-a0ee3fe9ebc6", 00:23:48.312 "is_configured": true, 00:23:48.312 "data_offset": 0, 00:23:48.312 "data_size": 65536 00:23:48.312 }, 00:23:48.312 { 00:23:48.312 "name": "BaseBdev3", 00:23:48.312 "uuid": "36bf1e04-0298-45bd-a5c4-84431e18d29e", 00:23:48.312 "is_configured": true, 00:23:48.312 
"data_offset": 0, 00:23:48.312 "data_size": 65536 00:23:48.312 }, 00:23:48.312 { 00:23:48.312 "name": "BaseBdev4", 00:23:48.312 "uuid": "0bf8ea1b-93ab-4f27-b753-78e75d05d363", 00:23:48.312 "is_configured": true, 00:23:48.312 "data_offset": 0, 00:23:48.312 "data_size": 65536 00:23:48.312 } 00:23:48.312 ] 00:23:48.312 }' 00:23:48.312 11:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:48.312 11:05:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:48.914 11:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:23:48.914 11:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:48.914 11:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:48.914 11:05:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:49.172 11:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:49.172 11:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:49.172 11:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:49.172 [2024-07-25 11:05:56.242885] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:49.430 11:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:49.430 11:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:49.430 11:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:23:49.430 11:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:49.690 11:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:49.690 11:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:49.690 11:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:49.950 [2024-07-25 11:05:56.819375] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:49.950 11:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:49.950 11:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:49.950 11:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:49.950 11:05:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:50.208 11:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:50.208 11:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:50.208 11:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:23:50.467 [2024-07-25 11:05:57.388722] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:23:50.467 [2024-07-25 11:05:57.388775] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:23:50.467 11:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:50.467 
11:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:50.467 11:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:50.467 11:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:23:50.725 11:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:23:50.725 11:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:23:50.725 11:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:23:50.725 11:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:23:50.725 11:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:50.725 11:05:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:50.984 BaseBdev2 00:23:50.984 11:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:23:50.984 11:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:23:50.984 11:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:50.984 11:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:23:50.984 11:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:50.984 11:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:50.984 11:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:51.243 11:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:51.502 [ 00:23:51.502 { 00:23:51.502 "name": "BaseBdev2", 00:23:51.502 "aliases": [ 00:23:51.502 "e5f51009-6b88-490f-ae7f-2c8631ce0303" 00:23:51.502 ], 00:23:51.502 "product_name": "Malloc disk", 00:23:51.502 "block_size": 512, 00:23:51.502 "num_blocks": 65536, 00:23:51.502 "uuid": "e5f51009-6b88-490f-ae7f-2c8631ce0303", 00:23:51.502 "assigned_rate_limits": { 00:23:51.502 "rw_ios_per_sec": 0, 00:23:51.502 "rw_mbytes_per_sec": 0, 00:23:51.502 "r_mbytes_per_sec": 0, 00:23:51.502 "w_mbytes_per_sec": 0 00:23:51.502 }, 00:23:51.502 "claimed": false, 00:23:51.502 "zoned": false, 00:23:51.502 "supported_io_types": { 00:23:51.502 "read": true, 00:23:51.502 "write": true, 00:23:51.502 "unmap": true, 00:23:51.502 "flush": true, 00:23:51.502 "reset": true, 00:23:51.503 "nvme_admin": false, 00:23:51.503 "nvme_io": false, 00:23:51.503 "nvme_io_md": false, 00:23:51.503 "write_zeroes": true, 00:23:51.503 "zcopy": true, 00:23:51.503 "get_zone_info": false, 00:23:51.503 "zone_management": false, 00:23:51.503 "zone_append": false, 00:23:51.503 "compare": false, 00:23:51.503 "compare_and_write": false, 00:23:51.503 "abort": true, 00:23:51.503 "seek_hole": false, 00:23:51.503 "seek_data": false, 00:23:51.503 "copy": true, 00:23:51.503 "nvme_iov_md": false 00:23:51.503 }, 00:23:51.503 "memory_domains": [ 00:23:51.503 { 00:23:51.503 "dma_device_id": "system", 00:23:51.503 "dma_device_type": 1 00:23:51.503 }, 00:23:51.503 { 00:23:51.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:51.503 "dma_device_type": 2 00:23:51.503 } 00:23:51.503 ], 00:23:51.503 "driver_specific": {} 00:23:51.503 } 00:23:51.503 ] 00:23:51.503 11:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 
00:23:51.503 11:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:51.503 11:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:51.503 11:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:51.761 BaseBdev3 00:23:51.761 11:05:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:23:51.761 11:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:23:51.761 11:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:51.761 11:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:23:51.761 11:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:51.761 11:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:51.761 11:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:52.020 11:05:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:52.278 [ 00:23:52.278 { 00:23:52.278 "name": "BaseBdev3", 00:23:52.278 "aliases": [ 00:23:52.279 "066302d2-e448-4b16-93b2-05154bd56e6d" 00:23:52.279 ], 00:23:52.279 "product_name": "Malloc disk", 00:23:52.279 "block_size": 512, 00:23:52.279 "num_blocks": 65536, 00:23:52.279 "uuid": "066302d2-e448-4b16-93b2-05154bd56e6d", 00:23:52.279 "assigned_rate_limits": { 00:23:52.279 "rw_ios_per_sec": 0, 00:23:52.279 "rw_mbytes_per_sec": 0, 00:23:52.279 
"r_mbytes_per_sec": 0, 00:23:52.279 "w_mbytes_per_sec": 0 00:23:52.279 }, 00:23:52.279 "claimed": false, 00:23:52.279 "zoned": false, 00:23:52.279 "supported_io_types": { 00:23:52.279 "read": true, 00:23:52.279 "write": true, 00:23:52.279 "unmap": true, 00:23:52.279 "flush": true, 00:23:52.279 "reset": true, 00:23:52.279 "nvme_admin": false, 00:23:52.279 "nvme_io": false, 00:23:52.279 "nvme_io_md": false, 00:23:52.279 "write_zeroes": true, 00:23:52.279 "zcopy": true, 00:23:52.279 "get_zone_info": false, 00:23:52.279 "zone_management": false, 00:23:52.279 "zone_append": false, 00:23:52.279 "compare": false, 00:23:52.279 "compare_and_write": false, 00:23:52.279 "abort": true, 00:23:52.279 "seek_hole": false, 00:23:52.279 "seek_data": false, 00:23:52.279 "copy": true, 00:23:52.279 "nvme_iov_md": false 00:23:52.279 }, 00:23:52.279 "memory_domains": [ 00:23:52.279 { 00:23:52.279 "dma_device_id": "system", 00:23:52.279 "dma_device_type": 1 00:23:52.279 }, 00:23:52.279 { 00:23:52.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:52.279 "dma_device_type": 2 00:23:52.279 } 00:23:52.279 ], 00:23:52.279 "driver_specific": {} 00:23:52.279 } 00:23:52.279 ] 00:23:52.279 11:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:23:52.279 11:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:52.279 11:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:52.279 11:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:52.537 BaseBdev4 00:23:52.537 11:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:23:52.537 11:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:23:52.537 11:05:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:52.537 11:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:23:52.537 11:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:52.537 11:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:52.537 11:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:52.796 11:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:53.055 [ 00:23:53.055 { 00:23:53.055 "name": "BaseBdev4", 00:23:53.055 "aliases": [ 00:23:53.055 "5da55ed0-4f54-4ec8-b167-f74de0a86c1c" 00:23:53.055 ], 00:23:53.055 "product_name": "Malloc disk", 00:23:53.055 "block_size": 512, 00:23:53.055 "num_blocks": 65536, 00:23:53.055 "uuid": "5da55ed0-4f54-4ec8-b167-f74de0a86c1c", 00:23:53.055 "assigned_rate_limits": { 00:23:53.055 "rw_ios_per_sec": 0, 00:23:53.055 "rw_mbytes_per_sec": 0, 00:23:53.055 "r_mbytes_per_sec": 0, 00:23:53.055 "w_mbytes_per_sec": 0 00:23:53.055 }, 00:23:53.055 "claimed": false, 00:23:53.055 "zoned": false, 00:23:53.055 "supported_io_types": { 00:23:53.055 "read": true, 00:23:53.055 "write": true, 00:23:53.055 "unmap": true, 00:23:53.055 "flush": true, 00:23:53.055 "reset": true, 00:23:53.055 "nvme_admin": false, 00:23:53.055 "nvme_io": false, 00:23:53.055 "nvme_io_md": false, 00:23:53.055 "write_zeroes": true, 00:23:53.055 "zcopy": true, 00:23:53.055 "get_zone_info": false, 00:23:53.055 "zone_management": false, 00:23:53.055 "zone_append": false, 00:23:53.055 "compare": false, 00:23:53.055 "compare_and_write": false, 00:23:53.055 "abort": true, 00:23:53.055 
"seek_hole": false, 00:23:53.055 "seek_data": false, 00:23:53.055 "copy": true, 00:23:53.055 "nvme_iov_md": false 00:23:53.055 }, 00:23:53.055 "memory_domains": [ 00:23:53.055 { 00:23:53.055 "dma_device_id": "system", 00:23:53.055 "dma_device_type": 1 00:23:53.055 }, 00:23:53.055 { 00:23:53.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:53.055 "dma_device_type": 2 00:23:53.055 } 00:23:53.055 ], 00:23:53.055 "driver_specific": {} 00:23:53.055 } 00:23:53.055 ] 00:23:53.055 11:05:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:23:53.055 11:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:53.055 11:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:53.055 11:05:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:53.055 [2024-07-25 11:06:00.140160] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:53.055 [2024-07-25 11:06:00.140208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:53.055 [2024-07-25 11:06:00.140239] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:53.055 [2024-07-25 11:06:00.142570] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:53.055 [2024-07-25 11:06:00.142630] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:53.055 11:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:53.055 11:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:53.055 11:06:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:53.055 11:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:53.055 11:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:53.055 11:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:53.055 11:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:53.056 11:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:53.056 11:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:53.056 11:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:53.056 11:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:53.056 11:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:53.315 11:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:53.315 "name": "Existed_Raid", 00:23:53.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:53.315 "strip_size_kb": 64, 00:23:53.315 "state": "configuring", 00:23:53.315 "raid_level": "concat", 00:23:53.315 "superblock": false, 00:23:53.315 "num_base_bdevs": 4, 00:23:53.315 "num_base_bdevs_discovered": 3, 00:23:53.315 "num_base_bdevs_operational": 4, 00:23:53.315 "base_bdevs_list": [ 00:23:53.315 { 00:23:53.315 "name": "BaseBdev1", 00:23:53.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:53.315 "is_configured": false, 00:23:53.315 "data_offset": 0, 00:23:53.315 "data_size": 0 00:23:53.315 }, 00:23:53.315 { 00:23:53.315 "name": "BaseBdev2", 00:23:53.315 "uuid": 
"e5f51009-6b88-490f-ae7f-2c8631ce0303", 00:23:53.315 "is_configured": true, 00:23:53.315 "data_offset": 0, 00:23:53.315 "data_size": 65536 00:23:53.315 }, 00:23:53.315 { 00:23:53.315 "name": "BaseBdev3", 00:23:53.315 "uuid": "066302d2-e448-4b16-93b2-05154bd56e6d", 00:23:53.315 "is_configured": true, 00:23:53.315 "data_offset": 0, 00:23:53.315 "data_size": 65536 00:23:53.315 }, 00:23:53.315 { 00:23:53.315 "name": "BaseBdev4", 00:23:53.315 "uuid": "5da55ed0-4f54-4ec8-b167-f74de0a86c1c", 00:23:53.315 "is_configured": true, 00:23:53.315 "data_offset": 0, 00:23:53.315 "data_size": 65536 00:23:53.315 } 00:23:53.315 ] 00:23:53.315 }' 00:23:53.315 11:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:53.315 11:06:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:53.882 11:06:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:23:54.140 [2024-07-25 11:06:01.194968] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:54.140 11:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:54.140 11:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:54.140 11:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:54.140 11:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:54.140 11:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:54.140 11:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:54.140 11:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:23:54.140 11:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:54.140 11:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:54.140 11:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:54.140 11:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:54.140 11:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.399 11:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:54.399 "name": "Existed_Raid", 00:23:54.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:54.399 "strip_size_kb": 64, 00:23:54.399 "state": "configuring", 00:23:54.399 "raid_level": "concat", 00:23:54.399 "superblock": false, 00:23:54.399 "num_base_bdevs": 4, 00:23:54.399 "num_base_bdevs_discovered": 2, 00:23:54.399 "num_base_bdevs_operational": 4, 00:23:54.399 "base_bdevs_list": [ 00:23:54.399 { 00:23:54.399 "name": "BaseBdev1", 00:23:54.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:54.399 "is_configured": false, 00:23:54.399 "data_offset": 0, 00:23:54.399 "data_size": 0 00:23:54.399 }, 00:23:54.399 { 00:23:54.399 "name": null, 00:23:54.399 "uuid": "e5f51009-6b88-490f-ae7f-2c8631ce0303", 00:23:54.399 "is_configured": false, 00:23:54.399 "data_offset": 0, 00:23:54.399 "data_size": 65536 00:23:54.399 }, 00:23:54.399 { 00:23:54.399 "name": "BaseBdev3", 00:23:54.399 "uuid": "066302d2-e448-4b16-93b2-05154bd56e6d", 00:23:54.399 "is_configured": true, 00:23:54.399 "data_offset": 0, 00:23:54.399 "data_size": 65536 00:23:54.399 }, 00:23:54.399 { 00:23:54.399 "name": "BaseBdev4", 00:23:54.399 "uuid": "5da55ed0-4f54-4ec8-b167-f74de0a86c1c", 00:23:54.399 "is_configured": true, 00:23:54.399 
"data_offset": 0, 00:23:54.399 "data_size": 65536 00:23:54.399 } 00:23:54.399 ] 00:23:54.399 }' 00:23:54.399 11:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:54.399 11:06:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:54.966 11:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.966 11:06:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:55.225 11:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:23:55.225 11:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:55.484 [2024-07-25 11:06:02.444297] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:55.484 BaseBdev1 00:23:55.484 11:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:23:55.484 11:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:23:55.484 11:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:23:55.484 11:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:23:55.484 11:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:23:55.484 11:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:23:55.484 11:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:55.742 
11:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:56.002 [ 00:23:56.002 { 00:23:56.002 "name": "BaseBdev1", 00:23:56.002 "aliases": [ 00:23:56.002 "d7ea45d2-7ec4-4f22-b27d-1bd7a1042018" 00:23:56.002 ], 00:23:56.002 "product_name": "Malloc disk", 00:23:56.002 "block_size": 512, 00:23:56.002 "num_blocks": 65536, 00:23:56.002 "uuid": "d7ea45d2-7ec4-4f22-b27d-1bd7a1042018", 00:23:56.002 "assigned_rate_limits": { 00:23:56.002 "rw_ios_per_sec": 0, 00:23:56.002 "rw_mbytes_per_sec": 0, 00:23:56.002 "r_mbytes_per_sec": 0, 00:23:56.002 "w_mbytes_per_sec": 0 00:23:56.002 }, 00:23:56.002 "claimed": true, 00:23:56.002 "claim_type": "exclusive_write", 00:23:56.002 "zoned": false, 00:23:56.002 "supported_io_types": { 00:23:56.002 "read": true, 00:23:56.002 "write": true, 00:23:56.002 "unmap": true, 00:23:56.002 "flush": true, 00:23:56.002 "reset": true, 00:23:56.002 "nvme_admin": false, 00:23:56.002 "nvme_io": false, 00:23:56.002 "nvme_io_md": false, 00:23:56.002 "write_zeroes": true, 00:23:56.002 "zcopy": true, 00:23:56.002 "get_zone_info": false, 00:23:56.002 "zone_management": false, 00:23:56.002 "zone_append": false, 00:23:56.002 "compare": false, 00:23:56.002 "compare_and_write": false, 00:23:56.002 "abort": true, 00:23:56.002 "seek_hole": false, 00:23:56.002 "seek_data": false, 00:23:56.002 "copy": true, 00:23:56.002 "nvme_iov_md": false 00:23:56.002 }, 00:23:56.002 "memory_domains": [ 00:23:56.002 { 00:23:56.002 "dma_device_id": "system", 00:23:56.002 "dma_device_type": 1 00:23:56.002 }, 00:23:56.002 { 00:23:56.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:56.002 "dma_device_type": 2 00:23:56.002 } 00:23:56.002 ], 00:23:56.002 "driver_specific": {} 00:23:56.002 } 00:23:56.002 ] 00:23:56.002 11:06:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:23:56.002 11:06:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:56.002 11:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:56.002 11:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:56.002 11:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:56.002 11:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:56.002 11:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:56.002 11:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:56.002 11:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:56.002 11:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:56.002 11:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:56.002 11:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:56.002 11:06:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:56.262 11:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:56.262 "name": "Existed_Raid", 00:23:56.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:56.262 "strip_size_kb": 64, 00:23:56.262 "state": "configuring", 00:23:56.262 "raid_level": "concat", 00:23:56.262 "superblock": false, 00:23:56.262 "num_base_bdevs": 4, 00:23:56.262 "num_base_bdevs_discovered": 3, 00:23:56.262 "num_base_bdevs_operational": 4, 00:23:56.262 "base_bdevs_list": [ 00:23:56.262 { 
00:23:56.262 "name": "BaseBdev1", 00:23:56.262 "uuid": "d7ea45d2-7ec4-4f22-b27d-1bd7a1042018", 00:23:56.262 "is_configured": true, 00:23:56.262 "data_offset": 0, 00:23:56.262 "data_size": 65536 00:23:56.262 }, 00:23:56.262 { 00:23:56.262 "name": null, 00:23:56.262 "uuid": "e5f51009-6b88-490f-ae7f-2c8631ce0303", 00:23:56.262 "is_configured": false, 00:23:56.262 "data_offset": 0, 00:23:56.262 "data_size": 65536 00:23:56.262 }, 00:23:56.262 { 00:23:56.262 "name": "BaseBdev3", 00:23:56.262 "uuid": "066302d2-e448-4b16-93b2-05154bd56e6d", 00:23:56.262 "is_configured": true, 00:23:56.262 "data_offset": 0, 00:23:56.262 "data_size": 65536 00:23:56.262 }, 00:23:56.262 { 00:23:56.262 "name": "BaseBdev4", 00:23:56.262 "uuid": "5da55ed0-4f54-4ec8-b167-f74de0a86c1c", 00:23:56.262 "is_configured": true, 00:23:56.262 "data_offset": 0, 00:23:56.262 "data_size": 65536 00:23:56.262 } 00:23:56.262 ] 00:23:56.262 }' 00:23:56.262 11:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:56.262 11:06:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.831 11:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:56.831 11:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:56.831 11:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:23:56.831 11:06:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:23:57.090 [2024-07-25 11:06:04.040728] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:57.090 11:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 4 00:23:57.090 11:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:57.090 11:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:57.090 11:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:57.090 11:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:57.090 11:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:57.090 11:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:57.090 11:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:57.090 11:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:57.090 11:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:57.090 11:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:57.090 11:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.349 11:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:57.349 "name": "Existed_Raid", 00:23:57.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:57.349 "strip_size_kb": 64, 00:23:57.349 "state": "configuring", 00:23:57.349 "raid_level": "concat", 00:23:57.349 "superblock": false, 00:23:57.349 "num_base_bdevs": 4, 00:23:57.349 "num_base_bdevs_discovered": 2, 00:23:57.349 "num_base_bdevs_operational": 4, 00:23:57.349 "base_bdevs_list": [ 00:23:57.349 { 00:23:57.349 "name": "BaseBdev1", 00:23:57.349 "uuid": "d7ea45d2-7ec4-4f22-b27d-1bd7a1042018", 00:23:57.349 
"is_configured": true, 00:23:57.349 "data_offset": 0, 00:23:57.349 "data_size": 65536 00:23:57.349 }, 00:23:57.349 { 00:23:57.349 "name": null, 00:23:57.349 "uuid": "e5f51009-6b88-490f-ae7f-2c8631ce0303", 00:23:57.349 "is_configured": false, 00:23:57.349 "data_offset": 0, 00:23:57.349 "data_size": 65536 00:23:57.349 }, 00:23:57.349 { 00:23:57.349 "name": null, 00:23:57.349 "uuid": "066302d2-e448-4b16-93b2-05154bd56e6d", 00:23:57.349 "is_configured": false, 00:23:57.349 "data_offset": 0, 00:23:57.349 "data_size": 65536 00:23:57.349 }, 00:23:57.349 { 00:23:57.349 "name": "BaseBdev4", 00:23:57.349 "uuid": "5da55ed0-4f54-4ec8-b167-f74de0a86c1c", 00:23:57.349 "is_configured": true, 00:23:57.349 "data_offset": 0, 00:23:57.349 "data_size": 65536 00:23:57.349 } 00:23:57.349 ] 00:23:57.349 }' 00:23:57.349 11:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:57.349 11:06:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:57.917 11:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.917 11:06:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:58.175 11:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:23:58.175 11:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:23:58.433 [2024-07-25 11:06:05.316186] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:58.433 11:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:58.433 11:06:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:58.433 11:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:58.433 11:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:58.433 11:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:58.433 11:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:58.433 11:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:58.433 11:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:58.433 11:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:58.433 11:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:58.433 11:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.433 11:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:58.691 11:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:58.691 "name": "Existed_Raid", 00:23:58.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:58.691 "strip_size_kb": 64, 00:23:58.691 "state": "configuring", 00:23:58.691 "raid_level": "concat", 00:23:58.691 "superblock": false, 00:23:58.691 "num_base_bdevs": 4, 00:23:58.691 "num_base_bdevs_discovered": 3, 00:23:58.691 "num_base_bdevs_operational": 4, 00:23:58.691 "base_bdevs_list": [ 00:23:58.691 { 00:23:58.691 "name": "BaseBdev1", 00:23:58.691 "uuid": "d7ea45d2-7ec4-4f22-b27d-1bd7a1042018", 00:23:58.691 "is_configured": true, 00:23:58.691 "data_offset": 0, 00:23:58.691 "data_size": 65536 
00:23:58.691 }, 00:23:58.691 { 00:23:58.691 "name": null, 00:23:58.691 "uuid": "e5f51009-6b88-490f-ae7f-2c8631ce0303", 00:23:58.691 "is_configured": false, 00:23:58.691 "data_offset": 0, 00:23:58.691 "data_size": 65536 00:23:58.691 }, 00:23:58.691 { 00:23:58.691 "name": "BaseBdev3", 00:23:58.691 "uuid": "066302d2-e448-4b16-93b2-05154bd56e6d", 00:23:58.691 "is_configured": true, 00:23:58.691 "data_offset": 0, 00:23:58.691 "data_size": 65536 00:23:58.691 }, 00:23:58.691 { 00:23:58.691 "name": "BaseBdev4", 00:23:58.691 "uuid": "5da55ed0-4f54-4ec8-b167-f74de0a86c1c", 00:23:58.691 "is_configured": true, 00:23:58.691 "data_offset": 0, 00:23:58.691 "data_size": 65536 00:23:58.691 } 00:23:58.691 ] 00:23:58.691 }' 00:23:58.691 11:06:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:58.691 11:06:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.259 11:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:59.259 11:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:59.259 11:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:23:59.259 11:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:59.518 [2024-07-25 11:06:06.559577] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:59.777 11:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:59.777 11:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:59.777 11:06:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:59.777 11:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:59.777 11:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:59.777 11:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:59.777 11:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:59.777 11:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:59.777 11:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:59.777 11:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:59.777 11:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:59.777 11:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:59.777 11:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:59.777 "name": "Existed_Raid", 00:23:59.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:59.777 "strip_size_kb": 64, 00:23:59.777 "state": "configuring", 00:23:59.777 "raid_level": "concat", 00:23:59.777 "superblock": false, 00:23:59.777 "num_base_bdevs": 4, 00:23:59.777 "num_base_bdevs_discovered": 2, 00:23:59.777 "num_base_bdevs_operational": 4, 00:23:59.777 "base_bdevs_list": [ 00:23:59.777 { 00:23:59.777 "name": null, 00:23:59.777 "uuid": "d7ea45d2-7ec4-4f22-b27d-1bd7a1042018", 00:23:59.777 "is_configured": false, 00:23:59.777 "data_offset": 0, 00:23:59.777 "data_size": 65536 00:23:59.777 }, 00:23:59.777 { 00:23:59.777 "name": null, 00:23:59.777 "uuid": "e5f51009-6b88-490f-ae7f-2c8631ce0303", 
00:23:59.777 "is_configured": false, 00:23:59.777 "data_offset": 0, 00:23:59.777 "data_size": 65536 00:23:59.777 }, 00:23:59.777 { 00:23:59.777 "name": "BaseBdev3", 00:23:59.777 "uuid": "066302d2-e448-4b16-93b2-05154bd56e6d", 00:23:59.777 "is_configured": true, 00:23:59.777 "data_offset": 0, 00:23:59.777 "data_size": 65536 00:23:59.777 }, 00:23:59.777 { 00:23:59.777 "name": "BaseBdev4", 00:23:59.777 "uuid": "5da55ed0-4f54-4ec8-b167-f74de0a86c1c", 00:23:59.777 "is_configured": true, 00:23:59.777 "data_offset": 0, 00:23:59.777 "data_size": 65536 00:23:59.777 } 00:23:59.777 ] 00:23:59.777 }' 00:23:59.777 11:06:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:59.777 11:06:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:00.344 11:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:00.344 11:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:00.603 11:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:24:00.603 11:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:24:00.862 [2024-07-25 11:06:07.833071] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:00.862 11:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:00.862 11:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:00.862 11:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:00.862 
11:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:00.862 11:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:00.862 11:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:00.862 11:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:00.862 11:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:00.862 11:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:00.862 11:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:00.862 11:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:00.862 11:06:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:01.121 11:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:01.121 "name": "Existed_Raid", 00:24:01.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:01.121 "strip_size_kb": 64, 00:24:01.121 "state": "configuring", 00:24:01.121 "raid_level": "concat", 00:24:01.121 "superblock": false, 00:24:01.121 "num_base_bdevs": 4, 00:24:01.121 "num_base_bdevs_discovered": 3, 00:24:01.121 "num_base_bdevs_operational": 4, 00:24:01.121 "base_bdevs_list": [ 00:24:01.121 { 00:24:01.121 "name": null, 00:24:01.121 "uuid": "d7ea45d2-7ec4-4f22-b27d-1bd7a1042018", 00:24:01.121 "is_configured": false, 00:24:01.121 "data_offset": 0, 00:24:01.121 "data_size": 65536 00:24:01.121 }, 00:24:01.121 { 00:24:01.121 "name": "BaseBdev2", 00:24:01.121 "uuid": "e5f51009-6b88-490f-ae7f-2c8631ce0303", 00:24:01.121 "is_configured": true, 00:24:01.121 "data_offset": 0, 
00:24:01.121 "data_size": 65536 00:24:01.121 }, 00:24:01.121 { 00:24:01.121 "name": "BaseBdev3", 00:24:01.121 "uuid": "066302d2-e448-4b16-93b2-05154bd56e6d", 00:24:01.121 "is_configured": true, 00:24:01.121 "data_offset": 0, 00:24:01.121 "data_size": 65536 00:24:01.121 }, 00:24:01.121 { 00:24:01.121 "name": "BaseBdev4", 00:24:01.121 "uuid": "5da55ed0-4f54-4ec8-b167-f74de0a86c1c", 00:24:01.121 "is_configured": true, 00:24:01.121 "data_offset": 0, 00:24:01.121 "data_size": 65536 00:24:01.121 } 00:24:01.121 ] 00:24:01.121 }' 00:24:01.121 11:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:01.121 11:06:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:01.724 11:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:01.724 11:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:01.982 11:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:24:01.982 11:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:01.982 11:06:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:24:01.982 11:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u d7ea45d2-7ec4-4f22-b27d-1bd7a1042018 00:24:02.241 [2024-07-25 11:06:09.338322] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:24:02.241 [2024-07-25 11:06:09.338369] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000008200 00:24:02.241 [2024-07-25 11:06:09.338381] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:24:02.241 [2024-07-25 11:06:09.338706] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010b20 00:24:02.241 [2024-07-25 11:06:09.338909] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:24:02.241 [2024-07-25 11:06:09.338926] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:24:02.241 [2024-07-25 11:06:09.339253] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:02.241 NewBaseBdev 00:24:02.241 11:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:24:02.241 11:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:24:02.241 11:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:02.241 11:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:24:02.241 11:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:02.241 11:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:02.241 11:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:02.500 11:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:24:02.759 [ 00:24:02.759 { 00:24:02.759 "name": "NewBaseBdev", 00:24:02.759 "aliases": [ 00:24:02.759 "d7ea45d2-7ec4-4f22-b27d-1bd7a1042018" 00:24:02.759 ], 00:24:02.759 "product_name": "Malloc disk", 
00:24:02.759 "block_size": 512, 00:24:02.759 "num_blocks": 65536, 00:24:02.759 "uuid": "d7ea45d2-7ec4-4f22-b27d-1bd7a1042018", 00:24:02.759 "assigned_rate_limits": { 00:24:02.759 "rw_ios_per_sec": 0, 00:24:02.759 "rw_mbytes_per_sec": 0, 00:24:02.759 "r_mbytes_per_sec": 0, 00:24:02.759 "w_mbytes_per_sec": 0 00:24:02.759 }, 00:24:02.759 "claimed": true, 00:24:02.759 "claim_type": "exclusive_write", 00:24:02.759 "zoned": false, 00:24:02.759 "supported_io_types": { 00:24:02.759 "read": true, 00:24:02.759 "write": true, 00:24:02.759 "unmap": true, 00:24:02.759 "flush": true, 00:24:02.759 "reset": true, 00:24:02.759 "nvme_admin": false, 00:24:02.759 "nvme_io": false, 00:24:02.759 "nvme_io_md": false, 00:24:02.759 "write_zeroes": true, 00:24:02.759 "zcopy": true, 00:24:02.759 "get_zone_info": false, 00:24:02.759 "zone_management": false, 00:24:02.759 "zone_append": false, 00:24:02.759 "compare": false, 00:24:02.759 "compare_and_write": false, 00:24:02.759 "abort": true, 00:24:02.759 "seek_hole": false, 00:24:02.759 "seek_data": false, 00:24:02.759 "copy": true, 00:24:02.759 "nvme_iov_md": false 00:24:02.759 }, 00:24:02.759 "memory_domains": [ 00:24:02.759 { 00:24:02.759 "dma_device_id": "system", 00:24:02.759 "dma_device_type": 1 00:24:02.759 }, 00:24:02.759 { 00:24:02.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:02.759 "dma_device_type": 2 00:24:02.759 } 00:24:02.759 ], 00:24:02.759 "driver_specific": {} 00:24:02.759 } 00:24:02.759 ] 00:24:02.759 11:06:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:24:02.759 11:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:24:02.759 11:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:02.759 11:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:02.759 11:06:09 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:02.759 11:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:02.759 11:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:02.759 11:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:02.759 11:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:02.759 11:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:02.759 11:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:02.759 11:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:02.759 11:06:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:03.018 11:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:03.018 "name": "Existed_Raid", 00:24:03.018 "uuid": "98d84930-a703-4c5f-8526-43c737869af2", 00:24:03.018 "strip_size_kb": 64, 00:24:03.018 "state": "online", 00:24:03.018 "raid_level": "concat", 00:24:03.018 "superblock": false, 00:24:03.018 "num_base_bdevs": 4, 00:24:03.018 "num_base_bdevs_discovered": 4, 00:24:03.018 "num_base_bdevs_operational": 4, 00:24:03.018 "base_bdevs_list": [ 00:24:03.018 { 00:24:03.018 "name": "NewBaseBdev", 00:24:03.018 "uuid": "d7ea45d2-7ec4-4f22-b27d-1bd7a1042018", 00:24:03.018 "is_configured": true, 00:24:03.018 "data_offset": 0, 00:24:03.018 "data_size": 65536 00:24:03.018 }, 00:24:03.018 { 00:24:03.018 "name": "BaseBdev2", 00:24:03.018 "uuid": "e5f51009-6b88-490f-ae7f-2c8631ce0303", 00:24:03.018 "is_configured": true, 00:24:03.018 "data_offset": 0, 00:24:03.018 "data_size": 65536 00:24:03.018 }, 
00:24:03.018 { 00:24:03.018 "name": "BaseBdev3", 00:24:03.018 "uuid": "066302d2-e448-4b16-93b2-05154bd56e6d", 00:24:03.018 "is_configured": true, 00:24:03.018 "data_offset": 0, 00:24:03.018 "data_size": 65536 00:24:03.018 }, 00:24:03.018 { 00:24:03.018 "name": "BaseBdev4", 00:24:03.018 "uuid": "5da55ed0-4f54-4ec8-b167-f74de0a86c1c", 00:24:03.018 "is_configured": true, 00:24:03.018 "data_offset": 0, 00:24:03.018 "data_size": 65536 00:24:03.018 } 00:24:03.018 ] 00:24:03.018 }' 00:24:03.018 11:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:03.018 11:06:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:03.586 11:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:24:03.586 11:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:24:03.586 11:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:03.586 11:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:03.586 11:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:03.586 11:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:24:03.586 11:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:03.586 11:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:03.845 [2024-07-25 11:06:10.807006] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:03.845 11:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:03.845 "name": "Existed_Raid", 00:24:03.845 "aliases": [ 00:24:03.845 "98d84930-a703-4c5f-8526-43c737869af2" 
00:24:03.845 ], 00:24:03.845 "product_name": "Raid Volume", 00:24:03.845 "block_size": 512, 00:24:03.845 "num_blocks": 262144, 00:24:03.845 "uuid": "98d84930-a703-4c5f-8526-43c737869af2", 00:24:03.845 "assigned_rate_limits": { 00:24:03.845 "rw_ios_per_sec": 0, 00:24:03.845 "rw_mbytes_per_sec": 0, 00:24:03.845 "r_mbytes_per_sec": 0, 00:24:03.845 "w_mbytes_per_sec": 0 00:24:03.845 }, 00:24:03.845 "claimed": false, 00:24:03.845 "zoned": false, 00:24:03.845 "supported_io_types": { 00:24:03.845 "read": true, 00:24:03.845 "write": true, 00:24:03.845 "unmap": true, 00:24:03.845 "flush": true, 00:24:03.845 "reset": true, 00:24:03.845 "nvme_admin": false, 00:24:03.845 "nvme_io": false, 00:24:03.845 "nvme_io_md": false, 00:24:03.845 "write_zeroes": true, 00:24:03.845 "zcopy": false, 00:24:03.845 "get_zone_info": false, 00:24:03.845 "zone_management": false, 00:24:03.845 "zone_append": false, 00:24:03.845 "compare": false, 00:24:03.845 "compare_and_write": false, 00:24:03.845 "abort": false, 00:24:03.845 "seek_hole": false, 00:24:03.845 "seek_data": false, 00:24:03.845 "copy": false, 00:24:03.845 "nvme_iov_md": false 00:24:03.845 }, 00:24:03.845 "memory_domains": [ 00:24:03.845 { 00:24:03.845 "dma_device_id": "system", 00:24:03.845 "dma_device_type": 1 00:24:03.845 }, 00:24:03.845 { 00:24:03.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:03.845 "dma_device_type": 2 00:24:03.845 }, 00:24:03.845 { 00:24:03.845 "dma_device_id": "system", 00:24:03.845 "dma_device_type": 1 00:24:03.845 }, 00:24:03.845 { 00:24:03.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:03.845 "dma_device_type": 2 00:24:03.845 }, 00:24:03.845 { 00:24:03.845 "dma_device_id": "system", 00:24:03.845 "dma_device_type": 1 00:24:03.845 }, 00:24:03.845 { 00:24:03.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:03.845 "dma_device_type": 2 00:24:03.845 }, 00:24:03.845 { 00:24:03.845 "dma_device_id": "system", 00:24:03.845 "dma_device_type": 1 00:24:03.845 }, 00:24:03.845 { 00:24:03.845 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:24:03.845 "dma_device_type": 2 00:24:03.845 } 00:24:03.845 ], 00:24:03.845 "driver_specific": { 00:24:03.845 "raid": { 00:24:03.845 "uuid": "98d84930-a703-4c5f-8526-43c737869af2", 00:24:03.845 "strip_size_kb": 64, 00:24:03.845 "state": "online", 00:24:03.845 "raid_level": "concat", 00:24:03.845 "superblock": false, 00:24:03.845 "num_base_bdevs": 4, 00:24:03.845 "num_base_bdevs_discovered": 4, 00:24:03.845 "num_base_bdevs_operational": 4, 00:24:03.845 "base_bdevs_list": [ 00:24:03.845 { 00:24:03.845 "name": "NewBaseBdev", 00:24:03.845 "uuid": "d7ea45d2-7ec4-4f22-b27d-1bd7a1042018", 00:24:03.845 "is_configured": true, 00:24:03.845 "data_offset": 0, 00:24:03.845 "data_size": 65536 00:24:03.845 }, 00:24:03.845 { 00:24:03.845 "name": "BaseBdev2", 00:24:03.845 "uuid": "e5f51009-6b88-490f-ae7f-2c8631ce0303", 00:24:03.845 "is_configured": true, 00:24:03.845 "data_offset": 0, 00:24:03.845 "data_size": 65536 00:24:03.845 }, 00:24:03.845 { 00:24:03.845 "name": "BaseBdev3", 00:24:03.845 "uuid": "066302d2-e448-4b16-93b2-05154bd56e6d", 00:24:03.845 "is_configured": true, 00:24:03.845 "data_offset": 0, 00:24:03.845 "data_size": 65536 00:24:03.845 }, 00:24:03.845 { 00:24:03.845 "name": "BaseBdev4", 00:24:03.845 "uuid": "5da55ed0-4f54-4ec8-b167-f74de0a86c1c", 00:24:03.845 "is_configured": true, 00:24:03.845 "data_offset": 0, 00:24:03.845 "data_size": 65536 00:24:03.845 } 00:24:03.845 ] 00:24:03.845 } 00:24:03.845 } 00:24:03.845 }' 00:24:03.845 11:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:03.845 11:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:24:03.845 BaseBdev2 00:24:03.845 BaseBdev3 00:24:03.845 BaseBdev4' 00:24:03.845 11:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:03.845 11:06:10 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:24:03.845 11:06:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:04.104 11:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:04.104 "name": "NewBaseBdev", 00:24:04.104 "aliases": [ 00:24:04.104 "d7ea45d2-7ec4-4f22-b27d-1bd7a1042018" 00:24:04.104 ], 00:24:04.104 "product_name": "Malloc disk", 00:24:04.104 "block_size": 512, 00:24:04.104 "num_blocks": 65536, 00:24:04.104 "uuid": "d7ea45d2-7ec4-4f22-b27d-1bd7a1042018", 00:24:04.104 "assigned_rate_limits": { 00:24:04.104 "rw_ios_per_sec": 0, 00:24:04.104 "rw_mbytes_per_sec": 0, 00:24:04.104 "r_mbytes_per_sec": 0, 00:24:04.104 "w_mbytes_per_sec": 0 00:24:04.104 }, 00:24:04.104 "claimed": true, 00:24:04.104 "claim_type": "exclusive_write", 00:24:04.104 "zoned": false, 00:24:04.104 "supported_io_types": { 00:24:04.104 "read": true, 00:24:04.104 "write": true, 00:24:04.104 "unmap": true, 00:24:04.104 "flush": true, 00:24:04.104 "reset": true, 00:24:04.104 "nvme_admin": false, 00:24:04.104 "nvme_io": false, 00:24:04.104 "nvme_io_md": false, 00:24:04.104 "write_zeroes": true, 00:24:04.104 "zcopy": true, 00:24:04.104 "get_zone_info": false, 00:24:04.104 "zone_management": false, 00:24:04.104 "zone_append": false, 00:24:04.104 "compare": false, 00:24:04.104 "compare_and_write": false, 00:24:04.104 "abort": true, 00:24:04.104 "seek_hole": false, 00:24:04.104 "seek_data": false, 00:24:04.104 "copy": true, 00:24:04.104 "nvme_iov_md": false 00:24:04.104 }, 00:24:04.104 "memory_domains": [ 00:24:04.104 { 00:24:04.104 "dma_device_id": "system", 00:24:04.104 "dma_device_type": 1 00:24:04.104 }, 00:24:04.104 { 00:24:04.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:04.104 "dma_device_type": 2 00:24:04.104 } 00:24:04.104 ], 00:24:04.104 "driver_specific": {} 00:24:04.104 }' 00:24:04.105 11:06:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:04.105 11:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:04.105 11:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:04.105 11:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:04.364 11:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:04.364 11:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:04.364 11:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:04.364 11:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:04.364 11:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:04.364 11:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:04.364 11:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:04.364 11:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:04.364 11:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:04.364 11:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:04.364 11:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:04.623 11:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:04.623 "name": "BaseBdev2", 00:24:04.623 "aliases": [ 00:24:04.623 "e5f51009-6b88-490f-ae7f-2c8631ce0303" 00:24:04.623 ], 00:24:04.623 "product_name": "Malloc disk", 00:24:04.623 "block_size": 512, 00:24:04.623 "num_blocks": 65536, 00:24:04.623 "uuid": 
"e5f51009-6b88-490f-ae7f-2c8631ce0303", 00:24:04.623 "assigned_rate_limits": { 00:24:04.623 "rw_ios_per_sec": 0, 00:24:04.623 "rw_mbytes_per_sec": 0, 00:24:04.623 "r_mbytes_per_sec": 0, 00:24:04.623 "w_mbytes_per_sec": 0 00:24:04.623 }, 00:24:04.623 "claimed": true, 00:24:04.623 "claim_type": "exclusive_write", 00:24:04.623 "zoned": false, 00:24:04.623 "supported_io_types": { 00:24:04.623 "read": true, 00:24:04.623 "write": true, 00:24:04.623 "unmap": true, 00:24:04.623 "flush": true, 00:24:04.623 "reset": true, 00:24:04.623 "nvme_admin": false, 00:24:04.623 "nvme_io": false, 00:24:04.623 "nvme_io_md": false, 00:24:04.623 "write_zeroes": true, 00:24:04.623 "zcopy": true, 00:24:04.623 "get_zone_info": false, 00:24:04.623 "zone_management": false, 00:24:04.623 "zone_append": false, 00:24:04.623 "compare": false, 00:24:04.623 "compare_and_write": false, 00:24:04.623 "abort": true, 00:24:04.623 "seek_hole": false, 00:24:04.623 "seek_data": false, 00:24:04.623 "copy": true, 00:24:04.623 "nvme_iov_md": false 00:24:04.623 }, 00:24:04.623 "memory_domains": [ 00:24:04.623 { 00:24:04.623 "dma_device_id": "system", 00:24:04.623 "dma_device_type": 1 00:24:04.623 }, 00:24:04.623 { 00:24:04.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:04.623 "dma_device_type": 2 00:24:04.623 } 00:24:04.623 ], 00:24:04.623 "driver_specific": {} 00:24:04.623 }' 00:24:04.623 11:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:04.623 11:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:04.882 11:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:04.882 11:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:04.882 11:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:04.882 11:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:04.882 11:06:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:04.882 11:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:04.882 11:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:04.882 11:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:04.882 11:06:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:05.142 11:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:05.142 11:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:05.142 11:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:05.142 11:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:05.142 11:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:05.142 "name": "BaseBdev3", 00:24:05.142 "aliases": [ 00:24:05.142 "066302d2-e448-4b16-93b2-05154bd56e6d" 00:24:05.142 ], 00:24:05.142 "product_name": "Malloc disk", 00:24:05.142 "block_size": 512, 00:24:05.142 "num_blocks": 65536, 00:24:05.142 "uuid": "066302d2-e448-4b16-93b2-05154bd56e6d", 00:24:05.142 "assigned_rate_limits": { 00:24:05.142 "rw_ios_per_sec": 0, 00:24:05.142 "rw_mbytes_per_sec": 0, 00:24:05.142 "r_mbytes_per_sec": 0, 00:24:05.142 "w_mbytes_per_sec": 0 00:24:05.142 }, 00:24:05.142 "claimed": true, 00:24:05.142 "claim_type": "exclusive_write", 00:24:05.142 "zoned": false, 00:24:05.142 "supported_io_types": { 00:24:05.142 "read": true, 00:24:05.142 "write": true, 00:24:05.142 "unmap": true, 00:24:05.142 "flush": true, 00:24:05.142 "reset": true, 00:24:05.142 "nvme_admin": false, 00:24:05.142 "nvme_io": false, 00:24:05.142 "nvme_io_md": false, 
00:24:05.142 "write_zeroes": true, 00:24:05.142 "zcopy": true, 00:24:05.142 "get_zone_info": false, 00:24:05.142 "zone_management": false, 00:24:05.142 "zone_append": false, 00:24:05.142 "compare": false, 00:24:05.142 "compare_and_write": false, 00:24:05.142 "abort": true, 00:24:05.142 "seek_hole": false, 00:24:05.142 "seek_data": false, 00:24:05.142 "copy": true, 00:24:05.142 "nvme_iov_md": false 00:24:05.142 }, 00:24:05.142 "memory_domains": [ 00:24:05.142 { 00:24:05.142 "dma_device_id": "system", 00:24:05.142 "dma_device_type": 1 00:24:05.142 }, 00:24:05.142 { 00:24:05.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:05.142 "dma_device_type": 2 00:24:05.142 } 00:24:05.142 ], 00:24:05.142 "driver_specific": {} 00:24:05.142 }' 00:24:05.142 11:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:05.402 11:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:05.402 11:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:05.402 11:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:05.402 11:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:05.402 11:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:05.402 11:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:05.402 11:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:05.402 11:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:05.402 11:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:05.661 11:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:05.661 11:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:05.661 11:06:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:05.661 11:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:05.661 11:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:05.921 11:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:05.921 "name": "BaseBdev4", 00:24:05.921 "aliases": [ 00:24:05.921 "5da55ed0-4f54-4ec8-b167-f74de0a86c1c" 00:24:05.921 ], 00:24:05.921 "product_name": "Malloc disk", 00:24:05.921 "block_size": 512, 00:24:05.921 "num_blocks": 65536, 00:24:05.921 "uuid": "5da55ed0-4f54-4ec8-b167-f74de0a86c1c", 00:24:05.921 "assigned_rate_limits": { 00:24:05.921 "rw_ios_per_sec": 0, 00:24:05.921 "rw_mbytes_per_sec": 0, 00:24:05.921 "r_mbytes_per_sec": 0, 00:24:05.921 "w_mbytes_per_sec": 0 00:24:05.921 }, 00:24:05.921 "claimed": true, 00:24:05.921 "claim_type": "exclusive_write", 00:24:05.921 "zoned": false, 00:24:05.921 "supported_io_types": { 00:24:05.921 "read": true, 00:24:05.921 "write": true, 00:24:05.921 "unmap": true, 00:24:05.921 "flush": true, 00:24:05.921 "reset": true, 00:24:05.921 "nvme_admin": false, 00:24:05.921 "nvme_io": false, 00:24:05.921 "nvme_io_md": false, 00:24:05.921 "write_zeroes": true, 00:24:05.921 "zcopy": true, 00:24:05.921 "get_zone_info": false, 00:24:05.921 "zone_management": false, 00:24:05.921 "zone_append": false, 00:24:05.921 "compare": false, 00:24:05.921 "compare_and_write": false, 00:24:05.921 "abort": true, 00:24:05.921 "seek_hole": false, 00:24:05.921 "seek_data": false, 00:24:05.921 "copy": true, 00:24:05.921 "nvme_iov_md": false 00:24:05.921 }, 00:24:05.921 "memory_domains": [ 00:24:05.921 { 00:24:05.921 "dma_device_id": "system", 00:24:05.921 "dma_device_type": 1 00:24:05.921 }, 00:24:05.921 { 00:24:05.921 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:24:05.921 "dma_device_type": 2 00:24:05.921 } 00:24:05.921 ], 00:24:05.921 "driver_specific": {} 00:24:05.921 }' 00:24:05.921 11:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:05.921 11:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:05.921 11:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:05.921 11:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:05.921 11:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:05.921 11:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:05.921 11:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:05.921 11:06:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:05.921 11:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:05.921 11:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:06.180 11:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:06.180 11:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:06.180 11:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:06.445 [2024-07-25 11:06:13.321398] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:06.445 [2024-07-25 11:06:13.321433] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:06.445 [2024-07-25 11:06:13.321514] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:06.445 [2024-07-25 11:06:13.321600] 
bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:06.445 [2024-07-25 11:06:13.321616] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:24:06.445 11:06:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 3654112 00:24:06.445 11:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 3654112 ']' 00:24:06.445 11:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 3654112 00:24:06.446 11:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:24:06.446 11:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:06.446 11:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3654112 00:24:06.446 11:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:06.446 11:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:06.446 11:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3654112' 00:24:06.446 killing process with pid 3654112 00:24:06.446 11:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 3654112 00:24:06.446 [2024-07-25 11:06:13.397738] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:06.446 11:06:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 3654112 00:24:07.015 [2024-07-25 11:06:13.857153] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:08.921 11:06:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:24:08.921 00:24:08.921 real 0m33.131s 00:24:08.921 user 0m58.079s 00:24:08.921 sys 0m5.623s 00:24:08.921 
11:06:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:08.921 11:06:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:08.921 ************************************ 00:24:08.921 END TEST raid_state_function_test 00:24:08.921 ************************************ 00:24:08.921 11:06:15 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:24:08.921 11:06:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:24:08.921 11:06:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:08.921 11:06:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:08.921 ************************************ 00:24:08.921 START TEST raid_state_function_test_sb 00:24:08.921 ************************************ 00:24:08.921 11:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:24:08.921 11:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:24:08.921 11:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:24:08.921 11:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:24:08.921 11:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:24:08.921 11:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:24:08.921 11:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:08.921 11:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:24:08.921 11:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:08.921 11:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= 
num_base_bdevs )) 00:24:08.921 11:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:24:08.921 11:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:08.921 11:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:08.921 11:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:24:08.921 11:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:08.921 11:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:08.921 11:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:24:08.921 11:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:08.921 11:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:08.921 11:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:08.921 11:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:24:08.922 11:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:24:08.922 11:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:24:08.922 11:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:24:08.922 11:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:24:08.922 11:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:24:08.922 11:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:24:08.922 11:06:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:24:08.922 11:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:24:08.922 11:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:24:08.922 11:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=3660881 00:24:08.922 11:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3660881' 00:24:08.922 Process raid pid: 3660881 00:24:08.922 11:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:08.922 11:06:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 3660881 /var/tmp/spdk-raid.sock 00:24:08.922 11:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 3660881 ']' 00:24:08.922 11:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:08.922 11:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:08.922 11:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:08.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:08.922 11:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:08.922 11:06:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:08.922 [2024-07-25 11:06:15.748551] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:24:08.922 [2024-07-25 11:06:15.748664] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:08.922 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:08.922 EAL: Requested device 0000:3d:01.0 cannot be used 00:24:08.922 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:08.922 EAL: Requested device 0000:3d:01.1 cannot be used 00:24:08.922 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:08.922 EAL: Requested device 0000:3d:01.2 cannot be used 00:24:08.922 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:08.922 EAL: Requested device 0000:3d:01.3 cannot be used 00:24:08.922 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:08.922 EAL: Requested device 0000:3d:01.4 cannot be used 00:24:08.922 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:08.922 EAL: Requested device 0000:3d:01.5 cannot be used 00:24:08.922 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:08.922 EAL: Requested device 0000:3d:01.6 cannot be used 00:24:08.922 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:08.922 EAL: Requested device 0000:3d:01.7 cannot be used 00:24:08.922 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:08.922 EAL: Requested device 0000:3d:02.0 cannot be used 00:24:08.922 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:08.922 EAL: Requested device 0000:3d:02.1 cannot be used 00:24:08.922 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:08.922 EAL: Requested device 0000:3d:02.2 cannot be used 00:24:08.922 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:08.922 EAL: Requested device 0000:3d:02.3 cannot be used 00:24:08.922 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:08.922 EAL: Requested device 0000:3d:02.4 cannot be used 00:24:08.922 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:08.922 EAL: Requested device 0000:3d:02.5 cannot be used 00:24:08.922 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:08.922 EAL: Requested device 0000:3d:02.6 cannot be used 00:24:08.922 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:08.922 EAL: Requested device 0000:3d:02.7 cannot be used 00:24:08.922 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:08.922 EAL: Requested device 0000:3f:01.0 cannot be used 00:24:08.922 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:08.922 EAL: Requested device 0000:3f:01.1 cannot be used 00:24:08.922 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:08.922 EAL: Requested device 0000:3f:01.2 cannot be used 00:24:08.922 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:08.922 EAL: Requested device 0000:3f:01.3 cannot be used 00:24:08.922 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:08.922 EAL: Requested device 0000:3f:01.4 cannot be used 00:24:08.922 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:08.922 EAL: Requested device 0000:3f:01.5 cannot be used 00:24:08.922 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:08.922 EAL: Requested device 0000:3f:01.6 cannot be used 00:24:08.922 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:08.922 EAL: Requested device 0000:3f:01.7 cannot be used 00:24:08.922 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:08.922 EAL: Requested device 0000:3f:02.0 cannot be used 00:24:08.922 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:08.922 EAL: Requested device 0000:3f:02.1 cannot be used 00:24:08.922 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:08.922 EAL: Requested device 0000:3f:02.2 cannot be used 00:24:08.922 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:08.922 EAL: Requested device 0000:3f:02.3 cannot be used 00:24:08.922 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:08.922 EAL: Requested device 0000:3f:02.4 cannot be used 00:24:08.922 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:08.922 EAL: Requested device 0000:3f:02.5 cannot be used 00:24:08.922 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:08.922 EAL: Requested device 0000:3f:02.6 cannot be used 00:24:08.922 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:08.922 EAL: Requested device 0000:3f:02.7 cannot be used 00:24:08.922 [2024-07-25 11:06:15.964224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.183 [2024-07-25 11:06:16.252149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.751 [2024-07-25 11:06:16.598361] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:09.751 [2024-07-25 11:06:16.598397] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:09.751 11:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:09.751 11:06:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:24:09.751 11:06:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:10.010 [2024-07-25 11:06:16.988550] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:10.010 [2024-07-25 11:06:16.988606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev1 doesn't exist now 00:24:10.010 [2024-07-25 11:06:16.988620] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:10.010 [2024-07-25 11:06:16.988637] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:10.010 [2024-07-25 11:06:16.988648] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:10.010 [2024-07-25 11:06:16.988664] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:10.010 [2024-07-25 11:06:16.988675] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:10.010 [2024-07-25 11:06:16.988690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:10.010 11:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:10.010 11:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:10.010 11:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:10.010 11:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:10.010 11:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:10.010 11:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:10.010 11:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:10.010 11:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:10.010 11:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:10.010 11:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # 
local tmp 00:24:10.010 11:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:10.010 11:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:10.269 11:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:10.269 "name": "Existed_Raid", 00:24:10.269 "uuid": "d910aba0-39ee-4fb8-951a-51f7c58c7d5f", 00:24:10.269 "strip_size_kb": 64, 00:24:10.269 "state": "configuring", 00:24:10.269 "raid_level": "concat", 00:24:10.269 "superblock": true, 00:24:10.269 "num_base_bdevs": 4, 00:24:10.269 "num_base_bdevs_discovered": 0, 00:24:10.269 "num_base_bdevs_operational": 4, 00:24:10.269 "base_bdevs_list": [ 00:24:10.269 { 00:24:10.269 "name": "BaseBdev1", 00:24:10.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:10.269 "is_configured": false, 00:24:10.269 "data_offset": 0, 00:24:10.269 "data_size": 0 00:24:10.269 }, 00:24:10.269 { 00:24:10.269 "name": "BaseBdev2", 00:24:10.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:10.269 "is_configured": false, 00:24:10.269 "data_offset": 0, 00:24:10.269 "data_size": 0 00:24:10.269 }, 00:24:10.269 { 00:24:10.269 "name": "BaseBdev3", 00:24:10.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:10.269 "is_configured": false, 00:24:10.269 "data_offset": 0, 00:24:10.269 "data_size": 0 00:24:10.269 }, 00:24:10.269 { 00:24:10.269 "name": "BaseBdev4", 00:24:10.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:10.269 "is_configured": false, 00:24:10.269 "data_offset": 0, 00:24:10.269 "data_size": 0 00:24:10.269 } 00:24:10.269 ] 00:24:10.269 }' 00:24:10.269 11:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:10.269 11:06:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:10.841 
11:06:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:11.100 [2024-07-25 11:06:17.995148] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:11.100 [2024-07-25 11:06:17.995207] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:24:11.101 11:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:11.101 [2024-07-25 11:06:18.215812] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:11.101 [2024-07-25 11:06:18.215861] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:11.101 [2024-07-25 11:06:18.215875] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:11.101 [2024-07-25 11:06:18.215899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:11.101 [2024-07-25 11:06:18.215911] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:11.101 [2024-07-25 11:06:18.215927] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:11.101 [2024-07-25 11:06:18.215941] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:11.101 [2024-07-25 11:06:18.215957] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:11.360 11:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 
512 -b BaseBdev1 00:24:11.619 [2024-07-25 11:06:18.491846] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:11.619 BaseBdev1 00:24:11.619 11:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:24:11.619 11:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:24:11.619 11:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:11.619 11:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:24:11.619 11:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:11.619 11:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:11.619 11:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:11.619 11:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:11.878 [ 00:24:11.878 { 00:24:11.878 "name": "BaseBdev1", 00:24:11.878 "aliases": [ 00:24:11.878 "cbf937da-9e62-45de-a503-0d667c2b0f62" 00:24:11.878 ], 00:24:11.878 "product_name": "Malloc disk", 00:24:11.878 "block_size": 512, 00:24:11.878 "num_blocks": 65536, 00:24:11.878 "uuid": "cbf937da-9e62-45de-a503-0d667c2b0f62", 00:24:11.878 "assigned_rate_limits": { 00:24:11.878 "rw_ios_per_sec": 0, 00:24:11.878 "rw_mbytes_per_sec": 0, 00:24:11.878 "r_mbytes_per_sec": 0, 00:24:11.878 "w_mbytes_per_sec": 0 00:24:11.878 }, 00:24:11.878 "claimed": true, 00:24:11.878 "claim_type": "exclusive_write", 00:24:11.878 "zoned": false, 00:24:11.878 "supported_io_types": { 00:24:11.878 "read": true, 00:24:11.878 
"write": true, 00:24:11.878 "unmap": true, 00:24:11.878 "flush": true, 00:24:11.878 "reset": true, 00:24:11.878 "nvme_admin": false, 00:24:11.879 "nvme_io": false, 00:24:11.879 "nvme_io_md": false, 00:24:11.879 "write_zeroes": true, 00:24:11.879 "zcopy": true, 00:24:11.879 "get_zone_info": false, 00:24:11.879 "zone_management": false, 00:24:11.879 "zone_append": false, 00:24:11.879 "compare": false, 00:24:11.879 "compare_and_write": false, 00:24:11.879 "abort": true, 00:24:11.879 "seek_hole": false, 00:24:11.879 "seek_data": false, 00:24:11.879 "copy": true, 00:24:11.879 "nvme_iov_md": false 00:24:11.879 }, 00:24:11.879 "memory_domains": [ 00:24:11.879 { 00:24:11.879 "dma_device_id": "system", 00:24:11.879 "dma_device_type": 1 00:24:11.879 }, 00:24:11.879 { 00:24:11.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:11.879 "dma_device_type": 2 00:24:11.879 } 00:24:11.879 ], 00:24:11.879 "driver_specific": {} 00:24:11.879 } 00:24:11.879 ] 00:24:11.879 11:06:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:24:11.879 11:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:11.879 11:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:11.879 11:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:11.879 11:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:11.879 11:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:11.879 11:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:11.879 11:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:11.879 11:06:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:11.879 11:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:11.879 11:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:11.879 11:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:11.879 11:06:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:12.138 11:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:12.138 "name": "Existed_Raid", 00:24:12.138 "uuid": "a494c13a-282a-40bb-b9c6-3a6fb4b4ddcb", 00:24:12.138 "strip_size_kb": 64, 00:24:12.138 "state": "configuring", 00:24:12.138 "raid_level": "concat", 00:24:12.138 "superblock": true, 00:24:12.138 "num_base_bdevs": 4, 00:24:12.138 "num_base_bdevs_discovered": 1, 00:24:12.138 "num_base_bdevs_operational": 4, 00:24:12.138 "base_bdevs_list": [ 00:24:12.138 { 00:24:12.138 "name": "BaseBdev1", 00:24:12.138 "uuid": "cbf937da-9e62-45de-a503-0d667c2b0f62", 00:24:12.138 "is_configured": true, 00:24:12.138 "data_offset": 2048, 00:24:12.138 "data_size": 63488 00:24:12.138 }, 00:24:12.138 { 00:24:12.138 "name": "BaseBdev2", 00:24:12.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:12.138 "is_configured": false, 00:24:12.138 "data_offset": 0, 00:24:12.138 "data_size": 0 00:24:12.138 }, 00:24:12.138 { 00:24:12.138 "name": "BaseBdev3", 00:24:12.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:12.138 "is_configured": false, 00:24:12.138 "data_offset": 0, 00:24:12.138 "data_size": 0 00:24:12.138 }, 00:24:12.138 { 00:24:12.138 "name": "BaseBdev4", 00:24:12.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:12.138 "is_configured": false, 00:24:12.138 "data_offset": 0, 00:24:12.138 "data_size": 0 
00:24:12.138 } 00:24:12.138 ] 00:24:12.138 }' 00:24:12.138 11:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:12.138 11:06:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:12.706 11:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:12.706 [2024-07-25 11:06:19.799458] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:12.706 [2024-07-25 11:06:19.799515] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:24:12.707 11:06:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:12.966 [2024-07-25 11:06:20.024164] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:12.966 [2024-07-25 11:06:20.026502] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:12.966 [2024-07-25 11:06:20.026546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:12.966 [2024-07-25 11:06:20.026561] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:12.966 [2024-07-25 11:06:20.026578] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:12.966 [2024-07-25 11:06:20.026590] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:12.966 [2024-07-25 11:06:20.026611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:12.966 11:06:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:24:12.966 11:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:12.966 11:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:12.966 11:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:12.966 11:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:12.966 11:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:12.966 11:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:12.966 11:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:12.966 11:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:12.966 11:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:12.966 11:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:12.966 11:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:12.966 11:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:12.966 11:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:13.225 11:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:13.225 "name": "Existed_Raid", 00:24:13.225 "uuid": "046fafd7-7c66-4324-a192-9420485cbca0", 00:24:13.225 "strip_size_kb": 64, 00:24:13.225 "state": "configuring", 00:24:13.225 "raid_level": "concat", 
00:24:13.225 "superblock": true, 00:24:13.225 "num_base_bdevs": 4, 00:24:13.225 "num_base_bdevs_discovered": 1, 00:24:13.225 "num_base_bdevs_operational": 4, 00:24:13.225 "base_bdevs_list": [ 00:24:13.225 { 00:24:13.225 "name": "BaseBdev1", 00:24:13.225 "uuid": "cbf937da-9e62-45de-a503-0d667c2b0f62", 00:24:13.225 "is_configured": true, 00:24:13.225 "data_offset": 2048, 00:24:13.225 "data_size": 63488 00:24:13.225 }, 00:24:13.225 { 00:24:13.225 "name": "BaseBdev2", 00:24:13.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:13.225 "is_configured": false, 00:24:13.225 "data_offset": 0, 00:24:13.225 "data_size": 0 00:24:13.225 }, 00:24:13.225 { 00:24:13.225 "name": "BaseBdev3", 00:24:13.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:13.225 "is_configured": false, 00:24:13.225 "data_offset": 0, 00:24:13.225 "data_size": 0 00:24:13.225 }, 00:24:13.225 { 00:24:13.225 "name": "BaseBdev4", 00:24:13.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:13.225 "is_configured": false, 00:24:13.225 "data_offset": 0, 00:24:13.225 "data_size": 0 00:24:13.225 } 00:24:13.225 ] 00:24:13.225 }' 00:24:13.225 11:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:13.225 11:06:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:13.793 11:06:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:14.051 [2024-07-25 11:06:21.094841] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:14.051 BaseBdev2 00:24:14.051 11:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:24:14.051 11:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:24:14.051 11:06:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:14.051 11:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:24:14.051 11:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:14.051 11:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:14.051 11:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:14.309 11:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:14.567 [ 00:24:14.567 { 00:24:14.567 "name": "BaseBdev2", 00:24:14.567 "aliases": [ 00:24:14.567 "1cc85f79-3df2-4365-a0cf-cb1e69f41573" 00:24:14.567 ], 00:24:14.567 "product_name": "Malloc disk", 00:24:14.567 "block_size": 512, 00:24:14.567 "num_blocks": 65536, 00:24:14.567 "uuid": "1cc85f79-3df2-4365-a0cf-cb1e69f41573", 00:24:14.567 "assigned_rate_limits": { 00:24:14.567 "rw_ios_per_sec": 0, 00:24:14.567 "rw_mbytes_per_sec": 0, 00:24:14.567 "r_mbytes_per_sec": 0, 00:24:14.567 "w_mbytes_per_sec": 0 00:24:14.567 }, 00:24:14.567 "claimed": true, 00:24:14.567 "claim_type": "exclusive_write", 00:24:14.567 "zoned": false, 00:24:14.567 "supported_io_types": { 00:24:14.567 "read": true, 00:24:14.567 "write": true, 00:24:14.567 "unmap": true, 00:24:14.567 "flush": true, 00:24:14.567 "reset": true, 00:24:14.567 "nvme_admin": false, 00:24:14.567 "nvme_io": false, 00:24:14.567 "nvme_io_md": false, 00:24:14.567 "write_zeroes": true, 00:24:14.567 "zcopy": true, 00:24:14.567 "get_zone_info": false, 00:24:14.567 "zone_management": false, 00:24:14.567 "zone_append": false, 00:24:14.567 "compare": false, 00:24:14.567 "compare_and_write": false, 00:24:14.567 "abort": 
true, 00:24:14.567 "seek_hole": false, 00:24:14.567 "seek_data": false, 00:24:14.567 "copy": true, 00:24:14.567 "nvme_iov_md": false 00:24:14.567 }, 00:24:14.567 "memory_domains": [ 00:24:14.567 { 00:24:14.567 "dma_device_id": "system", 00:24:14.567 "dma_device_type": 1 00:24:14.567 }, 00:24:14.567 { 00:24:14.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:14.567 "dma_device_type": 2 00:24:14.567 } 00:24:14.567 ], 00:24:14.567 "driver_specific": {} 00:24:14.567 } 00:24:14.567 ] 00:24:14.567 11:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:24:14.567 11:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:14.567 11:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:14.567 11:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:14.567 11:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:14.567 11:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:14.567 11:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:14.567 11:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:14.567 11:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:14.567 11:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:14.567 11:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:14.567 11:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:14.567 11:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
00:24:14.567 11:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:14.567 11:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:14.836 11:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:14.836 "name": "Existed_Raid", 00:24:14.836 "uuid": "046fafd7-7c66-4324-a192-9420485cbca0", 00:24:14.836 "strip_size_kb": 64, 00:24:14.836 "state": "configuring", 00:24:14.836 "raid_level": "concat", 00:24:14.836 "superblock": true, 00:24:14.836 "num_base_bdevs": 4, 00:24:14.836 "num_base_bdevs_discovered": 2, 00:24:14.836 "num_base_bdevs_operational": 4, 00:24:14.836 "base_bdevs_list": [ 00:24:14.836 { 00:24:14.836 "name": "BaseBdev1", 00:24:14.836 "uuid": "cbf937da-9e62-45de-a503-0d667c2b0f62", 00:24:14.836 "is_configured": true, 00:24:14.836 "data_offset": 2048, 00:24:14.836 "data_size": 63488 00:24:14.836 }, 00:24:14.836 { 00:24:14.836 "name": "BaseBdev2", 00:24:14.836 "uuid": "1cc85f79-3df2-4365-a0cf-cb1e69f41573", 00:24:14.836 "is_configured": true, 00:24:14.836 "data_offset": 2048, 00:24:14.836 "data_size": 63488 00:24:14.836 }, 00:24:14.836 { 00:24:14.836 "name": "BaseBdev3", 00:24:14.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:14.836 "is_configured": false, 00:24:14.836 "data_offset": 0, 00:24:14.836 "data_size": 0 00:24:14.836 }, 00:24:14.836 { 00:24:14.836 "name": "BaseBdev4", 00:24:14.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:14.836 "is_configured": false, 00:24:14.836 "data_offset": 0, 00:24:14.836 "data_size": 0 00:24:14.836 } 00:24:14.836 ] 00:24:14.836 }' 00:24:14.836 11:06:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:14.836 11:06:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:15.444 
11:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:15.703 [2024-07-25 11:06:22.630408] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:15.703 BaseBdev3 00:24:15.703 11:06:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:24:15.703 11:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:24:15.703 11:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:15.703 11:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:24:15.703 11:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:15.703 11:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:15.703 11:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:15.961 11:06:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:16.220 [ 00:24:16.220 { 00:24:16.220 "name": "BaseBdev3", 00:24:16.220 "aliases": [ 00:24:16.220 "bee8d838-ce26-4f56-a70f-851d14c24c7a" 00:24:16.220 ], 00:24:16.220 "product_name": "Malloc disk", 00:24:16.220 "block_size": 512, 00:24:16.220 "num_blocks": 65536, 00:24:16.220 "uuid": "bee8d838-ce26-4f56-a70f-851d14c24c7a", 00:24:16.220 "assigned_rate_limits": { 00:24:16.220 "rw_ios_per_sec": 0, 00:24:16.220 "rw_mbytes_per_sec": 0, 00:24:16.220 "r_mbytes_per_sec": 0, 00:24:16.220 "w_mbytes_per_sec": 0 00:24:16.220 }, 
00:24:16.220 "claimed": true, 00:24:16.220 "claim_type": "exclusive_write", 00:24:16.220 "zoned": false, 00:24:16.220 "supported_io_types": { 00:24:16.220 "read": true, 00:24:16.220 "write": true, 00:24:16.220 "unmap": true, 00:24:16.220 "flush": true, 00:24:16.220 "reset": true, 00:24:16.220 "nvme_admin": false, 00:24:16.220 "nvme_io": false, 00:24:16.220 "nvme_io_md": false, 00:24:16.220 "write_zeroes": true, 00:24:16.220 "zcopy": true, 00:24:16.220 "get_zone_info": false, 00:24:16.220 "zone_management": false, 00:24:16.220 "zone_append": false, 00:24:16.220 "compare": false, 00:24:16.220 "compare_and_write": false, 00:24:16.220 "abort": true, 00:24:16.220 "seek_hole": false, 00:24:16.220 "seek_data": false, 00:24:16.220 "copy": true, 00:24:16.220 "nvme_iov_md": false 00:24:16.220 }, 00:24:16.220 "memory_domains": [ 00:24:16.220 { 00:24:16.220 "dma_device_id": "system", 00:24:16.220 "dma_device_type": 1 00:24:16.220 }, 00:24:16.220 { 00:24:16.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:16.220 "dma_device_type": 2 00:24:16.220 } 00:24:16.220 ], 00:24:16.220 "driver_specific": {} 00:24:16.220 } 00:24:16.220 ] 00:24:16.220 11:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:24:16.220 11:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:16.220 11:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:16.220 11:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:16.220 11:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:16.220 11:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:16.220 11:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:16.220 11:06:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:16.220 11:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:16.220 11:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:16.220 11:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:16.220 11:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:16.220 11:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:16.220 11:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:16.220 11:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:16.220 11:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:16.220 "name": "Existed_Raid", 00:24:16.220 "uuid": "046fafd7-7c66-4324-a192-9420485cbca0", 00:24:16.220 "strip_size_kb": 64, 00:24:16.220 "state": "configuring", 00:24:16.220 "raid_level": "concat", 00:24:16.220 "superblock": true, 00:24:16.220 "num_base_bdevs": 4, 00:24:16.220 "num_base_bdevs_discovered": 3, 00:24:16.220 "num_base_bdevs_operational": 4, 00:24:16.220 "base_bdevs_list": [ 00:24:16.220 { 00:24:16.220 "name": "BaseBdev1", 00:24:16.220 "uuid": "cbf937da-9e62-45de-a503-0d667c2b0f62", 00:24:16.220 "is_configured": true, 00:24:16.220 "data_offset": 2048, 00:24:16.220 "data_size": 63488 00:24:16.220 }, 00:24:16.220 { 00:24:16.220 "name": "BaseBdev2", 00:24:16.220 "uuid": "1cc85f79-3df2-4365-a0cf-cb1e69f41573", 00:24:16.220 "is_configured": true, 00:24:16.220 "data_offset": 2048, 00:24:16.220 "data_size": 63488 00:24:16.220 }, 00:24:16.220 { 00:24:16.220 "name": 
"BaseBdev3", 00:24:16.220 "uuid": "bee8d838-ce26-4f56-a70f-851d14c24c7a", 00:24:16.220 "is_configured": true, 00:24:16.220 "data_offset": 2048, 00:24:16.220 "data_size": 63488 00:24:16.220 }, 00:24:16.220 { 00:24:16.220 "name": "BaseBdev4", 00:24:16.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:16.220 "is_configured": false, 00:24:16.220 "data_offset": 0, 00:24:16.220 "data_size": 0 00:24:16.220 } 00:24:16.220 ] 00:24:16.220 }' 00:24:16.220 11:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:16.220 11:06:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:16.786 11:06:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:17.045 [2024-07-25 11:06:24.155481] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:17.045 [2024-07-25 11:06:24.155750] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:24:17.045 [2024-07-25 11:06:24.155774] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:17.045 [2024-07-25 11:06:24.156095] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:24:17.045 [2024-07-25 11:06:24.156347] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:17.045 [2024-07-25 11:06:24.156365] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:24:17.045 [2024-07-25 11:06:24.156536] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:17.045 BaseBdev4 00:24:17.304 11:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:24:17.304 11:06:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:24:17.304 11:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:17.304 11:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:24:17.304 11:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:17.304 11:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:17.304 11:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:17.304 11:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:17.561 [ 00:24:17.561 { 00:24:17.561 "name": "BaseBdev4", 00:24:17.561 "aliases": [ 00:24:17.561 "17db856d-4b89-4e41-a6ce-7fdf58247d96" 00:24:17.561 ], 00:24:17.561 "product_name": "Malloc disk", 00:24:17.561 "block_size": 512, 00:24:17.562 "num_blocks": 65536, 00:24:17.562 "uuid": "17db856d-4b89-4e41-a6ce-7fdf58247d96", 00:24:17.562 "assigned_rate_limits": { 00:24:17.562 "rw_ios_per_sec": 0, 00:24:17.562 "rw_mbytes_per_sec": 0, 00:24:17.562 "r_mbytes_per_sec": 0, 00:24:17.562 "w_mbytes_per_sec": 0 00:24:17.562 }, 00:24:17.562 "claimed": true, 00:24:17.562 "claim_type": "exclusive_write", 00:24:17.562 "zoned": false, 00:24:17.562 "supported_io_types": { 00:24:17.562 "read": true, 00:24:17.562 "write": true, 00:24:17.562 "unmap": true, 00:24:17.562 "flush": true, 00:24:17.562 "reset": true, 00:24:17.562 "nvme_admin": false, 00:24:17.562 "nvme_io": false, 00:24:17.562 "nvme_io_md": false, 00:24:17.562 "write_zeroes": true, 00:24:17.562 "zcopy": true, 00:24:17.562 "get_zone_info": false, 00:24:17.562 "zone_management": false, 
00:24:17.562 "zone_append": false, 00:24:17.562 "compare": false, 00:24:17.562 "compare_and_write": false, 00:24:17.562 "abort": true, 00:24:17.562 "seek_hole": false, 00:24:17.562 "seek_data": false, 00:24:17.562 "copy": true, 00:24:17.562 "nvme_iov_md": false 00:24:17.562 }, 00:24:17.562 "memory_domains": [ 00:24:17.562 { 00:24:17.562 "dma_device_id": "system", 00:24:17.562 "dma_device_type": 1 00:24:17.562 }, 00:24:17.562 { 00:24:17.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:17.562 "dma_device_type": 2 00:24:17.562 } 00:24:17.562 ], 00:24:17.562 "driver_specific": {} 00:24:17.562 } 00:24:17.562 ] 00:24:17.562 11:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:24:17.562 11:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:17.562 11:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:17.562 11:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:24:17.562 11:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:17.562 11:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:17.562 11:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:17.562 11:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:17.562 11:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:17.562 11:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:17.562 11:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:17.562 11:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:24:17.562 11:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:17.562 11:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:17.562 11:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:17.820 11:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:17.820 "name": "Existed_Raid", 00:24:17.820 "uuid": "046fafd7-7c66-4324-a192-9420485cbca0", 00:24:17.820 "strip_size_kb": 64, 00:24:17.820 "state": "online", 00:24:17.820 "raid_level": "concat", 00:24:17.821 "superblock": true, 00:24:17.821 "num_base_bdevs": 4, 00:24:17.821 "num_base_bdevs_discovered": 4, 00:24:17.821 "num_base_bdevs_operational": 4, 00:24:17.821 "base_bdevs_list": [ 00:24:17.821 { 00:24:17.821 "name": "BaseBdev1", 00:24:17.821 "uuid": "cbf937da-9e62-45de-a503-0d667c2b0f62", 00:24:17.821 "is_configured": true, 00:24:17.821 "data_offset": 2048, 00:24:17.821 "data_size": 63488 00:24:17.821 }, 00:24:17.821 { 00:24:17.821 "name": "BaseBdev2", 00:24:17.821 "uuid": "1cc85f79-3df2-4365-a0cf-cb1e69f41573", 00:24:17.821 "is_configured": true, 00:24:17.821 "data_offset": 2048, 00:24:17.821 "data_size": 63488 00:24:17.821 }, 00:24:17.821 { 00:24:17.821 "name": "BaseBdev3", 00:24:17.821 "uuid": "bee8d838-ce26-4f56-a70f-851d14c24c7a", 00:24:17.821 "is_configured": true, 00:24:17.821 "data_offset": 2048, 00:24:17.821 "data_size": 63488 00:24:17.821 }, 00:24:17.821 { 00:24:17.821 "name": "BaseBdev4", 00:24:17.821 "uuid": "17db856d-4b89-4e41-a6ce-7fdf58247d96", 00:24:17.821 "is_configured": true, 00:24:17.821 "data_offset": 2048, 00:24:17.821 "data_size": 63488 00:24:17.821 } 00:24:17.821 ] 00:24:17.821 }' 00:24:17.821 11:06:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:24:17.821 11:06:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:18.388 11:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:24:18.388 11:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:24:18.388 11:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:18.388 11:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:18.388 11:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:18.388 11:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:24:18.388 11:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:18.388 11:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:18.646 [2024-07-25 11:06:25.579804] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:18.646 11:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:18.646 "name": "Existed_Raid", 00:24:18.646 "aliases": [ 00:24:18.646 "046fafd7-7c66-4324-a192-9420485cbca0" 00:24:18.646 ], 00:24:18.646 "product_name": "Raid Volume", 00:24:18.646 "block_size": 512, 00:24:18.646 "num_blocks": 253952, 00:24:18.646 "uuid": "046fafd7-7c66-4324-a192-9420485cbca0", 00:24:18.646 "assigned_rate_limits": { 00:24:18.646 "rw_ios_per_sec": 0, 00:24:18.646 "rw_mbytes_per_sec": 0, 00:24:18.646 "r_mbytes_per_sec": 0, 00:24:18.646 "w_mbytes_per_sec": 0 00:24:18.646 }, 00:24:18.646 "claimed": false, 00:24:18.646 "zoned": false, 00:24:18.646 "supported_io_types": { 00:24:18.646 "read": true, 00:24:18.646 "write": true, 
00:24:18.646 "unmap": true, 00:24:18.646 "flush": true, 00:24:18.646 "reset": true, 00:24:18.646 "nvme_admin": false, 00:24:18.646 "nvme_io": false, 00:24:18.646 "nvme_io_md": false, 00:24:18.646 "write_zeroes": true, 00:24:18.646 "zcopy": false, 00:24:18.646 "get_zone_info": false, 00:24:18.646 "zone_management": false, 00:24:18.646 "zone_append": false, 00:24:18.646 "compare": false, 00:24:18.646 "compare_and_write": false, 00:24:18.646 "abort": false, 00:24:18.646 "seek_hole": false, 00:24:18.646 "seek_data": false, 00:24:18.646 "copy": false, 00:24:18.646 "nvme_iov_md": false 00:24:18.646 }, 00:24:18.646 "memory_domains": [ 00:24:18.646 { 00:24:18.646 "dma_device_id": "system", 00:24:18.646 "dma_device_type": 1 00:24:18.646 }, 00:24:18.646 { 00:24:18.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:18.646 "dma_device_type": 2 00:24:18.646 }, 00:24:18.646 { 00:24:18.646 "dma_device_id": "system", 00:24:18.646 "dma_device_type": 1 00:24:18.647 }, 00:24:18.647 { 00:24:18.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:18.647 "dma_device_type": 2 00:24:18.647 }, 00:24:18.647 { 00:24:18.647 "dma_device_id": "system", 00:24:18.647 "dma_device_type": 1 00:24:18.647 }, 00:24:18.647 { 00:24:18.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:18.647 "dma_device_type": 2 00:24:18.647 }, 00:24:18.647 { 00:24:18.647 "dma_device_id": "system", 00:24:18.647 "dma_device_type": 1 00:24:18.647 }, 00:24:18.647 { 00:24:18.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:18.647 "dma_device_type": 2 00:24:18.647 } 00:24:18.647 ], 00:24:18.647 "driver_specific": { 00:24:18.647 "raid": { 00:24:18.647 "uuid": "046fafd7-7c66-4324-a192-9420485cbca0", 00:24:18.647 "strip_size_kb": 64, 00:24:18.647 "state": "online", 00:24:18.647 "raid_level": "concat", 00:24:18.647 "superblock": true, 00:24:18.647 "num_base_bdevs": 4, 00:24:18.647 "num_base_bdevs_discovered": 4, 00:24:18.647 "num_base_bdevs_operational": 4, 00:24:18.647 "base_bdevs_list": [ 00:24:18.647 { 00:24:18.647 "name": 
"BaseBdev1", 00:24:18.647 "uuid": "cbf937da-9e62-45de-a503-0d667c2b0f62", 00:24:18.647 "is_configured": true, 00:24:18.647 "data_offset": 2048, 00:24:18.647 "data_size": 63488 00:24:18.647 }, 00:24:18.647 { 00:24:18.647 "name": "BaseBdev2", 00:24:18.647 "uuid": "1cc85f79-3df2-4365-a0cf-cb1e69f41573", 00:24:18.647 "is_configured": true, 00:24:18.647 "data_offset": 2048, 00:24:18.647 "data_size": 63488 00:24:18.647 }, 00:24:18.647 { 00:24:18.647 "name": "BaseBdev3", 00:24:18.647 "uuid": "bee8d838-ce26-4f56-a70f-851d14c24c7a", 00:24:18.647 "is_configured": true, 00:24:18.647 "data_offset": 2048, 00:24:18.647 "data_size": 63488 00:24:18.647 }, 00:24:18.647 { 00:24:18.647 "name": "BaseBdev4", 00:24:18.647 "uuid": "17db856d-4b89-4e41-a6ce-7fdf58247d96", 00:24:18.647 "is_configured": true, 00:24:18.647 "data_offset": 2048, 00:24:18.647 "data_size": 63488 00:24:18.647 } 00:24:18.647 ] 00:24:18.647 } 00:24:18.647 } 00:24:18.647 }' 00:24:18.647 11:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:18.647 11:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:24:18.647 BaseBdev2 00:24:18.647 BaseBdev3 00:24:18.647 BaseBdev4' 00:24:18.647 11:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:18.647 11:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:24:18.647 11:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:18.906 11:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:18.906 "name": "BaseBdev1", 00:24:18.906 "aliases": [ 00:24:18.906 "cbf937da-9e62-45de-a503-0d667c2b0f62" 00:24:18.906 ], 00:24:18.906 "product_name": "Malloc 
disk", 00:24:18.906 "block_size": 512, 00:24:18.906 "num_blocks": 65536, 00:24:18.906 "uuid": "cbf937da-9e62-45de-a503-0d667c2b0f62", 00:24:18.906 "assigned_rate_limits": { 00:24:18.906 "rw_ios_per_sec": 0, 00:24:18.906 "rw_mbytes_per_sec": 0, 00:24:18.906 "r_mbytes_per_sec": 0, 00:24:18.906 "w_mbytes_per_sec": 0 00:24:18.906 }, 00:24:18.906 "claimed": true, 00:24:18.906 "claim_type": "exclusive_write", 00:24:18.906 "zoned": false, 00:24:18.906 "supported_io_types": { 00:24:18.906 "read": true, 00:24:18.906 "write": true, 00:24:18.906 "unmap": true, 00:24:18.906 "flush": true, 00:24:18.906 "reset": true, 00:24:18.906 "nvme_admin": false, 00:24:18.906 "nvme_io": false, 00:24:18.906 "nvme_io_md": false, 00:24:18.906 "write_zeroes": true, 00:24:18.906 "zcopy": true, 00:24:18.906 "get_zone_info": false, 00:24:18.906 "zone_management": false, 00:24:18.906 "zone_append": false, 00:24:18.906 "compare": false, 00:24:18.906 "compare_and_write": false, 00:24:18.906 "abort": true, 00:24:18.906 "seek_hole": false, 00:24:18.906 "seek_data": false, 00:24:18.906 "copy": true, 00:24:18.906 "nvme_iov_md": false 00:24:18.906 }, 00:24:18.906 "memory_domains": [ 00:24:18.906 { 00:24:18.906 "dma_device_id": "system", 00:24:18.906 "dma_device_type": 1 00:24:18.906 }, 00:24:18.906 { 00:24:18.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:18.906 "dma_device_type": 2 00:24:18.906 } 00:24:18.906 ], 00:24:18.906 "driver_specific": {} 00:24:18.906 }' 00:24:18.906 11:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:18.906 11:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:18.906 11:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:18.906 11:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:18.906 11:06:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:19.165 11:06:26 
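
The verification pass above captures the raid bdev's JSON and runs the jq filter `.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name` (bdev_raid.sh@201) to collect the configured base bdev names before checking each one's `block_size`, `md_size`, `md_interleave`, and `dif_type`. A minimal Python sketch of that same selection, using a trimmed sample of the `bdev_get_bdevs` output shown in the log (field names are from the log; the third, unconfigured entry is added here only to show the filter dropping it):

```python
import json

# Trimmed sample mirroring the bdev_get_bdevs JSON captured in the log;
# only the fields the filter touches are kept.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        {"name": "BaseBdev1", "is_configured": true},
        {"name": "BaseBdev2", "is_configured": true},
        {"name": "BaseBdev3", "is_configured": false}
      ]
    }
  }
}
""")

# Python equivalent of the jq filter at bdev_raid.sh@201:
#   .driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name
base_bdev_names = [
    b["name"]
    for b in raid_bdev_info["driver_specific"]["raid"]["base_bdevs_list"]
    if b["is_configured"]
]
print(base_bdev_names)
```

In the log all four base bdevs are configured, so the test's loop visits BaseBdev1 through BaseBdev4 in turn.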
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:19.165 11:06:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:19.165 11:06:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:19.165 11:06:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:19.165 11:06:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:19.165 11:06:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:19.165 11:06:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:19.165 11:06:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:19.165 11:06:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:19.165 11:06:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:19.423 11:06:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:19.423 "name": "BaseBdev2", 00:24:19.423 "aliases": [ 00:24:19.423 "1cc85f79-3df2-4365-a0cf-cb1e69f41573" 00:24:19.423 ], 00:24:19.423 "product_name": "Malloc disk", 00:24:19.423 "block_size": 512, 00:24:19.423 "num_blocks": 65536, 00:24:19.423 "uuid": "1cc85f79-3df2-4365-a0cf-cb1e69f41573", 00:24:19.423 "assigned_rate_limits": { 00:24:19.423 "rw_ios_per_sec": 0, 00:24:19.424 "rw_mbytes_per_sec": 0, 00:24:19.424 "r_mbytes_per_sec": 0, 00:24:19.424 "w_mbytes_per_sec": 0 00:24:19.424 }, 00:24:19.424 "claimed": true, 00:24:19.424 "claim_type": "exclusive_write", 00:24:19.424 "zoned": false, 00:24:19.424 "supported_io_types": { 00:24:19.424 "read": true, 00:24:19.424 "write": true, 00:24:19.424 "unmap": true, 00:24:19.424 
"flush": true, 00:24:19.424 "reset": true, 00:24:19.424 "nvme_admin": false, 00:24:19.424 "nvme_io": false, 00:24:19.424 "nvme_io_md": false, 00:24:19.424 "write_zeroes": true, 00:24:19.424 "zcopy": true, 00:24:19.424 "get_zone_info": false, 00:24:19.424 "zone_management": false, 00:24:19.424 "zone_append": false, 00:24:19.424 "compare": false, 00:24:19.424 "compare_and_write": false, 00:24:19.424 "abort": true, 00:24:19.424 "seek_hole": false, 00:24:19.424 "seek_data": false, 00:24:19.424 "copy": true, 00:24:19.424 "nvme_iov_md": false 00:24:19.424 }, 00:24:19.424 "memory_domains": [ 00:24:19.424 { 00:24:19.424 "dma_device_id": "system", 00:24:19.424 "dma_device_type": 1 00:24:19.424 }, 00:24:19.424 { 00:24:19.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:19.424 "dma_device_type": 2 00:24:19.424 } 00:24:19.424 ], 00:24:19.424 "driver_specific": {} 00:24:19.424 }' 00:24:19.424 11:06:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:19.424 11:06:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:19.424 11:06:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:19.424 11:06:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:19.683 11:06:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:19.683 11:06:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:19.683 11:06:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:19.683 11:06:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:19.683 11:06:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:19.683 11:06:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:19.683 11:06:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:19.683 11:06:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:19.683 11:06:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:19.683 11:06:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:19.683 11:06:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:19.942 11:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:19.942 "name": "BaseBdev3", 00:24:19.942 "aliases": [ 00:24:19.942 "bee8d838-ce26-4f56-a70f-851d14c24c7a" 00:24:19.942 ], 00:24:19.942 "product_name": "Malloc disk", 00:24:19.942 "block_size": 512, 00:24:19.942 "num_blocks": 65536, 00:24:19.942 "uuid": "bee8d838-ce26-4f56-a70f-851d14c24c7a", 00:24:19.942 "assigned_rate_limits": { 00:24:19.942 "rw_ios_per_sec": 0, 00:24:19.942 "rw_mbytes_per_sec": 0, 00:24:19.942 "r_mbytes_per_sec": 0, 00:24:19.942 "w_mbytes_per_sec": 0 00:24:19.942 }, 00:24:19.942 "claimed": true, 00:24:19.942 "claim_type": "exclusive_write", 00:24:19.942 "zoned": false, 00:24:19.942 "supported_io_types": { 00:24:19.942 "read": true, 00:24:19.942 "write": true, 00:24:19.942 "unmap": true, 00:24:19.942 "flush": true, 00:24:19.942 "reset": true, 00:24:19.942 "nvme_admin": false, 00:24:19.942 "nvme_io": false, 00:24:19.942 "nvme_io_md": false, 00:24:19.942 "write_zeroes": true, 00:24:19.942 "zcopy": true, 00:24:19.942 "get_zone_info": false, 00:24:19.942 "zone_management": false, 00:24:19.942 "zone_append": false, 00:24:19.942 "compare": false, 00:24:19.942 "compare_and_write": false, 00:24:19.942 "abort": true, 00:24:19.942 "seek_hole": false, 00:24:19.942 "seek_data": false, 00:24:19.942 "copy": true, 00:24:19.942 "nvme_iov_md": 
false 00:24:19.942 }, 00:24:19.942 "memory_domains": [ 00:24:19.942 { 00:24:19.942 "dma_device_id": "system", 00:24:19.942 "dma_device_type": 1 00:24:19.942 }, 00:24:19.942 { 00:24:19.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:19.942 "dma_device_type": 2 00:24:19.942 } 00:24:19.942 ], 00:24:19.942 "driver_specific": {} 00:24:19.942 }' 00:24:19.942 11:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:19.942 11:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:20.202 11:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:20.202 11:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:20.202 11:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:20.202 11:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:20.202 11:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:20.202 11:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:20.202 11:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:20.202 11:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:20.202 11:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:20.461 11:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:20.461 11:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:20.461 11:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:20.461 11:06:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:20.461 11:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:20.461 "name": "BaseBdev4", 00:24:20.461 "aliases": [ 00:24:20.461 "17db856d-4b89-4e41-a6ce-7fdf58247d96" 00:24:20.461 ], 00:24:20.461 "product_name": "Malloc disk", 00:24:20.461 "block_size": 512, 00:24:20.461 "num_blocks": 65536, 00:24:20.461 "uuid": "17db856d-4b89-4e41-a6ce-7fdf58247d96", 00:24:20.461 "assigned_rate_limits": { 00:24:20.461 "rw_ios_per_sec": 0, 00:24:20.461 "rw_mbytes_per_sec": 0, 00:24:20.461 "r_mbytes_per_sec": 0, 00:24:20.461 "w_mbytes_per_sec": 0 00:24:20.461 }, 00:24:20.461 "claimed": true, 00:24:20.461 "claim_type": "exclusive_write", 00:24:20.461 "zoned": false, 00:24:20.461 "supported_io_types": { 00:24:20.461 "read": true, 00:24:20.461 "write": true, 00:24:20.461 "unmap": true, 00:24:20.461 "flush": true, 00:24:20.461 "reset": true, 00:24:20.461 "nvme_admin": false, 00:24:20.461 "nvme_io": false, 00:24:20.461 "nvme_io_md": false, 00:24:20.461 "write_zeroes": true, 00:24:20.461 "zcopy": true, 00:24:20.461 "get_zone_info": false, 00:24:20.461 "zone_management": false, 00:24:20.461 "zone_append": false, 00:24:20.461 "compare": false, 00:24:20.461 "compare_and_write": false, 00:24:20.461 "abort": true, 00:24:20.461 "seek_hole": false, 00:24:20.461 "seek_data": false, 00:24:20.461 "copy": true, 00:24:20.461 "nvme_iov_md": false 00:24:20.461 }, 00:24:20.461 "memory_domains": [ 00:24:20.461 { 00:24:20.461 "dma_device_id": "system", 00:24:20.461 "dma_device_type": 1 00:24:20.461 }, 00:24:20.461 { 00:24:20.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:20.461 "dma_device_type": 2 00:24:20.461 } 00:24:20.461 ], 00:24:20.461 "driver_specific": {} 00:24:20.461 }' 00:24:20.461 11:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:20.721 11:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:24:20.721 11:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:20.721 11:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:20.721 11:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:20.721 11:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:20.721 11:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:20.721 11:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:20.721 11:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:20.721 11:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:20.980 11:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:20.980 11:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:20.980 11:06:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:21.548 [2024-07-25 11:06:28.387202] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:21.548 [2024-07-25 11:06:28.387237] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:21.548 [2024-07-25 11:06:28.387296] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:21.549 11:06:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:24:21.549 11:06:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:24:21.549 11:06:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:21.549 11:06:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:24:21.549 11:06:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:24:21.549 11:06:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:24:21.549 11:06:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:21.549 11:06:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:24:21.549 11:06:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:21.549 11:06:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:21.549 11:06:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:21.549 11:06:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:21.549 11:06:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:21.549 11:06:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:21.549 11:06:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:21.549 11:06:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.549 11:06:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:21.807 11:06:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:21.807 "name": "Existed_Raid", 00:24:21.807 "uuid": "046fafd7-7c66-4324-a192-9420485cbca0", 00:24:21.808 "strip_size_kb": 64, 00:24:21.808 "state": "offline", 00:24:21.808 
"raid_level": "concat", 00:24:21.808 "superblock": true, 00:24:21.808 "num_base_bdevs": 4, 00:24:21.808 "num_base_bdevs_discovered": 3, 00:24:21.808 "num_base_bdevs_operational": 3, 00:24:21.808 "base_bdevs_list": [ 00:24:21.808 { 00:24:21.808 "name": null, 00:24:21.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:21.808 "is_configured": false, 00:24:21.808 "data_offset": 2048, 00:24:21.808 "data_size": 63488 00:24:21.808 }, 00:24:21.808 { 00:24:21.808 "name": "BaseBdev2", 00:24:21.808 "uuid": "1cc85f79-3df2-4365-a0cf-cb1e69f41573", 00:24:21.808 "is_configured": true, 00:24:21.808 "data_offset": 2048, 00:24:21.808 "data_size": 63488 00:24:21.808 }, 00:24:21.808 { 00:24:21.808 "name": "BaseBdev3", 00:24:21.808 "uuid": "bee8d838-ce26-4f56-a70f-851d14c24c7a", 00:24:21.808 "is_configured": true, 00:24:21.808 "data_offset": 2048, 00:24:21.808 "data_size": 63488 00:24:21.808 }, 00:24:21.808 { 00:24:21.808 "name": "BaseBdev4", 00:24:21.808 "uuid": "17db856d-4b89-4e41-a6ce-7fdf58247d96", 00:24:21.808 "is_configured": true, 00:24:21.808 "data_offset": 2048, 00:24:21.808 "data_size": 63488 00:24:21.808 } 00:24:21.808 ] 00:24:21.808 }' 00:24:21.808 11:06:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:21.808 11:06:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:22.375 11:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:24:22.375 11:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:22.375 11:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:22.375 11:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:22.375 11:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # 
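
The sequence above is the core state-machine check: after `bdev_malloc_delete BaseBdev1`, `has_redundancy concat` returns 1 (bdev_raid.sh@215), so the test sets `expected_state=offline` and then confirms the raid reports `"state": "offline"` with 3 of 4 base bdevs remaining. A sketch of that decision, assuming (from this log alone, not the script source) that the redundant levels are raid1-style mirrors and raid5f:

```python
# Levels assumed to survive losing one base bdev; concat and raid0 stripe
# without redundancy, so removal takes the array offline. The set below is
# an assumption inferred from this log, not copied from bdev_raid.sh.
REDUNDANT_LEVELS = {"raid1", "raid5f"}

def expected_state_after_removal(raid_level: str) -> str:
    """Mirror of the expected_state logic around bdev_raid.sh@276-277."""
    return "online" if raid_level in REDUNDANT_LEVELS else "offline"

print(expected_state_after_removal("concat"))
```

This matches the log: the concat array transitions online to offline, and the removed slot shows a null name with the all-zero UUID placeholder.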
raid_bdev=Existed_Raid 00:24:22.375 11:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:22.375 11:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:22.633 [2024-07-25 11:06:29.696810] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:22.891 11:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:22.891 11:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:22.891 11:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:22.891 11:06:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:23.150 11:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:23.150 11:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:23.150 11:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:24:23.409 [2024-07-25 11:06:30.291408] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:23.409 11:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:23.409 11:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:23.409 11:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.409 11:06:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:23.668 11:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:23.668 11:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:23.668 11:06:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:24:23.927 [2024-07-25 11:06:30.879165] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:23.927 [2024-07-25 11:06:30.879217] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:24:23.927 11:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:23.927 11:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:23.927 11:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.927 11:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:24:24.186 11:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:24:24.186 11:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:24:24.186 11:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:24:24.186 11:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:24:24.186 11:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:24.186 11:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:24.445 BaseBdev2 00:24:24.445 11:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:24:24.445 11:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:24:24.445 11:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:24.445 11:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:24:24.445 11:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:24.445 11:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:24.445 11:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:24.704 11:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:24.963 [ 00:24:24.963 { 00:24:24.963 "name": "BaseBdev2", 00:24:24.963 "aliases": [ 00:24:24.963 "3254393e-84da-4d91-8e17-83a80002c3cf" 00:24:24.963 ], 00:24:24.963 "product_name": "Malloc disk", 00:24:24.963 "block_size": 512, 00:24:24.963 "num_blocks": 65536, 00:24:24.963 "uuid": "3254393e-84da-4d91-8e17-83a80002c3cf", 00:24:24.963 "assigned_rate_limits": { 00:24:24.963 "rw_ios_per_sec": 0, 00:24:24.963 "rw_mbytes_per_sec": 0, 00:24:24.963 "r_mbytes_per_sec": 0, 00:24:24.963 "w_mbytes_per_sec": 0 00:24:24.963 }, 00:24:24.963 "claimed": false, 00:24:24.963 "zoned": false, 00:24:24.963 "supported_io_types": { 00:24:24.963 "read": true, 00:24:24.963 "write": true, 00:24:24.963 "unmap": true, 00:24:24.963 "flush": 
true, 00:24:24.963 "reset": true, 00:24:24.963 "nvme_admin": false, 00:24:24.963 "nvme_io": false, 00:24:24.963 "nvme_io_md": false, 00:24:24.963 "write_zeroes": true, 00:24:24.963 "zcopy": true, 00:24:24.963 "get_zone_info": false, 00:24:24.963 "zone_management": false, 00:24:24.963 "zone_append": false, 00:24:24.963 "compare": false, 00:24:24.963 "compare_and_write": false, 00:24:24.963 "abort": true, 00:24:24.963 "seek_hole": false, 00:24:24.963 "seek_data": false, 00:24:24.963 "copy": true, 00:24:24.963 "nvme_iov_md": false 00:24:24.963 }, 00:24:24.963 "memory_domains": [ 00:24:24.963 { 00:24:24.963 "dma_device_id": "system", 00:24:24.963 "dma_device_type": 1 00:24:24.963 }, 00:24:24.963 { 00:24:24.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:24.963 "dma_device_type": 2 00:24:24.963 } 00:24:24.963 ], 00:24:24.963 "driver_specific": {} 00:24:24.963 } 00:24:24.963 ] 00:24:24.963 11:06:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:24:24.963 11:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:24.963 11:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:24.963 11:06:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:25.223 BaseBdev3 00:24:25.223 11:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:24:25.223 11:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:24:25.223 11:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:25.223 11:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:24:25.223 11:06:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:25.223 11:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:25.223 11:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:25.482 11:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:25.742 [ 00:24:25.742 { 00:24:25.742 "name": "BaseBdev3", 00:24:25.742 "aliases": [ 00:24:25.742 "b24868b1-ec7b-4602-a2c0-25645631f26b" 00:24:25.742 ], 00:24:25.742 "product_name": "Malloc disk", 00:24:25.742 "block_size": 512, 00:24:25.742 "num_blocks": 65536, 00:24:25.742 "uuid": "b24868b1-ec7b-4602-a2c0-25645631f26b", 00:24:25.742 "assigned_rate_limits": { 00:24:25.742 "rw_ios_per_sec": 0, 00:24:25.742 "rw_mbytes_per_sec": 0, 00:24:25.742 "r_mbytes_per_sec": 0, 00:24:25.742 "w_mbytes_per_sec": 0 00:24:25.742 }, 00:24:25.742 "claimed": false, 00:24:25.742 "zoned": false, 00:24:25.742 "supported_io_types": { 00:24:25.742 "read": true, 00:24:25.742 "write": true, 00:24:25.742 "unmap": true, 00:24:25.742 "flush": true, 00:24:25.742 "reset": true, 00:24:25.742 "nvme_admin": false, 00:24:25.742 "nvme_io": false, 00:24:25.742 "nvme_io_md": false, 00:24:25.742 "write_zeroes": true, 00:24:25.742 "zcopy": true, 00:24:25.742 "get_zone_info": false, 00:24:25.742 "zone_management": false, 00:24:25.742 "zone_append": false, 00:24:25.742 "compare": false, 00:24:25.742 "compare_and_write": false, 00:24:25.742 "abort": true, 00:24:25.742 "seek_hole": false, 00:24:25.742 "seek_data": false, 00:24:25.742 "copy": true, 00:24:25.742 "nvme_iov_md": false 00:24:25.742 }, 00:24:25.742 "memory_domains": [ 00:24:25.742 { 00:24:25.742 "dma_device_id": "system", 00:24:25.742 "dma_device_type": 1 
00:24:25.742 }, 00:24:25.742 { 00:24:25.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:25.742 "dma_device_type": 2 00:24:25.742 } 00:24:25.742 ], 00:24:25.742 "driver_specific": {} 00:24:25.742 } 00:24:25.742 ] 00:24:25.742 11:06:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:24:25.742 11:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:25.742 11:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:25.742 11:06:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:26.001 BaseBdev4 00:24:26.001 11:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:24:26.001 11:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:24:26.001 11:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:26.001 11:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:24:26.001 11:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:26.001 11:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:26.001 11:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:26.260 11:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:26.518 [ 00:24:26.518 { 00:24:26.518 "name": "BaseBdev4", 00:24:26.518 "aliases": [ 
00:24:26.518 "0919d695-c2b8-4db7-8c15-89cfe1428c92" 00:24:26.518 ], 00:24:26.518 "product_name": "Malloc disk", 00:24:26.518 "block_size": 512, 00:24:26.518 "num_blocks": 65536, 00:24:26.518 "uuid": "0919d695-c2b8-4db7-8c15-89cfe1428c92", 00:24:26.518 "assigned_rate_limits": { 00:24:26.518 "rw_ios_per_sec": 0, 00:24:26.518 "rw_mbytes_per_sec": 0, 00:24:26.518 "r_mbytes_per_sec": 0, 00:24:26.518 "w_mbytes_per_sec": 0 00:24:26.518 }, 00:24:26.518 "claimed": false, 00:24:26.518 "zoned": false, 00:24:26.518 "supported_io_types": { 00:24:26.518 "read": true, 00:24:26.518 "write": true, 00:24:26.518 "unmap": true, 00:24:26.518 "flush": true, 00:24:26.518 "reset": true, 00:24:26.518 "nvme_admin": false, 00:24:26.518 "nvme_io": false, 00:24:26.518 "nvme_io_md": false, 00:24:26.518 "write_zeroes": true, 00:24:26.518 "zcopy": true, 00:24:26.518 "get_zone_info": false, 00:24:26.518 "zone_management": false, 00:24:26.518 "zone_append": false, 00:24:26.518 "compare": false, 00:24:26.518 "compare_and_write": false, 00:24:26.518 "abort": true, 00:24:26.518 "seek_hole": false, 00:24:26.518 "seek_data": false, 00:24:26.518 "copy": true, 00:24:26.518 "nvme_iov_md": false 00:24:26.518 }, 00:24:26.518 "memory_domains": [ 00:24:26.518 { 00:24:26.518 "dma_device_id": "system", 00:24:26.518 "dma_device_type": 1 00:24:26.518 }, 00:24:26.518 { 00:24:26.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:26.518 "dma_device_type": 2 00:24:26.518 } 00:24:26.518 ], 00:24:26.518 "driver_specific": {} 00:24:26.518 } 00:24:26.518 ] 00:24:26.518 11:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:24:26.518 11:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:26.518 11:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:26.519 11:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:26.778 [2024-07-25 11:06:33.657240] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:26.778 [2024-07-25 11:06:33.657286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:26.778 [2024-07-25 11:06:33.657319] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:26.778 [2024-07-25 11:06:33.659638] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:26.778 [2024-07-25 11:06:33.659699] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:26.778 11:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:26.778 11:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:26.778 11:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:26.778 11:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:26.778 11:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:26.778 11:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:26.778 11:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:26.778 11:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:26.778 11:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:26.778 11:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 
-- # local tmp 00:24:26.778 11:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:26.778 11:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:27.037 11:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:27.037 "name": "Existed_Raid", 00:24:27.037 "uuid": "3df2b8e0-dd04-4d2f-a9d2-f2e13d0b514e", 00:24:27.037 "strip_size_kb": 64, 00:24:27.037 "state": "configuring", 00:24:27.037 "raid_level": "concat", 00:24:27.037 "superblock": true, 00:24:27.037 "num_base_bdevs": 4, 00:24:27.037 "num_base_bdevs_discovered": 3, 00:24:27.037 "num_base_bdevs_operational": 4, 00:24:27.037 "base_bdevs_list": [ 00:24:27.037 { 00:24:27.037 "name": "BaseBdev1", 00:24:27.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:27.037 "is_configured": false, 00:24:27.037 "data_offset": 0, 00:24:27.037 "data_size": 0 00:24:27.037 }, 00:24:27.037 { 00:24:27.037 "name": "BaseBdev2", 00:24:27.037 "uuid": "3254393e-84da-4d91-8e17-83a80002c3cf", 00:24:27.037 "is_configured": true, 00:24:27.037 "data_offset": 2048, 00:24:27.037 "data_size": 63488 00:24:27.037 }, 00:24:27.037 { 00:24:27.037 "name": "BaseBdev3", 00:24:27.037 "uuid": "b24868b1-ec7b-4602-a2c0-25645631f26b", 00:24:27.037 "is_configured": true, 00:24:27.037 "data_offset": 2048, 00:24:27.037 "data_size": 63488 00:24:27.037 }, 00:24:27.037 { 00:24:27.037 "name": "BaseBdev4", 00:24:27.037 "uuid": "0919d695-c2b8-4db7-8c15-89cfe1428c92", 00:24:27.037 "is_configured": true, 00:24:27.037 "data_offset": 2048, 00:24:27.037 "data_size": 63488 00:24:27.037 } 00:24:27.037 ] 00:24:27.037 }' 00:24:27.037 11:06:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:27.037 11:06:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:24:27.605 11:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:24:27.605 [2024-07-25 11:06:34.683969] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:27.605 11:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:27.605 11:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:27.605 11:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:27.605 11:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:27.605 11:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:27.605 11:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:27.605 11:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:27.605 11:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:27.605 11:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:27.605 11:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:27.605 11:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:27.605 11:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:27.896 11:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:27.897 "name": 
"Existed_Raid", 00:24:27.897 "uuid": "3df2b8e0-dd04-4d2f-a9d2-f2e13d0b514e", 00:24:27.897 "strip_size_kb": 64, 00:24:27.897 "state": "configuring", 00:24:27.897 "raid_level": "concat", 00:24:27.897 "superblock": true, 00:24:27.897 "num_base_bdevs": 4, 00:24:27.897 "num_base_bdevs_discovered": 2, 00:24:27.897 "num_base_bdevs_operational": 4, 00:24:27.897 "base_bdevs_list": [ 00:24:27.897 { 00:24:27.897 "name": "BaseBdev1", 00:24:27.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:27.897 "is_configured": false, 00:24:27.897 "data_offset": 0, 00:24:27.897 "data_size": 0 00:24:27.897 }, 00:24:27.897 { 00:24:27.897 "name": null, 00:24:27.897 "uuid": "3254393e-84da-4d91-8e17-83a80002c3cf", 00:24:27.897 "is_configured": false, 00:24:27.897 "data_offset": 2048, 00:24:27.897 "data_size": 63488 00:24:27.897 }, 00:24:27.897 { 00:24:27.897 "name": "BaseBdev3", 00:24:27.897 "uuid": "b24868b1-ec7b-4602-a2c0-25645631f26b", 00:24:27.897 "is_configured": true, 00:24:27.897 "data_offset": 2048, 00:24:27.897 "data_size": 63488 00:24:27.897 }, 00:24:27.897 { 00:24:27.897 "name": "BaseBdev4", 00:24:27.897 "uuid": "0919d695-c2b8-4db7-8c15-89cfe1428c92", 00:24:27.897 "is_configured": true, 00:24:27.897 "data_offset": 2048, 00:24:27.897 "data_size": 63488 00:24:27.897 } 00:24:27.897 ] 00:24:27.897 }' 00:24:27.897 11:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:27.897 11:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:28.476 11:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:28.476 11:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:28.757 11:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:24:28.757 11:06:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:29.016 [2024-07-25 11:06:35.990623] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:29.016 BaseBdev1 00:24:29.016 11:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:24:29.016 11:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:24:29.016 11:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:29.016 11:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:24:29.016 11:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:29.016 11:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:29.016 11:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:29.275 11:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:29.534 [ 00:24:29.534 { 00:24:29.534 "name": "BaseBdev1", 00:24:29.534 "aliases": [ 00:24:29.534 "5853b39e-656e-404f-8fe1-3bfebb11b374" 00:24:29.534 ], 00:24:29.534 "product_name": "Malloc disk", 00:24:29.534 "block_size": 512, 00:24:29.534 "num_blocks": 65536, 00:24:29.534 "uuid": "5853b39e-656e-404f-8fe1-3bfebb11b374", 00:24:29.534 "assigned_rate_limits": { 00:24:29.534 "rw_ios_per_sec": 0, 00:24:29.534 "rw_mbytes_per_sec": 0, 00:24:29.534 "r_mbytes_per_sec": 0, 00:24:29.534 "w_mbytes_per_sec": 0 00:24:29.534 }, 
00:24:29.534 "claimed": true, 00:24:29.534 "claim_type": "exclusive_write", 00:24:29.534 "zoned": false, 00:24:29.534 "supported_io_types": { 00:24:29.534 "read": true, 00:24:29.534 "write": true, 00:24:29.534 "unmap": true, 00:24:29.534 "flush": true, 00:24:29.534 "reset": true, 00:24:29.534 "nvme_admin": false, 00:24:29.534 "nvme_io": false, 00:24:29.534 "nvme_io_md": false, 00:24:29.534 "write_zeroes": true, 00:24:29.534 "zcopy": true, 00:24:29.534 "get_zone_info": false, 00:24:29.534 "zone_management": false, 00:24:29.534 "zone_append": false, 00:24:29.534 "compare": false, 00:24:29.534 "compare_and_write": false, 00:24:29.534 "abort": true, 00:24:29.534 "seek_hole": false, 00:24:29.534 "seek_data": false, 00:24:29.534 "copy": true, 00:24:29.534 "nvme_iov_md": false 00:24:29.534 }, 00:24:29.534 "memory_domains": [ 00:24:29.534 { 00:24:29.534 "dma_device_id": "system", 00:24:29.534 "dma_device_type": 1 00:24:29.534 }, 00:24:29.534 { 00:24:29.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:29.534 "dma_device_type": 2 00:24:29.534 } 00:24:29.534 ], 00:24:29.534 "driver_specific": {} 00:24:29.534 } 00:24:29.534 ] 00:24:29.534 11:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:24:29.534 11:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:29.534 11:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:29.534 11:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:29.534 11:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:29.534 11:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:29.534 11:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 
00:24:29.534 11:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:29.534 11:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:29.534 11:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:29.534 11:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:29.534 11:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:29.534 11:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:29.793 11:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:29.793 "name": "Existed_Raid", 00:24:29.793 "uuid": "3df2b8e0-dd04-4d2f-a9d2-f2e13d0b514e", 00:24:29.793 "strip_size_kb": 64, 00:24:29.793 "state": "configuring", 00:24:29.793 "raid_level": "concat", 00:24:29.793 "superblock": true, 00:24:29.793 "num_base_bdevs": 4, 00:24:29.793 "num_base_bdevs_discovered": 3, 00:24:29.793 "num_base_bdevs_operational": 4, 00:24:29.793 "base_bdevs_list": [ 00:24:29.793 { 00:24:29.793 "name": "BaseBdev1", 00:24:29.793 "uuid": "5853b39e-656e-404f-8fe1-3bfebb11b374", 00:24:29.793 "is_configured": true, 00:24:29.793 "data_offset": 2048, 00:24:29.793 "data_size": 63488 00:24:29.793 }, 00:24:29.793 { 00:24:29.793 "name": null, 00:24:29.793 "uuid": "3254393e-84da-4d91-8e17-83a80002c3cf", 00:24:29.793 "is_configured": false, 00:24:29.793 "data_offset": 2048, 00:24:29.793 "data_size": 63488 00:24:29.793 }, 00:24:29.793 { 00:24:29.793 "name": "BaseBdev3", 00:24:29.793 "uuid": "b24868b1-ec7b-4602-a2c0-25645631f26b", 00:24:29.793 "is_configured": true, 00:24:29.793 "data_offset": 2048, 00:24:29.793 "data_size": 63488 00:24:29.793 }, 00:24:29.793 { 00:24:29.793 
"name": "BaseBdev4", 00:24:29.793 "uuid": "0919d695-c2b8-4db7-8c15-89cfe1428c92", 00:24:29.793 "is_configured": true, 00:24:29.793 "data_offset": 2048, 00:24:29.793 "data_size": 63488 00:24:29.793 } 00:24:29.793 ] 00:24:29.793 }' 00:24:29.793 11:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:29.793 11:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:30.362 11:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:30.362 11:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:30.362 11:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:24:30.362 11:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:24:30.621 [2024-07-25 11:06:37.671277] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:30.621 11:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:30.621 11:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:30.621 11:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:30.621 11:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:30.621 11:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:30.621 11:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:30.621 11:06:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:30.621 11:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:30.621 11:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:30.621 11:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:30.621 11:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:30.621 11:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:30.881 11:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:30.881 "name": "Existed_Raid", 00:24:30.881 "uuid": "3df2b8e0-dd04-4d2f-a9d2-f2e13d0b514e", 00:24:30.881 "strip_size_kb": 64, 00:24:30.881 "state": "configuring", 00:24:30.881 "raid_level": "concat", 00:24:30.881 "superblock": true, 00:24:30.881 "num_base_bdevs": 4, 00:24:30.881 "num_base_bdevs_discovered": 2, 00:24:30.881 "num_base_bdevs_operational": 4, 00:24:30.881 "base_bdevs_list": [ 00:24:30.881 { 00:24:30.881 "name": "BaseBdev1", 00:24:30.881 "uuid": "5853b39e-656e-404f-8fe1-3bfebb11b374", 00:24:30.881 "is_configured": true, 00:24:30.881 "data_offset": 2048, 00:24:30.881 "data_size": 63488 00:24:30.881 }, 00:24:30.881 { 00:24:30.881 "name": null, 00:24:30.881 "uuid": "3254393e-84da-4d91-8e17-83a80002c3cf", 00:24:30.881 "is_configured": false, 00:24:30.881 "data_offset": 2048, 00:24:30.881 "data_size": 63488 00:24:30.881 }, 00:24:30.881 { 00:24:30.881 "name": null, 00:24:30.881 "uuid": "b24868b1-ec7b-4602-a2c0-25645631f26b", 00:24:30.881 "is_configured": false, 00:24:30.881 "data_offset": 2048, 00:24:30.881 "data_size": 63488 00:24:30.881 }, 00:24:30.881 { 00:24:30.881 "name": "BaseBdev4", 
00:24:30.881 "uuid": "0919d695-c2b8-4db7-8c15-89cfe1428c92", 00:24:30.881 "is_configured": true, 00:24:30.881 "data_offset": 2048, 00:24:30.881 "data_size": 63488 00:24:30.881 } 00:24:30.881 ] 00:24:30.881 }' 00:24:30.881 11:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:30.881 11:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:31.449 11:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:31.449 11:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:31.709 11:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:24:31.709 11:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:24:31.969 [2024-07-25 11:06:38.942792] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:31.969 11:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:31.969 11:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:31.969 11:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:31.969 11:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:31.969 11:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:31.969 11:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:31.969 11:06:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:31.969 11:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:31.969 11:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:31.969 11:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:31.969 11:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:31.969 11:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.228 11:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:32.228 "name": "Existed_Raid", 00:24:32.228 "uuid": "3df2b8e0-dd04-4d2f-a9d2-f2e13d0b514e", 00:24:32.228 "strip_size_kb": 64, 00:24:32.228 "state": "configuring", 00:24:32.228 "raid_level": "concat", 00:24:32.228 "superblock": true, 00:24:32.228 "num_base_bdevs": 4, 00:24:32.228 "num_base_bdevs_discovered": 3, 00:24:32.228 "num_base_bdevs_operational": 4, 00:24:32.228 "base_bdevs_list": [ 00:24:32.228 { 00:24:32.228 "name": "BaseBdev1", 00:24:32.228 "uuid": "5853b39e-656e-404f-8fe1-3bfebb11b374", 00:24:32.228 "is_configured": true, 00:24:32.228 "data_offset": 2048, 00:24:32.228 "data_size": 63488 00:24:32.228 }, 00:24:32.228 { 00:24:32.228 "name": null, 00:24:32.228 "uuid": "3254393e-84da-4d91-8e17-83a80002c3cf", 00:24:32.228 "is_configured": false, 00:24:32.228 "data_offset": 2048, 00:24:32.228 "data_size": 63488 00:24:32.228 }, 00:24:32.228 { 00:24:32.228 "name": "BaseBdev3", 00:24:32.228 "uuid": "b24868b1-ec7b-4602-a2c0-25645631f26b", 00:24:32.228 "is_configured": true, 00:24:32.228 "data_offset": 2048, 00:24:32.228 "data_size": 63488 00:24:32.228 }, 00:24:32.228 { 00:24:32.228 "name": "BaseBdev4", 
00:24:32.228 "uuid": "0919d695-c2b8-4db7-8c15-89cfe1428c92", 00:24:32.228 "is_configured": true, 00:24:32.228 "data_offset": 2048, 00:24:32.228 "data_size": 63488 00:24:32.228 } 00:24:32.228 ] 00:24:32.228 }' 00:24:32.228 11:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:32.228 11:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:32.796 11:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.796 11:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:33.055 11:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:24:33.055 11:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:33.315 [2024-07-25 11:06:40.202254] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:33.315 11:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:33.315 11:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:33.315 11:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:33.315 11:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:33.315 11:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:33.315 11:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:33.315 11:06:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:33.315 11:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:33.315 11:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:33.315 11:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:33.315 11:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:33.315 11:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:33.575 11:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:33.575 "name": "Existed_Raid", 00:24:33.575 "uuid": "3df2b8e0-dd04-4d2f-a9d2-f2e13d0b514e", 00:24:33.575 "strip_size_kb": 64, 00:24:33.575 "state": "configuring", 00:24:33.575 "raid_level": "concat", 00:24:33.575 "superblock": true, 00:24:33.575 "num_base_bdevs": 4, 00:24:33.575 "num_base_bdevs_discovered": 2, 00:24:33.575 "num_base_bdevs_operational": 4, 00:24:33.575 "base_bdevs_list": [ 00:24:33.575 { 00:24:33.575 "name": null, 00:24:33.575 "uuid": "5853b39e-656e-404f-8fe1-3bfebb11b374", 00:24:33.575 "is_configured": false, 00:24:33.575 "data_offset": 2048, 00:24:33.575 "data_size": 63488 00:24:33.575 }, 00:24:33.575 { 00:24:33.575 "name": null, 00:24:33.575 "uuid": "3254393e-84da-4d91-8e17-83a80002c3cf", 00:24:33.575 "is_configured": false, 00:24:33.575 "data_offset": 2048, 00:24:33.575 "data_size": 63488 00:24:33.575 }, 00:24:33.575 { 00:24:33.575 "name": "BaseBdev3", 00:24:33.575 "uuid": "b24868b1-ec7b-4602-a2c0-25645631f26b", 00:24:33.575 "is_configured": true, 00:24:33.575 "data_offset": 2048, 00:24:33.575 "data_size": 63488 00:24:33.575 }, 00:24:33.575 { 00:24:33.575 "name": "BaseBdev4", 00:24:33.575 "uuid": 
"0919d695-c2b8-4db7-8c15-89cfe1428c92", 00:24:33.575 "is_configured": true, 00:24:33.575 "data_offset": 2048, 00:24:33.575 "data_size": 63488 00:24:33.575 } 00:24:33.575 ] 00:24:33.575 }' 00:24:33.575 11:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:33.575 11:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:34.144 11:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:34.144 11:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:34.404 11:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:24:34.404 11:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:24:34.973 [2024-07-25 11:06:41.871710] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:34.973 11:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:34.973 11:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:34.973 11:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:34.973 11:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:34.973 11:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:34.973 11:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:34.973 11:06:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:34.973 11:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:34.973 11:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:34.973 11:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:34.973 11:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:34.973 11:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:35.232 11:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:35.232 "name": "Existed_Raid", 00:24:35.232 "uuid": "3df2b8e0-dd04-4d2f-a9d2-f2e13d0b514e", 00:24:35.232 "strip_size_kb": 64, 00:24:35.232 "state": "configuring", 00:24:35.232 "raid_level": "concat", 00:24:35.232 "superblock": true, 00:24:35.232 "num_base_bdevs": 4, 00:24:35.232 "num_base_bdevs_discovered": 3, 00:24:35.232 "num_base_bdevs_operational": 4, 00:24:35.232 "base_bdevs_list": [ 00:24:35.232 { 00:24:35.232 "name": null, 00:24:35.232 "uuid": "5853b39e-656e-404f-8fe1-3bfebb11b374", 00:24:35.232 "is_configured": false, 00:24:35.232 "data_offset": 2048, 00:24:35.232 "data_size": 63488 00:24:35.232 }, 00:24:35.232 { 00:24:35.232 "name": "BaseBdev2", 00:24:35.232 "uuid": "3254393e-84da-4d91-8e17-83a80002c3cf", 00:24:35.232 "is_configured": true, 00:24:35.232 "data_offset": 2048, 00:24:35.232 "data_size": 63488 00:24:35.232 }, 00:24:35.232 { 00:24:35.232 "name": "BaseBdev3", 00:24:35.232 "uuid": "b24868b1-ec7b-4602-a2c0-25645631f26b", 00:24:35.232 "is_configured": true, 00:24:35.232 "data_offset": 2048, 00:24:35.232 "data_size": 63488 00:24:35.232 }, 00:24:35.232 { 00:24:35.232 "name": "BaseBdev4", 
00:24:35.232 "uuid": "0919d695-c2b8-4db7-8c15-89cfe1428c92", 00:24:35.232 "is_configured": true, 00:24:35.232 "data_offset": 2048, 00:24:35.232 "data_size": 63488 00:24:35.232 } 00:24:35.232 ] 00:24:35.232 }' 00:24:35.232 11:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:35.232 11:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:35.801 11:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:35.801 11:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:36.060 11:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:24:36.060 11:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:36.060 11:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:24:36.060 11:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 5853b39e-656e-404f-8fe1-3bfebb11b374 00:24:36.321 [2024-07-25 11:06:43.403333] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:24:36.321 [2024-07-25 11:06:43.403594] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:24:36.321 [2024-07-25 11:06:43.403614] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:36.321 [2024-07-25 11:06:43.403929] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010b20 00:24:36.321 [2024-07-25 
11:06:43.404146] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:24:36.321 [2024-07-25 11:06:43.404168] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:24:36.321 NewBaseBdev 00:24:36.321 [2024-07-25 11:06:43.404350] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:36.321 11:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:24:36.321 11:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:24:36.321 11:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:24:36.321 11:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:24:36.321 11:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:24:36.321 11:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:24:36.321 11:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:36.888 11:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:24:37.147 [ 00:24:37.147 { 00:24:37.148 "name": "NewBaseBdev", 00:24:37.148 "aliases": [ 00:24:37.148 "5853b39e-656e-404f-8fe1-3bfebb11b374" 00:24:37.148 ], 00:24:37.148 "product_name": "Malloc disk", 00:24:37.148 "block_size": 512, 00:24:37.148 "num_blocks": 65536, 00:24:37.148 "uuid": "5853b39e-656e-404f-8fe1-3bfebb11b374", 00:24:37.148 "assigned_rate_limits": { 00:24:37.148 "rw_ios_per_sec": 0, 00:24:37.148 "rw_mbytes_per_sec": 0, 00:24:37.148 
"r_mbytes_per_sec": 0, 00:24:37.148 "w_mbytes_per_sec": 0 00:24:37.148 }, 00:24:37.148 "claimed": true, 00:24:37.148 "claim_type": "exclusive_write", 00:24:37.148 "zoned": false, 00:24:37.148 "supported_io_types": { 00:24:37.148 "read": true, 00:24:37.148 "write": true, 00:24:37.148 "unmap": true, 00:24:37.148 "flush": true, 00:24:37.148 "reset": true, 00:24:37.148 "nvme_admin": false, 00:24:37.148 "nvme_io": false, 00:24:37.148 "nvme_io_md": false, 00:24:37.148 "write_zeroes": true, 00:24:37.148 "zcopy": true, 00:24:37.148 "get_zone_info": false, 00:24:37.148 "zone_management": false, 00:24:37.148 "zone_append": false, 00:24:37.148 "compare": false, 00:24:37.148 "compare_and_write": false, 00:24:37.148 "abort": true, 00:24:37.148 "seek_hole": false, 00:24:37.148 "seek_data": false, 00:24:37.148 "copy": true, 00:24:37.148 "nvme_iov_md": false 00:24:37.148 }, 00:24:37.148 "memory_domains": [ 00:24:37.148 { 00:24:37.148 "dma_device_id": "system", 00:24:37.148 "dma_device_type": 1 00:24:37.148 }, 00:24:37.148 { 00:24:37.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:37.148 "dma_device_type": 2 00:24:37.148 } 00:24:37.148 ], 00:24:37.148 "driver_specific": {} 00:24:37.148 } 00:24:37.148 ] 00:24:37.148 11:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:24:37.148 11:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:24:37.148 11:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:37.148 11:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:37.148 11:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:37.148 11:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:37.148 11:06:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:37.148 11:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:37.148 11:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:37.148 11:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:37.148 11:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:37.148 11:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:37.148 11:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:37.407 11:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:37.407 "name": "Existed_Raid", 00:24:37.407 "uuid": "3df2b8e0-dd04-4d2f-a9d2-f2e13d0b514e", 00:24:37.407 "strip_size_kb": 64, 00:24:37.407 "state": "online", 00:24:37.407 "raid_level": "concat", 00:24:37.407 "superblock": true, 00:24:37.407 "num_base_bdevs": 4, 00:24:37.407 "num_base_bdevs_discovered": 4, 00:24:37.407 "num_base_bdevs_operational": 4, 00:24:37.407 "base_bdevs_list": [ 00:24:37.407 { 00:24:37.407 "name": "NewBaseBdev", 00:24:37.407 "uuid": "5853b39e-656e-404f-8fe1-3bfebb11b374", 00:24:37.407 "is_configured": true, 00:24:37.407 "data_offset": 2048, 00:24:37.407 "data_size": 63488 00:24:37.407 }, 00:24:37.407 { 00:24:37.407 "name": "BaseBdev2", 00:24:37.407 "uuid": "3254393e-84da-4d91-8e17-83a80002c3cf", 00:24:37.407 "is_configured": true, 00:24:37.407 "data_offset": 2048, 00:24:37.407 "data_size": 63488 00:24:37.407 }, 00:24:37.407 { 00:24:37.407 "name": "BaseBdev3", 00:24:37.407 "uuid": "b24868b1-ec7b-4602-a2c0-25645631f26b", 00:24:37.407 "is_configured": true, 00:24:37.407 "data_offset": 2048, 00:24:37.407 
"data_size": 63488 00:24:37.407 }, 00:24:37.407 { 00:24:37.407 "name": "BaseBdev4", 00:24:37.408 "uuid": "0919d695-c2b8-4db7-8c15-89cfe1428c92", 00:24:37.408 "is_configured": true, 00:24:37.408 "data_offset": 2048, 00:24:37.408 "data_size": 63488 00:24:37.408 } 00:24:37.408 ] 00:24:37.408 }' 00:24:37.408 11:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:37.408 11:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:37.976 11:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:24:37.976 11:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:24:37.976 11:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:37.976 11:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:37.976 11:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:37.976 11:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:24:37.976 11:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:37.976 11:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:38.235 [2024-07-25 11:06:45.172706] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:38.235 11:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:38.235 "name": "Existed_Raid", 00:24:38.235 "aliases": [ 00:24:38.235 "3df2b8e0-dd04-4d2f-a9d2-f2e13d0b514e" 00:24:38.235 ], 00:24:38.235 "product_name": "Raid Volume", 00:24:38.235 "block_size": 512, 00:24:38.235 "num_blocks": 253952, 00:24:38.235 "uuid": 
"3df2b8e0-dd04-4d2f-a9d2-f2e13d0b514e", 00:24:38.235 "assigned_rate_limits": { 00:24:38.235 "rw_ios_per_sec": 0, 00:24:38.235 "rw_mbytes_per_sec": 0, 00:24:38.235 "r_mbytes_per_sec": 0, 00:24:38.235 "w_mbytes_per_sec": 0 00:24:38.235 }, 00:24:38.235 "claimed": false, 00:24:38.235 "zoned": false, 00:24:38.235 "supported_io_types": { 00:24:38.235 "read": true, 00:24:38.235 "write": true, 00:24:38.235 "unmap": true, 00:24:38.235 "flush": true, 00:24:38.235 "reset": true, 00:24:38.235 "nvme_admin": false, 00:24:38.235 "nvme_io": false, 00:24:38.235 "nvme_io_md": false, 00:24:38.235 "write_zeroes": true, 00:24:38.235 "zcopy": false, 00:24:38.235 "get_zone_info": false, 00:24:38.235 "zone_management": false, 00:24:38.236 "zone_append": false, 00:24:38.236 "compare": false, 00:24:38.236 "compare_and_write": false, 00:24:38.236 "abort": false, 00:24:38.236 "seek_hole": false, 00:24:38.236 "seek_data": false, 00:24:38.236 "copy": false, 00:24:38.236 "nvme_iov_md": false 00:24:38.236 }, 00:24:38.236 "memory_domains": [ 00:24:38.236 { 00:24:38.236 "dma_device_id": "system", 00:24:38.236 "dma_device_type": 1 00:24:38.236 }, 00:24:38.236 { 00:24:38.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:38.236 "dma_device_type": 2 00:24:38.236 }, 00:24:38.236 { 00:24:38.236 "dma_device_id": "system", 00:24:38.236 "dma_device_type": 1 00:24:38.236 }, 00:24:38.236 { 00:24:38.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:38.236 "dma_device_type": 2 00:24:38.236 }, 00:24:38.236 { 00:24:38.236 "dma_device_id": "system", 00:24:38.236 "dma_device_type": 1 00:24:38.236 }, 00:24:38.236 { 00:24:38.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:38.236 "dma_device_type": 2 00:24:38.236 }, 00:24:38.236 { 00:24:38.236 "dma_device_id": "system", 00:24:38.236 "dma_device_type": 1 00:24:38.236 }, 00:24:38.236 { 00:24:38.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:38.236 "dma_device_type": 2 00:24:38.236 } 00:24:38.236 ], 00:24:38.236 "driver_specific": { 00:24:38.236 "raid": { 
00:24:38.236 "uuid": "3df2b8e0-dd04-4d2f-a9d2-f2e13d0b514e", 00:24:38.236 "strip_size_kb": 64, 00:24:38.236 "state": "online", 00:24:38.236 "raid_level": "concat", 00:24:38.236 "superblock": true, 00:24:38.236 "num_base_bdevs": 4, 00:24:38.236 "num_base_bdevs_discovered": 4, 00:24:38.236 "num_base_bdevs_operational": 4, 00:24:38.236 "base_bdevs_list": [ 00:24:38.236 { 00:24:38.236 "name": "NewBaseBdev", 00:24:38.236 "uuid": "5853b39e-656e-404f-8fe1-3bfebb11b374", 00:24:38.236 "is_configured": true, 00:24:38.236 "data_offset": 2048, 00:24:38.236 "data_size": 63488 00:24:38.236 }, 00:24:38.236 { 00:24:38.236 "name": "BaseBdev2", 00:24:38.236 "uuid": "3254393e-84da-4d91-8e17-83a80002c3cf", 00:24:38.236 "is_configured": true, 00:24:38.236 "data_offset": 2048, 00:24:38.236 "data_size": 63488 00:24:38.236 }, 00:24:38.236 { 00:24:38.236 "name": "BaseBdev3", 00:24:38.236 "uuid": "b24868b1-ec7b-4602-a2c0-25645631f26b", 00:24:38.236 "is_configured": true, 00:24:38.236 "data_offset": 2048, 00:24:38.236 "data_size": 63488 00:24:38.236 }, 00:24:38.236 { 00:24:38.236 "name": "BaseBdev4", 00:24:38.236 "uuid": "0919d695-c2b8-4db7-8c15-89cfe1428c92", 00:24:38.236 "is_configured": true, 00:24:38.236 "data_offset": 2048, 00:24:38.236 "data_size": 63488 00:24:38.236 } 00:24:38.236 ] 00:24:38.236 } 00:24:38.236 } 00:24:38.236 }' 00:24:38.236 11:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:38.236 11:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:24:38.236 BaseBdev2 00:24:38.236 BaseBdev3 00:24:38.236 BaseBdev4' 00:24:38.236 11:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:38.236 11:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b NewBaseBdev 00:24:38.236 11:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:38.495 11:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:38.496 "name": "NewBaseBdev", 00:24:38.496 "aliases": [ 00:24:38.496 "5853b39e-656e-404f-8fe1-3bfebb11b374" 00:24:38.496 ], 00:24:38.496 "product_name": "Malloc disk", 00:24:38.496 "block_size": 512, 00:24:38.496 "num_blocks": 65536, 00:24:38.496 "uuid": "5853b39e-656e-404f-8fe1-3bfebb11b374", 00:24:38.496 "assigned_rate_limits": { 00:24:38.496 "rw_ios_per_sec": 0, 00:24:38.496 "rw_mbytes_per_sec": 0, 00:24:38.496 "r_mbytes_per_sec": 0, 00:24:38.496 "w_mbytes_per_sec": 0 00:24:38.496 }, 00:24:38.496 "claimed": true, 00:24:38.496 "claim_type": "exclusive_write", 00:24:38.496 "zoned": false, 00:24:38.496 "supported_io_types": { 00:24:38.496 "read": true, 00:24:38.496 "write": true, 00:24:38.496 "unmap": true, 00:24:38.496 "flush": true, 00:24:38.496 "reset": true, 00:24:38.496 "nvme_admin": false, 00:24:38.496 "nvme_io": false, 00:24:38.496 "nvme_io_md": false, 00:24:38.496 "write_zeroes": true, 00:24:38.496 "zcopy": true, 00:24:38.496 "get_zone_info": false, 00:24:38.496 "zone_management": false, 00:24:38.496 "zone_append": false, 00:24:38.496 "compare": false, 00:24:38.496 "compare_and_write": false, 00:24:38.496 "abort": true, 00:24:38.496 "seek_hole": false, 00:24:38.496 "seek_data": false, 00:24:38.496 "copy": true, 00:24:38.496 "nvme_iov_md": false 00:24:38.496 }, 00:24:38.496 "memory_domains": [ 00:24:38.496 { 00:24:38.496 "dma_device_id": "system", 00:24:38.496 "dma_device_type": 1 00:24:38.496 }, 00:24:38.496 { 00:24:38.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:38.496 "dma_device_type": 2 00:24:38.496 } 00:24:38.496 ], 00:24:38.496 "driver_specific": {} 00:24:38.496 }' 00:24:38.496 11:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:38.496 11:06:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:38.496 11:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:38.496 11:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:38.496 11:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:38.754 11:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:38.754 11:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:38.754 11:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:38.754 11:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:38.754 11:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:38.754 11:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:38.754 11:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:38.754 11:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:38.754 11:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:38.754 11:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:39.013 11:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:39.013 "name": "BaseBdev2", 00:24:39.013 "aliases": [ 00:24:39.013 "3254393e-84da-4d91-8e17-83a80002c3cf" 00:24:39.013 ], 00:24:39.013 "product_name": "Malloc disk", 00:24:39.013 "block_size": 512, 00:24:39.013 "num_blocks": 65536, 00:24:39.013 "uuid": "3254393e-84da-4d91-8e17-83a80002c3cf", 00:24:39.013 
"assigned_rate_limits": { 00:24:39.013 "rw_ios_per_sec": 0, 00:24:39.013 "rw_mbytes_per_sec": 0, 00:24:39.013 "r_mbytes_per_sec": 0, 00:24:39.013 "w_mbytes_per_sec": 0 00:24:39.013 }, 00:24:39.013 "claimed": true, 00:24:39.013 "claim_type": "exclusive_write", 00:24:39.013 "zoned": false, 00:24:39.013 "supported_io_types": { 00:24:39.013 "read": true, 00:24:39.013 "write": true, 00:24:39.013 "unmap": true, 00:24:39.013 "flush": true, 00:24:39.013 "reset": true, 00:24:39.013 "nvme_admin": false, 00:24:39.013 "nvme_io": false, 00:24:39.013 "nvme_io_md": false, 00:24:39.013 "write_zeroes": true, 00:24:39.013 "zcopy": true, 00:24:39.013 "get_zone_info": false, 00:24:39.013 "zone_management": false, 00:24:39.013 "zone_append": false, 00:24:39.013 "compare": false, 00:24:39.014 "compare_and_write": false, 00:24:39.014 "abort": true, 00:24:39.014 "seek_hole": false, 00:24:39.014 "seek_data": false, 00:24:39.014 "copy": true, 00:24:39.014 "nvme_iov_md": false 00:24:39.014 }, 00:24:39.014 "memory_domains": [ 00:24:39.014 { 00:24:39.014 "dma_device_id": "system", 00:24:39.014 "dma_device_type": 1 00:24:39.014 }, 00:24:39.014 { 00:24:39.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:39.014 "dma_device_type": 2 00:24:39.014 } 00:24:39.014 ], 00:24:39.014 "driver_specific": {} 00:24:39.014 }' 00:24:39.014 11:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:39.014 11:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:39.014 11:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:39.014 11:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:39.273 11:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:39.273 11:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:39.273 11:06:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:39.273 11:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:39.273 11:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:39.273 11:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:39.273 11:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:39.273 11:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:39.273 11:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:39.273 11:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:39.273 11:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:39.533 11:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:39.533 "name": "BaseBdev3", 00:24:39.533 "aliases": [ 00:24:39.533 "b24868b1-ec7b-4602-a2c0-25645631f26b" 00:24:39.533 ], 00:24:39.533 "product_name": "Malloc disk", 00:24:39.533 "block_size": 512, 00:24:39.533 "num_blocks": 65536, 00:24:39.533 "uuid": "b24868b1-ec7b-4602-a2c0-25645631f26b", 00:24:39.533 "assigned_rate_limits": { 00:24:39.533 "rw_ios_per_sec": 0, 00:24:39.533 "rw_mbytes_per_sec": 0, 00:24:39.533 "r_mbytes_per_sec": 0, 00:24:39.533 "w_mbytes_per_sec": 0 00:24:39.533 }, 00:24:39.533 "claimed": true, 00:24:39.533 "claim_type": "exclusive_write", 00:24:39.533 "zoned": false, 00:24:39.533 "supported_io_types": { 00:24:39.533 "read": true, 00:24:39.533 "write": true, 00:24:39.533 "unmap": true, 00:24:39.533 "flush": true, 00:24:39.533 "reset": true, 00:24:39.533 "nvme_admin": false, 00:24:39.533 "nvme_io": false, 00:24:39.533 "nvme_io_md": false, 00:24:39.533 
"write_zeroes": true, 00:24:39.533 "zcopy": true, 00:24:39.533 "get_zone_info": false, 00:24:39.533 "zone_management": false, 00:24:39.533 "zone_append": false, 00:24:39.533 "compare": false, 00:24:39.533 "compare_and_write": false, 00:24:39.533 "abort": true, 00:24:39.533 "seek_hole": false, 00:24:39.533 "seek_data": false, 00:24:39.533 "copy": true, 00:24:39.533 "nvme_iov_md": false 00:24:39.533 }, 00:24:39.533 "memory_domains": [ 00:24:39.533 { 00:24:39.533 "dma_device_id": "system", 00:24:39.533 "dma_device_type": 1 00:24:39.533 }, 00:24:39.533 { 00:24:39.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:39.533 "dma_device_type": 2 00:24:39.533 } 00:24:39.533 ], 00:24:39.533 "driver_specific": {} 00:24:39.533 }' 00:24:39.533 11:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:39.792 11:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:39.792 11:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:39.792 11:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:39.792 11:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:39.792 11:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:39.792 11:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:39.792 11:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:39.792 11:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:39.792 11:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:40.049 11:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:40.049 11:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
00:24:40.050 11:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:40.050 11:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:40.050 11:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:40.308 11:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:40.308 "name": "BaseBdev4", 00:24:40.308 "aliases": [ 00:24:40.308 "0919d695-c2b8-4db7-8c15-89cfe1428c92" 00:24:40.308 ], 00:24:40.308 "product_name": "Malloc disk", 00:24:40.308 "block_size": 512, 00:24:40.308 "num_blocks": 65536, 00:24:40.308 "uuid": "0919d695-c2b8-4db7-8c15-89cfe1428c92", 00:24:40.308 "assigned_rate_limits": { 00:24:40.308 "rw_ios_per_sec": 0, 00:24:40.308 "rw_mbytes_per_sec": 0, 00:24:40.308 "r_mbytes_per_sec": 0, 00:24:40.308 "w_mbytes_per_sec": 0 00:24:40.308 }, 00:24:40.308 "claimed": true, 00:24:40.308 "claim_type": "exclusive_write", 00:24:40.308 "zoned": false, 00:24:40.308 "supported_io_types": { 00:24:40.308 "read": true, 00:24:40.308 "write": true, 00:24:40.308 "unmap": true, 00:24:40.308 "flush": true, 00:24:40.308 "reset": true, 00:24:40.308 "nvme_admin": false, 00:24:40.308 "nvme_io": false, 00:24:40.308 "nvme_io_md": false, 00:24:40.308 "write_zeroes": true, 00:24:40.308 "zcopy": true, 00:24:40.308 "get_zone_info": false, 00:24:40.308 "zone_management": false, 00:24:40.308 "zone_append": false, 00:24:40.308 "compare": false, 00:24:40.308 "compare_and_write": false, 00:24:40.308 "abort": true, 00:24:40.308 "seek_hole": false, 00:24:40.308 "seek_data": false, 00:24:40.308 "copy": true, 00:24:40.308 "nvme_iov_md": false 00:24:40.308 }, 00:24:40.308 "memory_domains": [ 00:24:40.308 { 00:24:40.308 "dma_device_id": "system", 00:24:40.308 "dma_device_type": 1 00:24:40.308 }, 00:24:40.308 { 00:24:40.308 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:40.308 "dma_device_type": 2 00:24:40.308 } 00:24:40.308 ], 00:24:40.308 "driver_specific": {} 00:24:40.308 }' 00:24:40.308 11:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:40.308 11:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:40.308 11:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:40.308 11:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:40.308 11:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:40.308 11:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:40.308 11:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:40.308 11:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:40.567 11:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:40.567 11:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:40.567 11:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:40.567 11:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:40.567 11:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:40.877 [2024-07-25 11:06:47.711161] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:40.877 [2024-07-25 11:06:47.711196] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:40.877 [2024-07-25 11:06:47.711280] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:24:40.877 [2024-07-25 11:06:47.711366] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:40.877 [2024-07-25 11:06:47.711383] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:24:40.877 11:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 3660881 00:24:40.877 11:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 3660881 ']' 00:24:40.877 11:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 3660881 00:24:40.877 11:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:24:40.877 11:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:40.877 11:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3660881 00:24:40.877 11:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:40.877 11:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:40.877 11:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3660881' 00:24:40.877 killing process with pid 3660881 00:24:40.877 11:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 3660881 00:24:40.877 [2024-07-25 11:06:47.785777] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:40.877 11:06:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 3660881 00:24:41.157 [2024-07-25 11:06:48.263011] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:43.064 11:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 
00:24:43.064 00:24:43.064 real 0m34.306s 00:24:43.064 user 1m0.163s 00:24:43.064 sys 0m5.880s 00:24:43.064 11:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:43.064 11:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:43.064 ************************************ 00:24:43.064 END TEST raid_state_function_test_sb 00:24:43.064 ************************************ 00:24:43.064 11:06:49 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:24:43.064 11:06:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:24:43.064 11:06:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:43.064 11:06:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:43.064 ************************************ 00:24:43.064 START TEST raid_superblock_test 00:24:43.064 ************************************ 00:24:43.064 11:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:24:43.064 11:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=concat 00:24:43.064 11:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=4 00:24:43.064 11:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:24:43.064 11:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:24:43.064 11:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:24:43.064 11:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:24:43.064 11:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:24:43.064 11:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:24:43.064 11:06:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:24:43.064 11:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:24:43.064 11:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:24:43.064 11:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:24:43.064 11:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:24:43.064 11:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' concat '!=' raid1 ']' 00:24:43.064 11:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # strip_size=64 00:24:43.064 11:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # strip_size_create_arg='-z 64' 00:24:43.064 11:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=3667118 00:24:43.064 11:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 3667118 /var/tmp/spdk-raid.sock 00:24:43.064 11:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:24:43.064 11:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 3667118 ']' 00:24:43.064 11:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:43.064 11:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:43.064 11:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:43.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:24:43.064 11:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:43.064 11:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:43.064 [2024-07-25 11:06:50.140963] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:24:43.064 [2024-07-25 11:06:50.141085] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3667118 ] 00:24:43.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.324 EAL: Requested device 0000:3d:01.0 cannot be used 00:24:43.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.324 EAL: Requested device 0000:3d:01.1 cannot be used 00:24:43.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.324 EAL: Requested device 0000:3d:01.2 cannot be used 00:24:43.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.324 EAL: Requested device 0000:3d:01.3 cannot be used 00:24:43.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.324 EAL: Requested device 0000:3d:01.4 cannot be used 00:24:43.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.324 EAL: Requested device 0000:3d:01.5 cannot be used 00:24:43.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.324 EAL: Requested device 0000:3d:01.6 cannot be used 00:24:43.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.324 EAL: Requested device 0000:3d:01.7 cannot be used 00:24:43.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.324 EAL: Requested device 0000:3d:02.0 cannot be used 00:24:43.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.324 EAL: Requested 
device 0000:3d:02.1 cannot be used 00:24:43.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.324 EAL: Requested device 0000:3d:02.2 cannot be used 00:24:43.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.324 EAL: Requested device 0000:3d:02.3 cannot be used 00:24:43.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.324 EAL: Requested device 0000:3d:02.4 cannot be used 00:24:43.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.324 EAL: Requested device 0000:3d:02.5 cannot be used 00:24:43.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.324 EAL: Requested device 0000:3d:02.6 cannot be used 00:24:43.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.324 EAL: Requested device 0000:3d:02.7 cannot be used 00:24:43.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.324 EAL: Requested device 0000:3f:01.0 cannot be used 00:24:43.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.324 EAL: Requested device 0000:3f:01.1 cannot be used 00:24:43.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.324 EAL: Requested device 0000:3f:01.2 cannot be used 00:24:43.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.324 EAL: Requested device 0000:3f:01.3 cannot be used 00:24:43.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.324 EAL: Requested device 0000:3f:01.4 cannot be used 00:24:43.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.324 EAL: Requested device 0000:3f:01.5 cannot be used 00:24:43.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.324 EAL: Requested device 0000:3f:01.6 cannot be used 00:24:43.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.324 EAL: Requested device 0000:3f:01.7 
cannot be used 00:24:43.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.324 EAL: Requested device 0000:3f:02.0 cannot be used 00:24:43.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.324 EAL: Requested device 0000:3f:02.1 cannot be used 00:24:43.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.324 EAL: Requested device 0000:3f:02.2 cannot be used 00:24:43.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.324 EAL: Requested device 0000:3f:02.3 cannot be used 00:24:43.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.324 EAL: Requested device 0000:3f:02.4 cannot be used 00:24:43.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.324 EAL: Requested device 0000:3f:02.5 cannot be used 00:24:43.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.324 EAL: Requested device 0000:3f:02.6 cannot be used 00:24:43.324 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:24:43.324 EAL: Requested device 0000:3f:02.7 cannot be used 00:24:43.325 [2024-07-25 11:06:50.368712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.584 [2024-07-25 11:06:50.629234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:44.152 [2024-07-25 11:06:50.991973] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:44.152 [2024-07-25 11:06:50.992006] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:44.152 11:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:44.152 11:06:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:24:44.152 11:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:24:44.152 11:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 
00:24:44.152 11:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:24:44.152 11:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:24:44.152 11:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:44.152 11:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:44.152 11:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:24:44.152 11:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:44.152 11:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:24:44.412 malloc1 00:24:44.412 11:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:44.671 [2024-07-25 11:06:51.683628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:44.671 [2024-07-25 11:06:51.683692] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:44.671 [2024-07-25 11:06:51.683723] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:24:44.672 [2024-07-25 11:06:51.683740] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:44.672 [2024-07-25 11:06:51.686511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:44.672 [2024-07-25 11:06:51.686547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:44.672 pt1 00:24:44.672 11:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- 
# (( i++ )) 00:24:44.672 11:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:24:44.672 11:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:24:44.672 11:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:24:44.672 11:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:44.672 11:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:44.672 11:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:24:44.672 11:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:44.672 11:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:24:44.931 malloc2 00:24:44.931 11:06:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:45.190 [2024-07-25 11:06:52.180458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:45.190 [2024-07-25 11:06:52.180515] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:45.190 [2024-07-25 11:06:52.180543] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:24:45.190 [2024-07-25 11:06:52.180558] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:45.190 [2024-07-25 11:06:52.183327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:45.190 [2024-07-25 11:06:52.183367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: pt2 00:24:45.190 pt2 00:24:45.190 11:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:24:45.190 11:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:24:45.190 11:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:24:45.190 11:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:24:45.190 11:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:24:45.190 11:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:45.190 11:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:24:45.190 11:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:45.190 11:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:24:45.449 malloc3 00:24:45.449 11:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:45.708 [2024-07-25 11:06:52.676921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:45.708 [2024-07-25 11:06:52.676982] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:45.708 [2024-07-25 11:06:52.677013] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:24:45.708 [2024-07-25 11:06:52.677029] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:45.708 [2024-07-25 11:06:52.679758] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:24:45.708 [2024-07-25 11:06:52.679793] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:45.708 pt3 00:24:45.708 11:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:24:45.708 11:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:24:45.708 11:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc4 00:24:45.708 11:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt4 00:24:45.708 11:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:24:45.708 11:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:45.708 11:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:24:45.708 11:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:45.709 11:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:24:45.968 malloc4 00:24:45.968 11:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:46.227 [2024-07-25 11:06:53.172031] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:46.227 [2024-07-25 11:06:53.172098] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:46.227 [2024-07-25 11:06:53.172125] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041a80 00:24:46.227 [2024-07-25 11:06:53.172147] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:46.227 [2024-07-25 11:06:53.174912] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:46.227 [2024-07-25 11:06:53.174945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:46.227 pt4 00:24:46.227 11:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:24:46.227 11:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:24:46.227 11:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:24:46.485 [2024-07-25 11:06:53.396754] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:46.485 [2024-07-25 11:06:53.399108] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:46.485 [2024-07-25 11:06:53.399207] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:46.485 [2024-07-25 11:06:53.399265] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:46.485 [2024-07-25 11:06:53.399482] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:24:46.485 [2024-07-25 11:06:53.399498] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:46.485 [2024-07-25 11:06:53.399857] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:24:46.485 [2024-07-25 11:06:53.400103] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:24:46.485 [2024-07-25 11:06:53.400124] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:24:46.485 [2024-07-25 11:06:53.400348] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:24:46.485 11:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:24:46.485 11:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:46.485 11:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:46.485 11:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:46.485 11:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:46.485 11:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:46.485 11:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:46.485 11:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:46.485 11:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:46.485 11:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:46.485 11:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:46.485 11:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:46.744 11:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:46.744 "name": "raid_bdev1", 00:24:46.744 "uuid": "94eabfd9-a88d-4b80-b94c-e93d4601d7bb", 00:24:46.744 "strip_size_kb": 64, 00:24:46.744 "state": "online", 00:24:46.744 "raid_level": "concat", 00:24:46.744 "superblock": true, 00:24:46.744 "num_base_bdevs": 4, 00:24:46.744 "num_base_bdevs_discovered": 4, 00:24:46.744 "num_base_bdevs_operational": 4, 00:24:46.744 "base_bdevs_list": [ 00:24:46.744 { 00:24:46.744 "name": "pt1", 00:24:46.744 
"uuid": "00000000-0000-0000-0000-000000000001", 00:24:46.744 "is_configured": true, 00:24:46.744 "data_offset": 2048, 00:24:46.744 "data_size": 63488 00:24:46.744 }, 00:24:46.744 { 00:24:46.744 "name": "pt2", 00:24:46.744 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:46.744 "is_configured": true, 00:24:46.744 "data_offset": 2048, 00:24:46.744 "data_size": 63488 00:24:46.744 }, 00:24:46.744 { 00:24:46.744 "name": "pt3", 00:24:46.744 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:46.744 "is_configured": true, 00:24:46.744 "data_offset": 2048, 00:24:46.744 "data_size": 63488 00:24:46.744 }, 00:24:46.744 { 00:24:46.744 "name": "pt4", 00:24:46.744 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:46.744 "is_configured": true, 00:24:46.744 "data_offset": 2048, 00:24:46.744 "data_size": 63488 00:24:46.744 } 00:24:46.744 ] 00:24:46.744 }' 00:24:46.744 11:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:46.744 11:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:47.312 11:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:24:47.312 11:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:24:47.312 11:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:47.312 11:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:47.312 11:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:47.312 11:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:24:47.312 11:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:47.312 11:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 
00:24:47.312 [2024-07-25 11:06:54.427890] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:47.572 11:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:47.572 "name": "raid_bdev1", 00:24:47.572 "aliases": [ 00:24:47.572 "94eabfd9-a88d-4b80-b94c-e93d4601d7bb" 00:24:47.572 ], 00:24:47.572 "product_name": "Raid Volume", 00:24:47.572 "block_size": 512, 00:24:47.572 "num_blocks": 253952, 00:24:47.572 "uuid": "94eabfd9-a88d-4b80-b94c-e93d4601d7bb", 00:24:47.572 "assigned_rate_limits": { 00:24:47.572 "rw_ios_per_sec": 0, 00:24:47.572 "rw_mbytes_per_sec": 0, 00:24:47.572 "r_mbytes_per_sec": 0, 00:24:47.572 "w_mbytes_per_sec": 0 00:24:47.572 }, 00:24:47.572 "claimed": false, 00:24:47.572 "zoned": false, 00:24:47.572 "supported_io_types": { 00:24:47.572 "read": true, 00:24:47.572 "write": true, 00:24:47.572 "unmap": true, 00:24:47.572 "flush": true, 00:24:47.572 "reset": true, 00:24:47.572 "nvme_admin": false, 00:24:47.572 "nvme_io": false, 00:24:47.572 "nvme_io_md": false, 00:24:47.572 "write_zeroes": true, 00:24:47.572 "zcopy": false, 00:24:47.572 "get_zone_info": false, 00:24:47.572 "zone_management": false, 00:24:47.572 "zone_append": false, 00:24:47.572 "compare": false, 00:24:47.572 "compare_and_write": false, 00:24:47.572 "abort": false, 00:24:47.572 "seek_hole": false, 00:24:47.572 "seek_data": false, 00:24:47.572 "copy": false, 00:24:47.572 "nvme_iov_md": false 00:24:47.572 }, 00:24:47.572 "memory_domains": [ 00:24:47.572 { 00:24:47.572 "dma_device_id": "system", 00:24:47.572 "dma_device_type": 1 00:24:47.572 }, 00:24:47.572 { 00:24:47.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:47.572 "dma_device_type": 2 00:24:47.572 }, 00:24:47.572 { 00:24:47.572 "dma_device_id": "system", 00:24:47.572 "dma_device_type": 1 00:24:47.572 }, 00:24:47.572 { 00:24:47.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:47.572 "dma_device_type": 2 00:24:47.572 }, 00:24:47.572 { 00:24:47.572 
"dma_device_id": "system", 00:24:47.572 "dma_device_type": 1 00:24:47.572 }, 00:24:47.572 { 00:24:47.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:47.572 "dma_device_type": 2 00:24:47.572 }, 00:24:47.572 { 00:24:47.572 "dma_device_id": "system", 00:24:47.572 "dma_device_type": 1 00:24:47.572 }, 00:24:47.572 { 00:24:47.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:47.572 "dma_device_type": 2 00:24:47.572 } 00:24:47.572 ], 00:24:47.572 "driver_specific": { 00:24:47.572 "raid": { 00:24:47.572 "uuid": "94eabfd9-a88d-4b80-b94c-e93d4601d7bb", 00:24:47.572 "strip_size_kb": 64, 00:24:47.572 "state": "online", 00:24:47.572 "raid_level": "concat", 00:24:47.572 "superblock": true, 00:24:47.572 "num_base_bdevs": 4, 00:24:47.572 "num_base_bdevs_discovered": 4, 00:24:47.572 "num_base_bdevs_operational": 4, 00:24:47.572 "base_bdevs_list": [ 00:24:47.572 { 00:24:47.572 "name": "pt1", 00:24:47.572 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:47.572 "is_configured": true, 00:24:47.572 "data_offset": 2048, 00:24:47.572 "data_size": 63488 00:24:47.572 }, 00:24:47.572 { 00:24:47.572 "name": "pt2", 00:24:47.572 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:47.572 "is_configured": true, 00:24:47.572 "data_offset": 2048, 00:24:47.572 "data_size": 63488 00:24:47.572 }, 00:24:47.572 { 00:24:47.572 "name": "pt3", 00:24:47.572 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:47.572 "is_configured": true, 00:24:47.572 "data_offset": 2048, 00:24:47.572 "data_size": 63488 00:24:47.572 }, 00:24:47.572 { 00:24:47.572 "name": "pt4", 00:24:47.572 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:47.572 "is_configured": true, 00:24:47.572 "data_offset": 2048, 00:24:47.572 "data_size": 63488 00:24:47.572 } 00:24:47.572 ] 00:24:47.572 } 00:24:47.572 } 00:24:47.572 }' 00:24:47.572 11:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:47.572 11:06:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:24:47.572 pt2 00:24:47.572 pt3 00:24:47.572 pt4' 00:24:47.572 11:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:47.572 11:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:24:47.572 11:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:47.831 11:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:47.831 "name": "pt1", 00:24:47.831 "aliases": [ 00:24:47.831 "00000000-0000-0000-0000-000000000001" 00:24:47.831 ], 00:24:47.831 "product_name": "passthru", 00:24:47.831 "block_size": 512, 00:24:47.831 "num_blocks": 65536, 00:24:47.831 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:47.831 "assigned_rate_limits": { 00:24:47.831 "rw_ios_per_sec": 0, 00:24:47.831 "rw_mbytes_per_sec": 0, 00:24:47.831 "r_mbytes_per_sec": 0, 00:24:47.831 "w_mbytes_per_sec": 0 00:24:47.831 }, 00:24:47.831 "claimed": true, 00:24:47.831 "claim_type": "exclusive_write", 00:24:47.831 "zoned": false, 00:24:47.831 "supported_io_types": { 00:24:47.831 "read": true, 00:24:47.831 "write": true, 00:24:47.831 "unmap": true, 00:24:47.831 "flush": true, 00:24:47.831 "reset": true, 00:24:47.831 "nvme_admin": false, 00:24:47.831 "nvme_io": false, 00:24:47.831 "nvme_io_md": false, 00:24:47.831 "write_zeroes": true, 00:24:47.831 "zcopy": true, 00:24:47.831 "get_zone_info": false, 00:24:47.831 "zone_management": false, 00:24:47.831 "zone_append": false, 00:24:47.831 "compare": false, 00:24:47.831 "compare_and_write": false, 00:24:47.831 "abort": true, 00:24:47.831 "seek_hole": false, 00:24:47.831 "seek_data": false, 00:24:47.831 "copy": true, 00:24:47.831 "nvme_iov_md": false 00:24:47.831 }, 00:24:47.831 "memory_domains": [ 00:24:47.831 { 00:24:47.831 "dma_device_id": 
"system", 00:24:47.831 "dma_device_type": 1 00:24:47.831 }, 00:24:47.831 { 00:24:47.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:47.831 "dma_device_type": 2 00:24:47.831 } 00:24:47.831 ], 00:24:47.831 "driver_specific": { 00:24:47.831 "passthru": { 00:24:47.831 "name": "pt1", 00:24:47.831 "base_bdev_name": "malloc1" 00:24:47.831 } 00:24:47.831 } 00:24:47.831 }' 00:24:47.831 11:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:47.831 11:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:47.831 11:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:47.831 11:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:47.831 11:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:47.831 11:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:47.831 11:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:47.831 11:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:48.090 11:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:48.090 11:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:48.090 11:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:48.090 11:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:48.090 11:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:48.090 11:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:24:48.090 11:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:48.350 11:06:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:48.350 "name": "pt2", 00:24:48.350 "aliases": [ 00:24:48.350 "00000000-0000-0000-0000-000000000002" 00:24:48.350 ], 00:24:48.350 "product_name": "passthru", 00:24:48.350 "block_size": 512, 00:24:48.350 "num_blocks": 65536, 00:24:48.350 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:48.350 "assigned_rate_limits": { 00:24:48.350 "rw_ios_per_sec": 0, 00:24:48.350 "rw_mbytes_per_sec": 0, 00:24:48.350 "r_mbytes_per_sec": 0, 00:24:48.350 "w_mbytes_per_sec": 0 00:24:48.350 }, 00:24:48.350 "claimed": true, 00:24:48.350 "claim_type": "exclusive_write", 00:24:48.350 "zoned": false, 00:24:48.350 "supported_io_types": { 00:24:48.350 "read": true, 00:24:48.350 "write": true, 00:24:48.350 "unmap": true, 00:24:48.350 "flush": true, 00:24:48.350 "reset": true, 00:24:48.350 "nvme_admin": false, 00:24:48.350 "nvme_io": false, 00:24:48.350 "nvme_io_md": false, 00:24:48.350 "write_zeroes": true, 00:24:48.350 "zcopy": true, 00:24:48.350 "get_zone_info": false, 00:24:48.350 "zone_management": false, 00:24:48.350 "zone_append": false, 00:24:48.350 "compare": false, 00:24:48.350 "compare_and_write": false, 00:24:48.350 "abort": true, 00:24:48.350 "seek_hole": false, 00:24:48.350 "seek_data": false, 00:24:48.350 "copy": true, 00:24:48.350 "nvme_iov_md": false 00:24:48.350 }, 00:24:48.350 "memory_domains": [ 00:24:48.350 { 00:24:48.350 "dma_device_id": "system", 00:24:48.350 "dma_device_type": 1 00:24:48.350 }, 00:24:48.350 { 00:24:48.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:48.350 "dma_device_type": 2 00:24:48.350 } 00:24:48.350 ], 00:24:48.350 "driver_specific": { 00:24:48.350 "passthru": { 00:24:48.350 "name": "pt2", 00:24:48.350 "base_bdev_name": "malloc2" 00:24:48.350 } 00:24:48.350 } 00:24:48.350 }' 00:24:48.350 11:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:48.350 11:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:24:48.350 11:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:48.350 11:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:48.350 11:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:48.609 11:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:48.609 11:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:48.609 11:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:48.609 11:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:48.609 11:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:48.609 11:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:48.609 11:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:48.609 11:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:48.609 11:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:24:48.609 11:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:48.868 11:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:48.868 "name": "pt3", 00:24:48.868 "aliases": [ 00:24:48.868 "00000000-0000-0000-0000-000000000003" 00:24:48.868 ], 00:24:48.869 "product_name": "passthru", 00:24:48.869 "block_size": 512, 00:24:48.869 "num_blocks": 65536, 00:24:48.869 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:48.869 "assigned_rate_limits": { 00:24:48.869 "rw_ios_per_sec": 0, 00:24:48.869 "rw_mbytes_per_sec": 0, 00:24:48.869 "r_mbytes_per_sec": 0, 00:24:48.869 "w_mbytes_per_sec": 0 00:24:48.869 }, 
00:24:48.869 "claimed": true, 00:24:48.869 "claim_type": "exclusive_write", 00:24:48.869 "zoned": false, 00:24:48.869 "supported_io_types": { 00:24:48.869 "read": true, 00:24:48.869 "write": true, 00:24:48.869 "unmap": true, 00:24:48.869 "flush": true, 00:24:48.869 "reset": true, 00:24:48.869 "nvme_admin": false, 00:24:48.869 "nvme_io": false, 00:24:48.869 "nvme_io_md": false, 00:24:48.869 "write_zeroes": true, 00:24:48.869 "zcopy": true, 00:24:48.869 "get_zone_info": false, 00:24:48.869 "zone_management": false, 00:24:48.869 "zone_append": false, 00:24:48.869 "compare": false, 00:24:48.869 "compare_and_write": false, 00:24:48.869 "abort": true, 00:24:48.869 "seek_hole": false, 00:24:48.869 "seek_data": false, 00:24:48.869 "copy": true, 00:24:48.869 "nvme_iov_md": false 00:24:48.869 }, 00:24:48.869 "memory_domains": [ 00:24:48.869 { 00:24:48.869 "dma_device_id": "system", 00:24:48.869 "dma_device_type": 1 00:24:48.869 }, 00:24:48.869 { 00:24:48.869 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:48.869 "dma_device_type": 2 00:24:48.869 } 00:24:48.869 ], 00:24:48.869 "driver_specific": { 00:24:48.869 "passthru": { 00:24:48.869 "name": "pt3", 00:24:48.869 "base_bdev_name": "malloc3" 00:24:48.869 } 00:24:48.869 } 00:24:48.869 }' 00:24:48.869 11:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:48.869 11:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:48.869 11:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:48.869 11:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:49.128 11:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:49.128 11:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:49.128 11:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:49.128 11:06:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:49.128 11:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:49.128 11:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:49.128 11:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:49.128 11:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:49.128 11:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:49.128 11:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:24:49.128 11:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:49.387 11:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:49.387 "name": "pt4", 00:24:49.387 "aliases": [ 00:24:49.387 "00000000-0000-0000-0000-000000000004" 00:24:49.387 ], 00:24:49.387 "product_name": "passthru", 00:24:49.387 "block_size": 512, 00:24:49.387 "num_blocks": 65536, 00:24:49.387 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:49.387 "assigned_rate_limits": { 00:24:49.387 "rw_ios_per_sec": 0, 00:24:49.387 "rw_mbytes_per_sec": 0, 00:24:49.387 "r_mbytes_per_sec": 0, 00:24:49.387 "w_mbytes_per_sec": 0 00:24:49.387 }, 00:24:49.387 "claimed": true, 00:24:49.387 "claim_type": "exclusive_write", 00:24:49.387 "zoned": false, 00:24:49.387 "supported_io_types": { 00:24:49.387 "read": true, 00:24:49.387 "write": true, 00:24:49.387 "unmap": true, 00:24:49.387 "flush": true, 00:24:49.387 "reset": true, 00:24:49.387 "nvme_admin": false, 00:24:49.387 "nvme_io": false, 00:24:49.387 "nvme_io_md": false, 00:24:49.387 "write_zeroes": true, 00:24:49.387 "zcopy": true, 00:24:49.387 "get_zone_info": false, 00:24:49.387 "zone_management": false, 00:24:49.387 "zone_append": false, 00:24:49.387 
"compare": false, 00:24:49.387 "compare_and_write": false, 00:24:49.387 "abort": true, 00:24:49.387 "seek_hole": false, 00:24:49.387 "seek_data": false, 00:24:49.387 "copy": true, 00:24:49.387 "nvme_iov_md": false 00:24:49.387 }, 00:24:49.387 "memory_domains": [ 00:24:49.387 { 00:24:49.387 "dma_device_id": "system", 00:24:49.387 "dma_device_type": 1 00:24:49.387 }, 00:24:49.387 { 00:24:49.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:49.387 "dma_device_type": 2 00:24:49.387 } 00:24:49.387 ], 00:24:49.387 "driver_specific": { 00:24:49.387 "passthru": { 00:24:49.387 "name": "pt4", 00:24:49.387 "base_bdev_name": "malloc4" 00:24:49.387 } 00:24:49.387 } 00:24:49.387 }' 00:24:49.388 11:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:49.388 11:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:49.647 11:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:49.647 11:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:49.647 11:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:49.647 11:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:49.647 11:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:49.647 11:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:49.647 11:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:49.647 11:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:49.647 11:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:49.647 11:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:49.647 11:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:49.647 11:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:24:49.906 [2024-07-25 11:06:56.950818] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:49.906 11:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=94eabfd9-a88d-4b80-b94c-e93d4601d7bb 00:24:49.906 11:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 94eabfd9-a88d-4b80-b94c-e93d4601d7bb ']' 00:24:49.906 11:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:50.165 [2024-07-25 11:06:57.179041] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:50.165 [2024-07-25 11:06:57.179073] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:50.165 [2024-07-25 11:06:57.179172] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:50.165 [2024-07-25 11:06:57.179256] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:50.165 [2024-07-25 11:06:57.179276] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:24:50.165 11:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:50.165 11:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:24:50.424 11:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:24:50.424 11:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:24:50.424 11:06:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:24:50.424 11:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:24:50.683 11:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:24:50.683 11:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:50.943 11:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:24:50.943 11:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:51.203 11:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:24:51.203 11:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:24:51.462 11:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:24:51.462 11:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:51.462 11:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:24:51.462 11:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:51.462 11:06:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@650 -- # local es=0 00:24:51.462 11:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:51.462 11:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:51.462 11:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:51.462 11:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:51.462 11:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:51.462 11:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:51.462 11:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:51.462 11:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:24:51.462 11:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:24:51.462 11:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:51.721 [2024-07-25 11:06:58.779369] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:51.721 [2024-07-25 11:06:58.781696] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is 
claimed 00:24:51.721 [2024-07-25 11:06:58.781752] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:24:51.721 [2024-07-25 11:06:58.781797] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:24:51.721 [2024-07-25 11:06:58.781853] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:24:51.721 [2024-07-25 11:06:58.781908] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:24:51.721 [2024-07-25 11:06:58.781936] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:24:51.721 [2024-07-25 11:06:58.781967] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:24:51.721 [2024-07-25 11:06:58.781989] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:51.721 [2024-07-25 11:06:58.782007] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:24:51.721 request: 00:24:51.721 { 00:24:51.721 "name": "raid_bdev1", 00:24:51.721 "raid_level": "concat", 00:24:51.721 "base_bdevs": [ 00:24:51.721 "malloc1", 00:24:51.721 "malloc2", 00:24:51.721 "malloc3", 00:24:51.721 "malloc4" 00:24:51.721 ], 00:24:51.721 "strip_size_kb": 64, 00:24:51.721 "superblock": false, 00:24:51.721 "method": "bdev_raid_create", 00:24:51.721 "req_id": 1 00:24:51.721 } 00:24:51.721 Got JSON-RPC error response 00:24:51.721 response: 00:24:51.721 { 00:24:51.721 "code": -17, 00:24:51.721 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:24:51.721 } 00:24:51.721 11:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:24:51.721 11:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 
00:24:51.721 11:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:51.721 11:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:51.721 11:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:51.721 11:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:24:51.980 11:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:24:51.980 11:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:24:51.980 11:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:52.239 [2024-07-25 11:06:59.232517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:52.239 [2024-07-25 11:06:59.232586] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:52.239 [2024-07-25 11:06:59.232609] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042680 00:24:52.239 [2024-07-25 11:06:59.232627] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:52.239 [2024-07-25 11:06:59.235383] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:52.239 [2024-07-25 11:06:59.235419] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:52.239 [2024-07-25 11:06:59.235517] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:52.239 [2024-07-25 11:06:59.235595] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:52.239 pt1 00:24:52.239 11:06:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:24:52.239 11:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:52.239 11:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:52.239 11:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:52.239 11:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:52.239 11:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:52.239 11:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:52.239 11:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:52.239 11:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:52.239 11:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:52.239 11:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:52.239 11:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:52.498 11:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:52.498 "name": "raid_bdev1", 00:24:52.498 "uuid": "94eabfd9-a88d-4b80-b94c-e93d4601d7bb", 00:24:52.498 "strip_size_kb": 64, 00:24:52.498 "state": "configuring", 00:24:52.498 "raid_level": "concat", 00:24:52.498 "superblock": true, 00:24:52.498 "num_base_bdevs": 4, 00:24:52.498 "num_base_bdevs_discovered": 1, 00:24:52.498 "num_base_bdevs_operational": 4, 00:24:52.498 "base_bdevs_list": [ 00:24:52.498 { 00:24:52.498 "name": "pt1", 00:24:52.498 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:52.498 
"is_configured": true, 00:24:52.498 "data_offset": 2048, 00:24:52.498 "data_size": 63488 00:24:52.498 }, 00:24:52.498 { 00:24:52.498 "name": null, 00:24:52.498 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:52.498 "is_configured": false, 00:24:52.498 "data_offset": 2048, 00:24:52.498 "data_size": 63488 00:24:52.498 }, 00:24:52.498 { 00:24:52.498 "name": null, 00:24:52.498 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:52.498 "is_configured": false, 00:24:52.498 "data_offset": 2048, 00:24:52.498 "data_size": 63488 00:24:52.498 }, 00:24:52.498 { 00:24:52.498 "name": null, 00:24:52.498 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:52.498 "is_configured": false, 00:24:52.498 "data_offset": 2048, 00:24:52.498 "data_size": 63488 00:24:52.498 } 00:24:52.498 ] 00:24:52.498 }' 00:24:52.498 11:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:52.498 11:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:53.067 11:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 4 -gt 2 ']' 00:24:53.067 11:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:53.326 [2024-07-25 11:07:00.251274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:53.326 [2024-07-25 11:07:00.251343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:53.326 [2024-07-25 11:07:00.251367] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80 00:24:53.326 [2024-07-25 11:07:00.251386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:53.326 [2024-07-25 11:07:00.251946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:53.326 [2024-07-25 11:07:00.251974] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:53.326 [2024-07-25 11:07:00.252062] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:53.326 [2024-07-25 11:07:00.252095] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:53.326 pt2 00:24:53.326 11:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:53.585 [2024-07-25 11:07:00.479882] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:24:53.585 11:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:24:53.585 11:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:53.585 11:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:53.585 11:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:53.585 11:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:53.585 11:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:53.585 11:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:53.585 11:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:53.585 11:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:53.585 11:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:53.585 11:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:53.585 11:07:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:53.585 11:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:53.585 "name": "raid_bdev1", 00:24:53.585 "uuid": "94eabfd9-a88d-4b80-b94c-e93d4601d7bb", 00:24:53.585 "strip_size_kb": 64, 00:24:53.585 "state": "configuring", 00:24:53.585 "raid_level": "concat", 00:24:53.585 "superblock": true, 00:24:53.585 "num_base_bdevs": 4, 00:24:53.585 "num_base_bdevs_discovered": 1, 00:24:53.585 "num_base_bdevs_operational": 4, 00:24:53.585 "base_bdevs_list": [ 00:24:53.585 { 00:24:53.585 "name": "pt1", 00:24:53.585 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:53.585 "is_configured": true, 00:24:53.585 "data_offset": 2048, 00:24:53.585 "data_size": 63488 00:24:53.585 }, 00:24:53.585 { 00:24:53.585 "name": null, 00:24:53.585 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:53.585 "is_configured": false, 00:24:53.585 "data_offset": 2048, 00:24:53.585 "data_size": 63488 00:24:53.585 }, 00:24:53.585 { 00:24:53.585 "name": null, 00:24:53.585 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:53.585 "is_configured": false, 00:24:53.585 "data_offset": 2048, 00:24:53.585 "data_size": 63488 00:24:53.585 }, 00:24:53.585 { 00:24:53.585 "name": null, 00:24:53.585 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:53.585 "is_configured": false, 00:24:53.585 "data_offset": 2048, 00:24:53.585 "data_size": 63488 00:24:53.585 } 00:24:53.585 ] 00:24:53.585 }' 00:24:53.585 11:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:53.585 11:07:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:54.153 11:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:24:54.153 11:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:24:54.153 11:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:54.781 [2024-07-25 11:07:01.739279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:54.781 [2024-07-25 11:07:01.739343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:54.781 [2024-07-25 11:07:01.739370] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042f80 00:24:54.781 [2024-07-25 11:07:01.739387] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:54.781 [2024-07-25 11:07:01.739950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:54.781 [2024-07-25 11:07:01.739975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:54.781 [2024-07-25 11:07:01.740072] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:54.781 [2024-07-25 11:07:01.740098] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:54.781 pt2 00:24:54.781 11:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:24:54.781 11:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:24:54.781 11:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:55.040 [2024-07-25 11:07:01.979951] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:55.040 [2024-07-25 11:07:01.980008] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:55.040 [2024-07-25 11:07:01.980040] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043280 00:24:55.040 
[2024-07-25 11:07:01.980055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:55.040 [2024-07-25 11:07:01.980679] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:55.040 [2024-07-25 11:07:01.980706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:55.040 [2024-07-25 11:07:01.980793] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:24:55.040 [2024-07-25 11:07:01.980819] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:55.040 pt3 00:24:55.040 11:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:24:55.040 11:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:24:55.040 11:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:55.607 [2024-07-25 11:07:02.477442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:55.607 [2024-07-25 11:07:02.477500] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:55.607 [2024-07-25 11:07:02.477527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043580 00:24:55.607 [2024-07-25 11:07:02.477543] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:55.607 [2024-07-25 11:07:02.478078] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:55.607 [2024-07-25 11:07:02.478103] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:55.607 [2024-07-25 11:07:02.478207] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:24:55.607 [2024-07-25 11:07:02.478235] bdev_raid.c:3312:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt4 is claimed 00:24:55.607 [2024-07-25 11:07:02.478436] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:24:55.607 [2024-07-25 11:07:02.478450] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:55.607 [2024-07-25 11:07:02.478763] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:24:55.607 [2024-07-25 11:07:02.478998] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:24:55.607 [2024-07-25 11:07:02.479017] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:24:55.607 [2024-07-25 11:07:02.479196] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:55.607 pt4 00:24:55.607 11:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:24:55.607 11:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:24:55.607 11:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:24:55.607 11:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:55.607 11:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:55.607 11:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:55.607 11:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:55.607 11:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:55.607 11:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:55.607 11:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:55.607 11:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 
-- # local num_base_bdevs_discovered 00:24:55.607 11:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:55.607 11:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:55.607 11:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:55.866 11:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:55.866 "name": "raid_bdev1", 00:24:55.866 "uuid": "94eabfd9-a88d-4b80-b94c-e93d4601d7bb", 00:24:55.866 "strip_size_kb": 64, 00:24:55.867 "state": "online", 00:24:55.867 "raid_level": "concat", 00:24:55.867 "superblock": true, 00:24:55.867 "num_base_bdevs": 4, 00:24:55.867 "num_base_bdevs_discovered": 4, 00:24:55.867 "num_base_bdevs_operational": 4, 00:24:55.867 "base_bdevs_list": [ 00:24:55.867 { 00:24:55.867 "name": "pt1", 00:24:55.867 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:55.867 "is_configured": true, 00:24:55.867 "data_offset": 2048, 00:24:55.867 "data_size": 63488 00:24:55.867 }, 00:24:55.867 { 00:24:55.867 "name": "pt2", 00:24:55.867 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:55.867 "is_configured": true, 00:24:55.867 "data_offset": 2048, 00:24:55.867 "data_size": 63488 00:24:55.867 }, 00:24:55.867 { 00:24:55.867 "name": "pt3", 00:24:55.867 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:55.867 "is_configured": true, 00:24:55.867 "data_offset": 2048, 00:24:55.867 "data_size": 63488 00:24:55.867 }, 00:24:55.867 { 00:24:55.867 "name": "pt4", 00:24:55.867 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:55.867 "is_configured": true, 00:24:55.867 "data_offset": 2048, 00:24:55.867 "data_size": 63488 00:24:55.867 } 00:24:55.867 ] 00:24:55.867 }' 00:24:55.867 11:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:55.867 11:07:02 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:56.434 11:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:24:56.434 11:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:24:56.434 11:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:56.434 11:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:56.434 11:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:56.434 11:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:24:56.434 11:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:56.434 11:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:56.434 [2024-07-25 11:07:03.532676] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:56.434 11:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:56.434 "name": "raid_bdev1", 00:24:56.434 "aliases": [ 00:24:56.434 "94eabfd9-a88d-4b80-b94c-e93d4601d7bb" 00:24:56.434 ], 00:24:56.434 "product_name": "Raid Volume", 00:24:56.434 "block_size": 512, 00:24:56.434 "num_blocks": 253952, 00:24:56.434 "uuid": "94eabfd9-a88d-4b80-b94c-e93d4601d7bb", 00:24:56.434 "assigned_rate_limits": { 00:24:56.434 "rw_ios_per_sec": 0, 00:24:56.434 "rw_mbytes_per_sec": 0, 00:24:56.434 "r_mbytes_per_sec": 0, 00:24:56.434 "w_mbytes_per_sec": 0 00:24:56.434 }, 00:24:56.434 "claimed": false, 00:24:56.434 "zoned": false, 00:24:56.434 "supported_io_types": { 00:24:56.434 "read": true, 00:24:56.434 "write": true, 00:24:56.434 "unmap": true, 00:24:56.434 "flush": true, 00:24:56.434 "reset": true, 00:24:56.434 "nvme_admin": 
false, 00:24:56.434 "nvme_io": false, 00:24:56.434 "nvme_io_md": false, 00:24:56.434 "write_zeroes": true, 00:24:56.434 "zcopy": false, 00:24:56.434 "get_zone_info": false, 00:24:56.434 "zone_management": false, 00:24:56.434 "zone_append": false, 00:24:56.434 "compare": false, 00:24:56.434 "compare_and_write": false, 00:24:56.434 "abort": false, 00:24:56.434 "seek_hole": false, 00:24:56.434 "seek_data": false, 00:24:56.434 "copy": false, 00:24:56.434 "nvme_iov_md": false 00:24:56.434 }, 00:24:56.434 "memory_domains": [ 00:24:56.434 { 00:24:56.434 "dma_device_id": "system", 00:24:56.434 "dma_device_type": 1 00:24:56.434 }, 00:24:56.434 { 00:24:56.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:56.434 "dma_device_type": 2 00:24:56.434 }, 00:24:56.434 { 00:24:56.434 "dma_device_id": "system", 00:24:56.434 "dma_device_type": 1 00:24:56.434 }, 00:24:56.434 { 00:24:56.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:56.434 "dma_device_type": 2 00:24:56.434 }, 00:24:56.434 { 00:24:56.434 "dma_device_id": "system", 00:24:56.434 "dma_device_type": 1 00:24:56.434 }, 00:24:56.434 { 00:24:56.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:56.434 "dma_device_type": 2 00:24:56.434 }, 00:24:56.434 { 00:24:56.434 "dma_device_id": "system", 00:24:56.434 "dma_device_type": 1 00:24:56.434 }, 00:24:56.434 { 00:24:56.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:56.434 "dma_device_type": 2 00:24:56.434 } 00:24:56.434 ], 00:24:56.434 "driver_specific": { 00:24:56.434 "raid": { 00:24:56.434 "uuid": "94eabfd9-a88d-4b80-b94c-e93d4601d7bb", 00:24:56.434 "strip_size_kb": 64, 00:24:56.434 "state": "online", 00:24:56.434 "raid_level": "concat", 00:24:56.434 "superblock": true, 00:24:56.434 "num_base_bdevs": 4, 00:24:56.434 "num_base_bdevs_discovered": 4, 00:24:56.434 "num_base_bdevs_operational": 4, 00:24:56.434 "base_bdevs_list": [ 00:24:56.434 { 00:24:56.434 "name": "pt1", 00:24:56.434 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:56.434 "is_configured": true, 
00:24:56.434 "data_offset": 2048, 00:24:56.434 "data_size": 63488 00:24:56.434 }, 00:24:56.434 { 00:24:56.434 "name": "pt2", 00:24:56.434 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:56.434 "is_configured": true, 00:24:56.434 "data_offset": 2048, 00:24:56.434 "data_size": 63488 00:24:56.434 }, 00:24:56.434 { 00:24:56.434 "name": "pt3", 00:24:56.434 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:56.434 "is_configured": true, 00:24:56.434 "data_offset": 2048, 00:24:56.434 "data_size": 63488 00:24:56.434 }, 00:24:56.434 { 00:24:56.434 "name": "pt4", 00:24:56.434 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:56.434 "is_configured": true, 00:24:56.434 "data_offset": 2048, 00:24:56.434 "data_size": 63488 00:24:56.434 } 00:24:56.434 ] 00:24:56.434 } 00:24:56.434 } 00:24:56.434 }' 00:24:56.692 11:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:56.692 11:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:24:56.692 pt2 00:24:56.692 pt3 00:24:56.692 pt4' 00:24:56.692 11:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:56.692 11:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:24:56.692 11:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:56.950 11:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:56.950 "name": "pt1", 00:24:56.950 "aliases": [ 00:24:56.950 "00000000-0000-0000-0000-000000000001" 00:24:56.950 ], 00:24:56.950 "product_name": "passthru", 00:24:56.950 "block_size": 512, 00:24:56.950 "num_blocks": 65536, 00:24:56.950 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:56.950 "assigned_rate_limits": { 00:24:56.950 "rw_ios_per_sec": 0, 
00:24:56.950 "rw_mbytes_per_sec": 0, 00:24:56.950 "r_mbytes_per_sec": 0, 00:24:56.950 "w_mbytes_per_sec": 0 00:24:56.950 }, 00:24:56.950 "claimed": true, 00:24:56.950 "claim_type": "exclusive_write", 00:24:56.950 "zoned": false, 00:24:56.950 "supported_io_types": { 00:24:56.950 "read": true, 00:24:56.950 "write": true, 00:24:56.950 "unmap": true, 00:24:56.950 "flush": true, 00:24:56.950 "reset": true, 00:24:56.950 "nvme_admin": false, 00:24:56.950 "nvme_io": false, 00:24:56.950 "nvme_io_md": false, 00:24:56.950 "write_zeroes": true, 00:24:56.950 "zcopy": true, 00:24:56.950 "get_zone_info": false, 00:24:56.950 "zone_management": false, 00:24:56.950 "zone_append": false, 00:24:56.950 "compare": false, 00:24:56.950 "compare_and_write": false, 00:24:56.950 "abort": true, 00:24:56.950 "seek_hole": false, 00:24:56.950 "seek_data": false, 00:24:56.950 "copy": true, 00:24:56.950 "nvme_iov_md": false 00:24:56.950 }, 00:24:56.950 "memory_domains": [ 00:24:56.950 { 00:24:56.950 "dma_device_id": "system", 00:24:56.950 "dma_device_type": 1 00:24:56.950 }, 00:24:56.950 { 00:24:56.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:56.950 "dma_device_type": 2 00:24:56.950 } 00:24:56.950 ], 00:24:56.950 "driver_specific": { 00:24:56.950 "passthru": { 00:24:56.950 "name": "pt1", 00:24:56.950 "base_bdev_name": "malloc1" 00:24:56.950 } 00:24:56.950 } 00:24:56.950 }' 00:24:56.951 11:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:56.951 11:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:56.951 11:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:56.951 11:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:56.951 11:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:56.951 11:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:56.951 11:07:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:56.951 11:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:56.951 11:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:57.209 11:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:57.209 11:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:57.209 11:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:57.209 11:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:57.209 11:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:24:57.210 11:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:57.469 11:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:57.469 "name": "pt2", 00:24:57.469 "aliases": [ 00:24:57.469 "00000000-0000-0000-0000-000000000002" 00:24:57.469 ], 00:24:57.469 "product_name": "passthru", 00:24:57.469 "block_size": 512, 00:24:57.469 "num_blocks": 65536, 00:24:57.469 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:57.469 "assigned_rate_limits": { 00:24:57.469 "rw_ios_per_sec": 0, 00:24:57.469 "rw_mbytes_per_sec": 0, 00:24:57.469 "r_mbytes_per_sec": 0, 00:24:57.469 "w_mbytes_per_sec": 0 00:24:57.469 }, 00:24:57.469 "claimed": true, 00:24:57.469 "claim_type": "exclusive_write", 00:24:57.469 "zoned": false, 00:24:57.469 "supported_io_types": { 00:24:57.469 "read": true, 00:24:57.469 "write": true, 00:24:57.469 "unmap": true, 00:24:57.469 "flush": true, 00:24:57.469 "reset": true, 00:24:57.469 "nvme_admin": false, 00:24:57.469 "nvme_io": false, 00:24:57.469 "nvme_io_md": false, 00:24:57.469 "write_zeroes": true, 00:24:57.469 "zcopy": 
true, 00:24:57.469 "get_zone_info": false, 00:24:57.469 "zone_management": false, 00:24:57.469 "zone_append": false, 00:24:57.469 "compare": false, 00:24:57.469 "compare_and_write": false, 00:24:57.469 "abort": true, 00:24:57.469 "seek_hole": false, 00:24:57.469 "seek_data": false, 00:24:57.469 "copy": true, 00:24:57.469 "nvme_iov_md": false 00:24:57.469 }, 00:24:57.469 "memory_domains": [ 00:24:57.469 { 00:24:57.469 "dma_device_id": "system", 00:24:57.469 "dma_device_type": 1 00:24:57.469 }, 00:24:57.469 { 00:24:57.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:57.469 "dma_device_type": 2 00:24:57.469 } 00:24:57.469 ], 00:24:57.469 "driver_specific": { 00:24:57.469 "passthru": { 00:24:57.469 "name": "pt2", 00:24:57.469 "base_bdev_name": "malloc2" 00:24:57.469 } 00:24:57.469 } 00:24:57.469 }' 00:24:57.469 11:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:57.469 11:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:57.469 11:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:57.469 11:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:57.469 11:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:57.469 11:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:57.469 11:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:57.469 11:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:57.728 11:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:57.728 11:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:57.728 11:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:57.728 11:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
00:24:57.728 11:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:57.728 11:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:24:57.728 11:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:57.987 11:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:57.987 "name": "pt3", 00:24:57.987 "aliases": [ 00:24:57.987 "00000000-0000-0000-0000-000000000003" 00:24:57.987 ], 00:24:57.987 "product_name": "passthru", 00:24:57.987 "block_size": 512, 00:24:57.987 "num_blocks": 65536, 00:24:57.987 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:57.987 "assigned_rate_limits": { 00:24:57.987 "rw_ios_per_sec": 0, 00:24:57.987 "rw_mbytes_per_sec": 0, 00:24:57.987 "r_mbytes_per_sec": 0, 00:24:57.987 "w_mbytes_per_sec": 0 00:24:57.987 }, 00:24:57.987 "claimed": true, 00:24:57.987 "claim_type": "exclusive_write", 00:24:57.987 "zoned": false, 00:24:57.987 "supported_io_types": { 00:24:57.987 "read": true, 00:24:57.987 "write": true, 00:24:57.987 "unmap": true, 00:24:57.987 "flush": true, 00:24:57.987 "reset": true, 00:24:57.987 "nvme_admin": false, 00:24:57.987 "nvme_io": false, 00:24:57.987 "nvme_io_md": false, 00:24:57.987 "write_zeroes": true, 00:24:57.987 "zcopy": true, 00:24:57.987 "get_zone_info": false, 00:24:57.987 "zone_management": false, 00:24:57.987 "zone_append": false, 00:24:57.987 "compare": false, 00:24:57.987 "compare_and_write": false, 00:24:57.987 "abort": true, 00:24:57.987 "seek_hole": false, 00:24:57.987 "seek_data": false, 00:24:57.987 "copy": true, 00:24:57.987 "nvme_iov_md": false 00:24:57.987 }, 00:24:57.987 "memory_domains": [ 00:24:57.987 { 00:24:57.987 "dma_device_id": "system", 00:24:57.987 "dma_device_type": 1 00:24:57.987 }, 00:24:57.987 { 00:24:57.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:24:57.987 "dma_device_type": 2 00:24:57.987 } 00:24:57.987 ], 00:24:57.987 "driver_specific": { 00:24:57.987 "passthru": { 00:24:57.987 "name": "pt3", 00:24:57.987 "base_bdev_name": "malloc3" 00:24:57.987 } 00:24:57.987 } 00:24:57.987 }' 00:24:57.987 11:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:57.987 11:07:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:57.987 11:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:57.987 11:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:57.987 11:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:58.246 11:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:58.246 11:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:58.246 11:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:58.246 11:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:58.246 11:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:58.247 11:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:58.247 11:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:58.247 11:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:58.247 11:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:24:58.247 11:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:58.506 11:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:58.506 "name": "pt4", 00:24:58.506 "aliases": [ 00:24:58.506 
"00000000-0000-0000-0000-000000000004" 00:24:58.506 ], 00:24:58.506 "product_name": "passthru", 00:24:58.506 "block_size": 512, 00:24:58.506 "num_blocks": 65536, 00:24:58.506 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:58.506 "assigned_rate_limits": { 00:24:58.506 "rw_ios_per_sec": 0, 00:24:58.506 "rw_mbytes_per_sec": 0, 00:24:58.506 "r_mbytes_per_sec": 0, 00:24:58.506 "w_mbytes_per_sec": 0 00:24:58.506 }, 00:24:58.506 "claimed": true, 00:24:58.506 "claim_type": "exclusive_write", 00:24:58.506 "zoned": false, 00:24:58.506 "supported_io_types": { 00:24:58.506 "read": true, 00:24:58.506 "write": true, 00:24:58.506 "unmap": true, 00:24:58.506 "flush": true, 00:24:58.506 "reset": true, 00:24:58.506 "nvme_admin": false, 00:24:58.506 "nvme_io": false, 00:24:58.506 "nvme_io_md": false, 00:24:58.506 "write_zeroes": true, 00:24:58.506 "zcopy": true, 00:24:58.506 "get_zone_info": false, 00:24:58.506 "zone_management": false, 00:24:58.506 "zone_append": false, 00:24:58.506 "compare": false, 00:24:58.506 "compare_and_write": false, 00:24:58.506 "abort": true, 00:24:58.506 "seek_hole": false, 00:24:58.506 "seek_data": false, 00:24:58.506 "copy": true, 00:24:58.506 "nvme_iov_md": false 00:24:58.506 }, 00:24:58.506 "memory_domains": [ 00:24:58.506 { 00:24:58.506 "dma_device_id": "system", 00:24:58.506 "dma_device_type": 1 00:24:58.506 }, 00:24:58.506 { 00:24:58.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:58.506 "dma_device_type": 2 00:24:58.506 } 00:24:58.506 ], 00:24:58.506 "driver_specific": { 00:24:58.506 "passthru": { 00:24:58.506 "name": "pt4", 00:24:58.506 "base_bdev_name": "malloc4" 00:24:58.506 } 00:24:58.506 } 00:24:58.506 }' 00:24:58.506 11:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:58.506 11:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:58.506 11:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:58.506 11:07:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:58.765 11:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:58.765 11:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:58.765 11:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:58.765 11:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:58.765 11:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:58.765 11:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:58.765 11:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:58.765 11:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:58.765 11:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:58.765 11:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:24:59.024 [2024-07-25 11:07:06.063512] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:59.024 11:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 94eabfd9-a88d-4b80-b94c-e93d4601d7bb '!=' 94eabfd9-a88d-4b80-b94c-e93d4601d7bb ']' 00:24:59.024 11:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy concat 00:24:59.024 11:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:59.024 11:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:24:59.024 11:07:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 3667118 00:24:59.024 11:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 3667118 ']' 00:24:59.024 11:07:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 3667118 00:24:59.024 11:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:24:59.024 11:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:59.024 11:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3667118 00:24:59.024 11:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:59.024 11:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:59.024 11:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3667118' 00:24:59.024 killing process with pid 3667118 00:24:59.024 11:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 3667118 00:24:59.024 [2024-07-25 11:07:06.141887] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:59.024 [2024-07-25 11:07:06.141981] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:59.024 11:07:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 3667118 00:24:59.024 [2024-07-25 11:07:06.142068] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:59.024 [2024-07-25 11:07:06.142084] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:24:59.594 [2024-07-25 11:07:06.609264] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:01.498 11:07:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:25:01.498 00:25:01.498 real 0m18.342s 00:25:01.498 user 0m31.040s 00:25:01.498 sys 0m3.016s 00:25:01.498 11:07:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:01.498 11:07:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.498 ************************************ 00:25:01.498 END TEST raid_superblock_test 00:25:01.498 ************************************ 00:25:01.498 11:07:08 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:25:01.498 11:07:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:25:01.498 11:07:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:01.498 11:07:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:01.498 ************************************ 00:25:01.498 START TEST raid_read_error_test 00:25:01.498 ************************************ 00:25:01.498 11:07:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:25:01.498 11:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat 00:25:01.498 11:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:25:01.498 11:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:25:01.498 11:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:25:01.498 11:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:25:01.498 11:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:25:01.498 11:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:25:01.498 11:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:25:01.498 11:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:25:01.498 11:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:25:01.498 11:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:25:01.498 
11:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:25:01.498 11:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:25:01.498 11:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:25:01.498 11:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:25:01.498 11:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:25:01.498 11:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:25:01.498 11:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:01.498 11:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:25:01.498 11:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:25:01.498 11:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:25:01.498 11:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:25:01.498 11:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:25:01.498 11:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:25:01.499 11:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']' 00:25:01.499 11:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64 00:25:01.499 11:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64' 00:25:01.499 11:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:25:01.499 11:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.JaQhlWsWTX 00:25:01.499 11:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=3670589 00:25:01.499 
11:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 3670589 /var/tmp/spdk-raid.sock 00:25:01.499 11:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:25:01.499 11:07:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 3670589 ']' 00:25:01.499 11:07:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:01.499 11:07:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:01.499 11:07:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:01.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:01.499 11:07:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:01.499 11:07:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:01.499 [2024-07-25 11:07:08.583641] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:25:01.499 [2024-07-25 11:07:08.583760] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3670589 ] 00:25:01.757 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:01.757 EAL: Requested device 0000:3d:01.0 cannot be used 00:25:01.757 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:01.757 EAL: Requested device 0000:3d:01.1 cannot be used 00:25:01.757 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:01.758 EAL: Requested device 0000:3d:01.2 cannot be used 00:25:01.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:01.758 EAL: Requested device 0000:3d:01.3 cannot be used 00:25:01.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:01.758 EAL: Requested device 0000:3d:01.4 cannot be used 00:25:01.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:01.758 EAL: Requested device 0000:3d:01.5 cannot be used 00:25:01.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:01.758 EAL: Requested device 0000:3d:01.6 cannot be used 00:25:01.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:01.758 EAL: Requested device 0000:3d:01.7 cannot be used 00:25:01.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:01.758 EAL: Requested device 0000:3d:02.0 cannot be used 00:25:01.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:01.758 EAL: Requested device 0000:3d:02.1 cannot be used 00:25:01.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:01.758 EAL: Requested device 0000:3d:02.2 cannot be used 00:25:01.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:01.758 EAL: Requested device 0000:3d:02.3 cannot be used 
00:25:01.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:01.758 EAL: Requested device 0000:3d:02.4 cannot be used 00:25:01.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:01.758 EAL: Requested device 0000:3d:02.5 cannot be used 00:25:01.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:01.758 EAL: Requested device 0000:3d:02.6 cannot be used 00:25:01.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:01.758 EAL: Requested device 0000:3d:02.7 cannot be used 00:25:01.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:01.758 EAL: Requested device 0000:3f:01.0 cannot be used 00:25:01.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:01.758 EAL: Requested device 0000:3f:01.1 cannot be used 00:25:01.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:01.758 EAL: Requested device 0000:3f:01.2 cannot be used 00:25:01.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:01.758 EAL: Requested device 0000:3f:01.3 cannot be used 00:25:01.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:01.758 EAL: Requested device 0000:3f:01.4 cannot be used 00:25:01.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:01.758 EAL: Requested device 0000:3f:01.5 cannot be used 00:25:01.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:01.758 EAL: Requested device 0000:3f:01.6 cannot be used 00:25:01.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:01.758 EAL: Requested device 0000:3f:01.7 cannot be used 00:25:01.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:01.758 EAL: Requested device 0000:3f:02.0 cannot be used 00:25:01.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:01.758 EAL: Requested device 0000:3f:02.1 cannot be used 00:25:01.758 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:01.758 EAL: Requested device 0000:3f:02.2 cannot be used 00:25:01.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:01.758 EAL: Requested device 0000:3f:02.3 cannot be used 00:25:01.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:01.758 EAL: Requested device 0000:3f:02.4 cannot be used 00:25:01.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:01.758 EAL: Requested device 0000:3f:02.5 cannot be used 00:25:01.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:01.758 EAL: Requested device 0000:3f:02.6 cannot be used 00:25:01.758 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:01.758 EAL: Requested device 0000:3f:02.7 cannot be used 00:25:01.758 [2024-07-25 11:07:08.807470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.017 [2024-07-25 11:07:09.091751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.583 [2024-07-25 11:07:09.438933] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:02.583 [2024-07-25 11:07:09.438968] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:02.583 11:07:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:02.583 11:07:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:25:02.583 11:07:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:25:02.583 11:07:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:02.840 BaseBdev1_malloc 00:25:02.840 11:07:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc
00:25:03.098 true
00:25:03.098 11:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:25:03.357 [2024-07-25 11:07:10.349857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:25:03.357 [2024-07-25 11:07:10.349918] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:03.357 [2024-07-25 11:07:10.349944] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980
00:25:03.357 [2024-07-25 11:07:10.349966] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:03.357 [2024-07-25 11:07:10.352774] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:03.357 [2024-07-25 11:07:10.352814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:25:03.357 BaseBdev1
00:25:03.357 11:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}"
00:25:03.357 11:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:25:03.616 BaseBdev2_malloc
00:25:03.616 11:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc
00:25:03.875 true
00:25:03.875 11:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:25:04.133 [2024-07-25 11:07:11.085272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:25:04.133 [2024-07-25 11:07:11.085331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:04.133 [2024-07-25 11:07:11.085356] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880
00:25:04.133 [2024-07-25 11:07:11.085377] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:04.133 [2024-07-25 11:07:11.088158] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:04.133 [2024-07-25 11:07:11.088195] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:25:04.133 BaseBdev2
00:25:04.133 11:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}"
00:25:04.133 11:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:25:04.393 BaseBdev3_malloc
00:25:04.393 11:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc
00:25:04.651 true
00:25:04.651 11:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:25:04.910 [2024-07-25 11:07:11.798665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:25:04.910 [2024-07-25 11:07:11.798721] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:04.910 [2024-07-25 11:07:11.798748] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780
00:25:04.910 [2024-07-25 11:07:11.798766] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:04.910 [2024-07-25 11:07:11.801510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:04.910 [2024-07-25 11:07:11.801547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:25:04.910 BaseBdev3
00:25:04.910 11:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}"
00:25:04.910 11:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:25:05.168 BaseBdev4_malloc
00:25:05.168 11:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc
00:25:05.427 true
00:25:05.427 11:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4
00:25:05.427 [2024-07-25 11:07:12.513443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc
00:25:05.427 [2024-07-25 11:07:12.513503] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:05.427 [2024-07-25 11:07:12.513530] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042680
00:25:05.427 [2024-07-25 11:07:12.513548] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:05.427 [2024-07-25 11:07:12.516333] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:05.427 [2024-07-25 11:07:12.516369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:25:05.427 BaseBdev4
00:25:05.427 11:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s
00:25:05.686 [2024-07-25 11:07:12.742113] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:25:05.686 [2024-07-25 11:07:12.744480] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:25:05.686 [2024-07-25 11:07:12.744576] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:25:05.686 [2024-07-25 11:07:12.744655] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:25:05.686 [2024-07-25 11:07:12.744933] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580
00:25:05.686 [2024-07-25 11:07:12.744953] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:25:05.686 [2024-07-25 11:07:12.745315] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710
00:25:05.686 [2024-07-25 11:07:12.745585] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580
00:25:05.686 [2024-07-25 11:07:12.745600] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580
00:25:05.686 [2024-07-25 11:07:12.745836] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:25:05.686 11:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4
00:25:05.686 11:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:25:05.687 11:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:25:05.687 11:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:25:05.687 11:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:25:05.687 11:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:25:05.687 11:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:25:05.687 11:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:25:05.687 11:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:25:05.687 11:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:25:05.687 11:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:05.687 11:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:05.945 11:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:25:05.945 "name": "raid_bdev1",
00:25:05.945 "uuid": "f2d878be-c656-41a3-91b9-530723fb7b7c",
00:25:05.945 "strip_size_kb": 64,
00:25:05.945 "state": "online",
00:25:05.945 "raid_level": "concat",
00:25:05.945 "superblock": true,
00:25:05.945 "num_base_bdevs": 4,
00:25:05.945 "num_base_bdevs_discovered": 4,
00:25:05.945 "num_base_bdevs_operational": 4,
00:25:05.945 "base_bdevs_list": [
00:25:05.945 {
00:25:05.945 "name": "BaseBdev1",
00:25:05.945 "uuid": "8ab1dfd3-c2ae-5875-90ff-a3da1335fe50",
00:25:05.945 "is_configured": true,
00:25:05.945 "data_offset": 2048,
00:25:05.945 "data_size": 63488
00:25:05.945 },
00:25:05.945 {
00:25:05.945 "name": "BaseBdev2",
00:25:05.945 "uuid": "4229552b-d9ce-503f-b0ea-8bcd4452168a",
00:25:05.945 "is_configured": true,
00:25:05.945 "data_offset": 2048,
00:25:05.945 "data_size": 63488
00:25:05.945 },
00:25:05.945 {
00:25:05.945 "name": "BaseBdev3",
00:25:05.945 "uuid": "1d7e0043-98c7-566d-9a7a-b5ae48b497f2",
00:25:05.945 "is_configured": true,
00:25:05.945 "data_offset": 2048,
00:25:05.945 "data_size": 63488
00:25:05.945 },
00:25:05.945 {
00:25:05.945 "name": "BaseBdev4",
00:25:05.945 "uuid": "8b4a75a8-5b32-5728-a056-69f05575ec1b",
00:25:05.945 "is_configured": true,
00:25:05.945 "data_offset": 2048,
00:25:05.945 "data_size": 63488
00:25:05.945 }
00:25:05.945 ]
00:25:05.945 }'
00:25:05.945 11:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:25:05.945 11:07:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:25:06.513 11:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1
00:25:06.513 11:07:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
00:25:06.513 [2024-07-25 11:07:13.630505] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0
00:25:07.458 11:07:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:25:07.738 11:07:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs
00:25:07.738 11:07:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]]
00:25:07.738 11:07:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4
00:25:07.738 11:07:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4
00:25:07.738 11:07:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:25:07.738 11:07:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:25:07.738 11:07:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:25:07.738 11:07:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:25:07.738 11:07:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:25:07.738 11:07:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:25:07.738 11:07:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:25:07.738 11:07:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:25:07.738 11:07:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:25:07.738 11:07:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:07.738 11:07:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:07.997 11:07:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:25:07.997 "name": "raid_bdev1",
00:25:07.997 "uuid": "f2d878be-c656-41a3-91b9-530723fb7b7c",
00:25:07.997 "strip_size_kb": 64,
00:25:07.997 "state": "online",
00:25:07.997 "raid_level": "concat",
00:25:07.997 "superblock": true,
00:25:07.997 "num_base_bdevs": 4,
00:25:07.997 "num_base_bdevs_discovered": 4,
00:25:07.997 "num_base_bdevs_operational": 4,
00:25:07.997 "base_bdevs_list": [
00:25:07.997 {
00:25:07.997 "name": "BaseBdev1",
00:25:07.997 "uuid": "8ab1dfd3-c2ae-5875-90ff-a3da1335fe50",
00:25:07.997 "is_configured": true,
00:25:07.997 "data_offset": 2048,
00:25:07.997 "data_size": 63488
00:25:07.997 },
00:25:07.997 {
00:25:07.997 "name": "BaseBdev2",
00:25:07.997 "uuid": "4229552b-d9ce-503f-b0ea-8bcd4452168a",
00:25:07.997 "is_configured": true,
00:25:07.997 "data_offset": 2048,
00:25:07.997 "data_size": 63488
00:25:07.997 },
00:25:07.997 {
00:25:07.997 "name": "BaseBdev3",
00:25:07.997 "uuid": "1d7e0043-98c7-566d-9a7a-b5ae48b497f2",
00:25:07.997 "is_configured": true,
00:25:07.997 "data_offset": 2048,
00:25:07.997 "data_size": 63488
00:25:07.997 },
00:25:07.997 {
00:25:07.997 "name": "BaseBdev4",
00:25:07.997 "uuid": "8b4a75a8-5b32-5728-a056-69f05575ec1b",
00:25:07.997 "is_configured": true,
00:25:07.997 "data_offset": 2048,
00:25:07.997 "data_size": 63488
00:25:07.997 }
00:25:07.997 ]
00:25:07.997 }'
00:25:07.997 11:07:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:25:07.997 11:07:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:25:08.564 11:07:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:25:08.823 [2024-07-25 11:07:15.790329] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:25:08.823 [2024-07-25 11:07:15.790378] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:25:08.823 [2024-07-25 11:07:15.793916] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:25:08.823 [2024-07-25 11:07:15.793983] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:25:08.823 [2024-07-25 11:07:15.794036] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:25:08.823 [2024-07-25 11:07:15.794061] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline
00:25:08.823 0
00:25:08.823 11:07:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 3670589
00:25:08.823 11:07:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 3670589 ']'
00:25:08.823 11:07:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 3670589
00:25:08.823 11:07:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname
00:25:08.823 11:07:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:25:08.823 11:07:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3670589
00:25:08.823 11:07:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:25:08.823 11:07:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:25:08.823 11:07:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3670589'
killing process with pid 3670589
00:25:08.823 11:07:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 3670589
00:25:08.823 [2024-07-25 11:07:15.864608] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:25:08.823 11:07:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 3670589
00:25:09.391 [2024-07-25 11:07:16.219404] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:25:11.294 11:07:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.JaQhlWsWTX
00:25:11.294 11:07:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1
00:25:11.294 11:07:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}'
00:25:11.294 11:07:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.46
00:25:11.294 11:07:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat
00:25:11.294 11:07:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in
00:25:11.294 11:07:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1
00:25:11.294 11:07:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.46 != \0\.\0\0 ]]
00:25:11.294
00:25:11.294 real 0m9.526s
00:25:11.294 user 0m13.698s
00:25:11.294 sys 0m1.405s
00:25:11.294 11:07:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:25:11.294 11:07:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:25:11.294 ************************************
00:25:11.294 END TEST raid_read_error_test
00:25:11.294 ************************************
00:25:11.294 11:07:18 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test concat 4 write
00:25:11.294 11:07:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:25:11.294 11:07:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:25:11.294 11:07:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:25:11.294 ************************************
00:25:11.294 START TEST raid_write_error_test
00:25:11.294 ************************************
00:25:11.294 11:07:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write
00:25:11.294 11:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=concat
00:25:11.294 11:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4
00:25:11.294 11:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write
00:25:11.294 11:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 ))
00:25:11.294 11:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs ))
00:25:11.294 11:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1
00:25:11.294 11:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ ))
00:25:11.294 11:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs ))
00:25:11.294 11:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2
00:25:11.294 11:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ ))
00:25:11.294 11:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs ))
00:25:11.294 11:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3
00:25:11.294 11:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ ))
00:25:11.294 11:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs ))
00:25:11.294 11:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4
00:25:11.294 11:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ ))
00:25:11.294 11:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs ))
00:25:11.294 11:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:25:11.294 11:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs
00:25:11.294 11:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1
00:25:11.294 11:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size
00:25:11.294 11:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg
00:25:11.294 11:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log
00:25:11.294 11:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s
00:25:11.294 11:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' concat '!=' raid1 ']'
00:25:11.294 11:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # strip_size=64
00:25:11.294 11:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # create_arg+=' -z 64'
00:25:11.294 11:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest
00:25:11.294 11:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.tlvAzOKJQb
00:25:11.294 11:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=3672270
00:25:11.294 11:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 3672270 /var/tmp/spdk-raid.sock
00:25:11.294 11:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:25:11.294 11:07:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 3672270 ']'
00:25:11.294 11:07:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:25:11.294 11:07:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:25:11.294 11:07:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:25:11.294 11:07:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:25:11.294 11:07:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:25:11.294 [2024-07-25 11:07:18.197123] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
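The verify_raid_bdev_state helper exercised throughout this log pulls bdev_raid_get_bdevs output through jq -r '.[] | select(.name == "raid_bdev1")' and compares the state, RAID level, strip size, and base-bdev counts against the expected values. A minimal standalone Python sketch of that same check, using a trimmed copy of the raid_bdev1 info captured above (the function name and the reduced field set are illustrative, not part of the SPDK test suite):

```python
import json

# Trimmed bdev_raid_get_bdevs output for raid_bdev1, copied from this log.
RAID_BDEV_INFO = """
[{
  "name": "raid_bdev1",
  "strip_size_kb": 64,
  "state": "online",
  "raid_level": "concat",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 4,
  "num_base_bdevs_operational": 4
}]
"""

def verify_raid_bdev_state(info_json, name, expected_state, raid_level,
                           strip_size, num_operational):
    """Mirror of the shell helper: select the named raid bdev and
    assert its reported state matches what the test expects."""
    bdev = next(b for b in json.loads(info_json) if b["name"] == name)
    assert bdev["state"] == expected_state
    assert bdev["raid_level"] == raid_level
    assert bdev["strip_size_kb"] == strip_size
    assert bdev["num_base_bdevs_operational"] == num_operational
    return bdev

bdev = verify_raid_bdev_state(RAID_BDEV_INFO, "raid_bdev1",
                              "online", "concat", 64, 4)
print(bdev["num_base_bdevs_discovered"])
```

The shell version reaches the same result with jq and string comparisons; either way, a mismatch in any field fails the test before error injection proceeds.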
00:25:11.294 [2024-07-25 11:07:18.197257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3672270 ]
00:25:11.294 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:11.294 EAL: Requested device 0000:3d:01.0 cannot be used
00:25:11.294 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:11.294 EAL: Requested device 0000:3d:01.1 cannot be used
00:25:11.294 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:11.294 EAL: Requested device 0000:3d:01.2 cannot be used
00:25:11.294 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:11.294 EAL: Requested device 0000:3d:01.3 cannot be used
00:25:11.294 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:11.294 EAL: Requested device 0000:3d:01.4 cannot be used
00:25:11.294 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:11.294 EAL: Requested device 0000:3d:01.5 cannot be used
00:25:11.294 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:11.294 EAL: Requested device 0000:3d:01.6 cannot be used
00:25:11.294 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:11.294 EAL: Requested device 0000:3d:01.7 cannot be used
00:25:11.294 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:11.294 EAL: Requested device 0000:3d:02.0 cannot be used
00:25:11.294 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:11.294 EAL: Requested device 0000:3d:02.1 cannot be used
00:25:11.294 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:11.294 EAL: Requested device 0000:3d:02.2 cannot be used
00:25:11.294 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:11.294 EAL: Requested device 0000:3d:02.3 cannot be used
00:25:11.295 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:11.295 EAL: Requested device 0000:3d:02.4 cannot be used
00:25:11.295 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:11.295 EAL: Requested device 0000:3d:02.5 cannot be used
00:25:11.295 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:11.295 EAL: Requested device 0000:3d:02.6 cannot be used
00:25:11.295 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:11.295 EAL: Requested device 0000:3d:02.7 cannot be used
00:25:11.295 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:11.295 EAL: Requested device 0000:3f:01.0 cannot be used
00:25:11.295 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:11.295 EAL: Requested device 0000:3f:01.1 cannot be used
00:25:11.295 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:11.295 EAL: Requested device 0000:3f:01.2 cannot be used
00:25:11.295 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:11.295 EAL: Requested device 0000:3f:01.3 cannot be used
00:25:11.295 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:11.295 EAL: Requested device 0000:3f:01.4 cannot be used
00:25:11.295 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:11.295 EAL: Requested device 0000:3f:01.5 cannot be used
00:25:11.295 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:11.295 EAL: Requested device 0000:3f:01.6 cannot be used
00:25:11.295 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:11.295 EAL: Requested device 0000:3f:01.7 cannot be used
00:25:11.295 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:11.295 EAL: Requested device 0000:3f:02.0 cannot be used
00:25:11.295 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:11.295 EAL: Requested device 0000:3f:02.1 cannot be used
00:25:11.295 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:11.295 EAL: Requested device 0000:3f:02.2 cannot be used
00:25:11.295 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:11.295 EAL: Requested device 0000:3f:02.3 cannot be used
00:25:11.295 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:11.295 EAL: Requested device 0000:3f:02.4 cannot be used
00:25:11.295 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:11.295 EAL: Requested device 0000:3f:02.5 cannot be used
00:25:11.295 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:11.295 EAL: Requested device 0000:3f:02.6 cannot be used
00:25:11.295 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:25:11.295 EAL: Requested device 0000:3f:02.7 cannot be used
00:25:11.554 [2024-07-25 11:07:18.424336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:11.813 [2024-07-25 11:07:18.686314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:25:12.071 [2024-07-25 11:07:19.028321] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:25:12.071 [2024-07-25 11:07:19.028356] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:25:12.330 11:07:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:25:12.330 11:07:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0
00:25:12.330 11:07:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}"
00:25:12.330 11:07:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:25:12.588 BaseBdev1_malloc
00:25:12.588 11:07:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc
00:25:12.588 true
00:25:12.848 11:07:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:25:12.848 [2024-07-25 11:07:19.917727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:25:12.848 [2024-07-25 11:07:19.917785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:12.848 [2024-07-25 11:07:19.917813] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980
00:25:12.848 [2024-07-25 11:07:19.917836] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:12.848 [2024-07-25 11:07:19.920602] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:12.848 [2024-07-25 11:07:19.920641] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:25:12.848 BaseBdev1
00:25:12.848 11:07:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}"
00:25:12.848 11:07:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:25:13.106 BaseBdev2_malloc
00:25:13.106 11:07:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc
00:25:13.365 true
00:25:13.365 11:07:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:25:13.624 [2024-07-25 11:07:20.646654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:25:13.624 [2024-07-25 11:07:20.646715] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:13.624 [2024-07-25 11:07:20.646743] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880
00:25:13.624 [2024-07-25 11:07:20.646764] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:13.624 [2024-07-25 11:07:20.649517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:13.624 [2024-07-25 11:07:20.649556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:25:13.624 BaseBdev2
00:25:13.624 11:07:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}"
00:25:13.624 11:07:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:25:13.882 BaseBdev3_malloc
00:25:13.882 11:07:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc
00:25:14.141 true
00:25:14.141 11:07:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3
00:25:14.400 [2024-07-25 11:07:21.359481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc
00:25:14.400 [2024-07-25 11:07:21.359539] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:14.400 [2024-07-25 11:07:21.359567] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780
00:25:14.400 [2024-07-25 11:07:21.359586] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:14.400 [2024-07-25 11:07:21.362359] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:14.400 [2024-07-25 11:07:21.362394] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:25:14.400 BaseBdev3
00:25:14.400 11:07:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}"
00:25:14.400 11:07:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:25:14.658 BaseBdev4_malloc
00:25:14.658 11:07:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc
00:25:14.917 true
00:25:14.917 11:07:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4
00:25:15.176 [2024-07-25 11:07:22.076986] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc
00:25:15.176 [2024-07-25 11:07:22.077049] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:25:15.176 [2024-07-25 11:07:22.077078] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042680
00:25:15.176 [2024-07-25 11:07:22.077097] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:25:15.176 [2024-07-25 11:07:22.079899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:25:15.176 [2024-07-25 11:07:22.079937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:25:15.176 BaseBdev4
00:25:15.176 11:07:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s
00:25:15.176 [2024-07-25 11:07:22.293615] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:25:15.435 [2024-07-25 11:07:22.296002] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:25:15.435 [2024-07-25 11:07:22.296097] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:25:15.436 [2024-07-25 11:07:22.296190] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:25:15.436 [2024-07-25 11:07:22.296456] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580
00:25:15.436 [2024-07-25 11:07:22.296478] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:25:15.436 [2024-07-25 11:07:22.296832] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710
00:25:15.436 [2024-07-25 11:07:22.297094] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580
00:25:15.436 [2024-07-25 11:07:22.297109] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580
00:25:15.436 [2024-07-25 11:07:22.297363] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:25:15.436 11:07:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4
00:25:15.436 11:07:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:25:15.436 11:07:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:25:15.436 11:07:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:25:15.436 11:07:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64
00:25:15.436 11:07:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:25:15.436 11:07:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:25:15.436 11:07:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:25:15.436 11:07:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:25:15.436 11:07:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp
00:25:15.436 11:07:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:25:15.436 11:07:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:25:15.436 11:07:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:25:15.436 "name": "raid_bdev1",
00:25:15.436 "uuid": "e840a92f-c460-4d52-82c2-ac41dc9cd9ee",
00:25:15.436 "strip_size_kb": 64,
00:25:15.436 "state": "online",
00:25:15.436 "raid_level": "concat",
00:25:15.436 "superblock": true,
00:25:15.436 "num_base_bdevs": 4,
00:25:15.436 "num_base_bdevs_discovered": 4,
00:25:15.436 "num_base_bdevs_operational": 4,
00:25:15.436 "base_bdevs_list": [
00:25:15.436 {
00:25:15.436 "name": "BaseBdev1",
00:25:15.436 "uuid": "ffac6c1a-3168-53da-bf73-d940e89bf2a3",
00:25:15.436 "is_configured": true,
00:25:15.436 "data_offset": 2048,
00:25:15.436 "data_size": 63488
00:25:15.436 },
00:25:15.436 {
00:25:15.436 "name": "BaseBdev2",
00:25:15.436 "uuid": "ffbfedbd-1085-5ae9-a047-a768121cbc09",
00:25:15.436 "is_configured": true,
00:25:15.436 "data_offset": 2048,
00:25:15.436 "data_size": 63488
00:25:15.436 },
00:25:15.436 {
00:25:15.436 "name": "BaseBdev3",
00:25:15.436 "uuid": "9b03d258-05c8-5fdd-882a-69d780da58df",
00:25:15.436 "is_configured": true,
00:25:15.436 "data_offset": 2048,
00:25:15.436 "data_size": 63488
00:25:15.436 },
00:25:15.436 {
00:25:15.436 "name": "BaseBdev4",
00:25:15.436 "uuid": "e3b7f707-ba0b-5196-899f-a99092f54dab",
00:25:15.436 "is_configured": true,
00:25:15.436 "data_offset": 2048,
00:25:15.436 "data_size": 63488
00:25:15.436 }
00:25:15.436 ]
00:25:15.436 }'
00:25:15.436 11:07:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:25:15.436 11:07:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:25:16.004 11:07:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1
00:25:16.004 11:07:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
00:25:16.263 [2024-07-25 11:07:23.194164] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0
00:25:17.200 11:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:25:17.460 11:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs
00:25:17.460 11:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ concat = \r\a\i\d\1 ]]
00:25:17.460 11:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4
00:25:17.460 11:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4
00:25:17.460 11:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:25:17.460 11:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:25:17.460 11:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat
00:25:17.460 11:07:24 bdev_raid.raid_write_error_test --
bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:17.460 11:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:17.460 11:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:17.460 11:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:17.460 11:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:17.460 11:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:17.460 11:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:17.460 11:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:17.460 11:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:17.460 "name": "raid_bdev1", 00:25:17.460 "uuid": "e840a92f-c460-4d52-82c2-ac41dc9cd9ee", 00:25:17.460 "strip_size_kb": 64, 00:25:17.460 "state": "online", 00:25:17.460 "raid_level": "concat", 00:25:17.460 "superblock": true, 00:25:17.460 "num_base_bdevs": 4, 00:25:17.460 "num_base_bdevs_discovered": 4, 00:25:17.460 "num_base_bdevs_operational": 4, 00:25:17.460 "base_bdevs_list": [ 00:25:17.460 { 00:25:17.460 "name": "BaseBdev1", 00:25:17.460 "uuid": "ffac6c1a-3168-53da-bf73-d940e89bf2a3", 00:25:17.460 "is_configured": true, 00:25:17.460 "data_offset": 2048, 00:25:17.460 "data_size": 63488 00:25:17.460 }, 00:25:17.460 { 00:25:17.460 "name": "BaseBdev2", 00:25:17.460 "uuid": "ffbfedbd-1085-5ae9-a047-a768121cbc09", 00:25:17.460 "is_configured": true, 00:25:17.460 "data_offset": 2048, 00:25:17.460 "data_size": 63488 00:25:17.460 }, 00:25:17.460 { 00:25:17.460 "name": "BaseBdev3", 00:25:17.460 "uuid": "9b03d258-05c8-5fdd-882a-69d780da58df", 00:25:17.460 
"is_configured": true, 00:25:17.460 "data_offset": 2048, 00:25:17.460 "data_size": 63488 00:25:17.460 }, 00:25:17.460 { 00:25:17.460 "name": "BaseBdev4", 00:25:17.460 "uuid": "e3b7f707-ba0b-5196-899f-a99092f54dab", 00:25:17.460 "is_configured": true, 00:25:17.460 "data_offset": 2048, 00:25:17.460 "data_size": 63488 00:25:17.460 } 00:25:17.460 ] 00:25:17.460 }' 00:25:17.460 11:07:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:17.460 11:07:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:18.028 11:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:18.288 [2024-07-25 11:07:25.350268] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:18.288 [2024-07-25 11:07:25.350308] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:18.288 [2024-07-25 11:07:25.353551] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:18.288 [2024-07-25 11:07:25.353610] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:18.288 [2024-07-25 11:07:25.353663] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:18.288 [2024-07-25 11:07:25.353690] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:25:18.288 0 00:25:18.288 11:07:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 3672270 00:25:18.288 11:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 3672270 ']' 00:25:18.288 11:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 3672270 00:25:18.288 11:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:25:18.288 
11:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:18.288 11:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3672270 00:25:18.547 11:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:18.547 11:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:18.547 11:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3672270' 00:25:18.547 killing process with pid 3672270 00:25:18.547 11:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 3672270 00:25:18.547 [2024-07-25 11:07:25.421749] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:18.547 11:07:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 3672270 00:25:18.806 [2024-07-25 11:07:25.781263] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:20.710 11:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.tlvAzOKJQb 00:25:20.710 11:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:25:20.710 11:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:25:20.710 11:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.47 00:25:20.710 11:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy concat 00:25:20.710 11:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:20.710 11:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:25:20.710 11:07:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@863 -- # [[ 0.47 != \0\.\0\0 ]] 00:25:20.710 00:25:20.710 real 0m9.464s 00:25:20.710 user 0m13.619s 00:25:20.710 sys 0m1.396s 00:25:20.710 11:07:27 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:20.710 11:07:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.710 ************************************ 00:25:20.710 END TEST raid_write_error_test 00:25:20.710 ************************************ 00:25:20.710 11:07:27 bdev_raid -- bdev/bdev_raid.sh@946 -- # for level in raid0 concat raid1 00:25:20.710 11:07:27 bdev_raid -- bdev/bdev_raid.sh@947 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:25:20.710 11:07:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:25:20.710 11:07:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:20.710 11:07:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:20.710 ************************************ 00:25:20.710 START TEST raid_state_function_test 00:25:20.710 ************************************ 00:25:20.710 11:07:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:25:20.710 11:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:25:20.710 11:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:25:20.710 11:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:25:20.710 11:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:25:20.710 11:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:25:20.710 11:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:20.710 11:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:25:20.710 11:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:20.711 11:07:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:20.711 11:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:25:20.711 11:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:20.711 11:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:20.711 11:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:25:20.711 11:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:20.711 11:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:20.711 11:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:25:20.711 11:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:20.711 11:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:20.711 11:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:20.711 11:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:25:20.711 11:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:25:20.711 11:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:25:20.711 11:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:25:20.711 11:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:25:20.711 11:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:25:20.711 11:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:25:20.711 11:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' 
false = true ']' 00:25:20.711 11:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:25:20.711 11:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=3673952 00:25:20.711 11:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3673952' 00:25:20.711 Process raid pid: 3673952 00:25:20.711 11:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:20.711 11:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 3673952 /var/tmp/spdk-raid.sock 00:25:20.711 11:07:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 3673952 ']' 00:25:20.711 11:07:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:20.711 11:07:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:20.711 11:07:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:20.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:20.711 11:07:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:20.711 11:07:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:20.711 [2024-07-25 11:07:27.743032] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
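`waitforlisten 3673952 /var/tmp/spdk-raid.sock` above blocks until the freshly launched `bdev_svc` app accepts connections on its RPC UNIX socket. The polling idea can be sketched as follows (an illustrative re-implementation, not the `autotest_common.sh` helper; the retry count and delay are assumptions):

```python
import socket
import time

def waitforlisten(sock_path, retries=100, delay=0.1):
    # Poll the RPC UNIX-domain socket until the target process is
    # listening, as the shell helper does before issuing RPCs.
    for _ in range(retries):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(sock_path)
            return True  # app is up and accepting RPC connections
        except (FileNotFoundError, ConnectionRefusedError):
            time.sleep(delay)  # socket not created / not accepting yet
        finally:
            s.close()
    return False
```

Issuing an RPC such as `bdev_raid_create` before this wait completes would fail with a connection error, which is why every `rpc.py -s /var/tmp/spdk-raid.sock` call in the trace comes after the wait.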
00:25:20.711 [2024-07-25 11:07:27.743175] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:20.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:20.971 EAL: Requested device 0000:3d:01.0 cannot be used 00:25:20.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:20.971 EAL: Requested device 0000:3d:01.1 cannot be used 00:25:20.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:20.971 EAL: Requested device 0000:3d:01.2 cannot be used 00:25:20.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:20.971 EAL: Requested device 0000:3d:01.3 cannot be used 00:25:20.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:20.971 EAL: Requested device 0000:3d:01.4 cannot be used 00:25:20.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:20.971 EAL: Requested device 0000:3d:01.5 cannot be used 00:25:20.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:20.971 EAL: Requested device 0000:3d:01.6 cannot be used 00:25:20.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:20.971 EAL: Requested device 0000:3d:01.7 cannot be used 00:25:20.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:20.971 EAL: Requested device 0000:3d:02.0 cannot be used 00:25:20.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:20.971 EAL: Requested device 0000:3d:02.1 cannot be used 00:25:20.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:20.971 EAL: Requested device 0000:3d:02.2 cannot be used 00:25:20.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:20.971 EAL: Requested device 0000:3d:02.3 cannot be used 00:25:20.971 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:20.971 EAL: Requested device 0000:3d:02.4 cannot be used 00:25:20.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:20.971 EAL: Requested device 0000:3d:02.5 cannot be used 00:25:20.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:20.971 EAL: Requested device 0000:3d:02.6 cannot be used 00:25:20.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:20.971 EAL: Requested device 0000:3d:02.7 cannot be used 00:25:20.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:20.971 EAL: Requested device 0000:3f:01.0 cannot be used 00:25:20.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:20.971 EAL: Requested device 0000:3f:01.1 cannot be used 00:25:20.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:20.971 EAL: Requested device 0000:3f:01.2 cannot be used 00:25:20.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:20.971 EAL: Requested device 0000:3f:01.3 cannot be used 00:25:20.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:20.971 EAL: Requested device 0000:3f:01.4 cannot be used 00:25:20.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:20.971 EAL: Requested device 0000:3f:01.5 cannot be used 00:25:20.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:20.971 EAL: Requested device 0000:3f:01.6 cannot be used 00:25:20.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:20.971 EAL: Requested device 0000:3f:01.7 cannot be used 00:25:20.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:20.971 EAL: Requested device 0000:3f:02.0 cannot be used 00:25:20.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:20.971 EAL: Requested device 0000:3f:02.1 cannot be used 00:25:20.971 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:20.971 EAL: Requested device 0000:3f:02.2 cannot be used 00:25:20.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:20.971 EAL: Requested device 0000:3f:02.3 cannot be used 00:25:20.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:20.971 EAL: Requested device 0000:3f:02.4 cannot be used 00:25:20.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:20.971 EAL: Requested device 0000:3f:02.5 cannot be used 00:25:20.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:20.971 EAL: Requested device 0000:3f:02.6 cannot be used 00:25:20.971 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:20.971 EAL: Requested device 0000:3f:02.7 cannot be used 00:25:20.971 [2024-07-25 11:07:27.967641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.230 [2024-07-25 11:07:28.232801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.489 [2024-07-25 11:07:28.567523] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:21.489 [2024-07-25 11:07:28.567558] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:21.789 11:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:21.789 11:07:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:25:21.789 11:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:22.054 [2024-07-25 11:07:28.950502] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:22.054 [2024-07-25 11:07:28.950557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 
doesn't exist now 00:25:22.054 [2024-07-25 11:07:28.950572] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:22.054 [2024-07-25 11:07:28.950588] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:22.054 [2024-07-25 11:07:28.950600] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:22.054 [2024-07-25 11:07:28.950615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:22.054 [2024-07-25 11:07:28.950626] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:22.054 [2024-07-25 11:07:28.950642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:22.054 11:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:22.054 11:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:22.054 11:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:22.054 11:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:22.054 11:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:22.054 11:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:22.054 11:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:22.054 11:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:22.054 11:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:22.054 11:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:22.054 11:07:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:22.054 11:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:22.313 11:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:22.313 "name": "Existed_Raid", 00:25:22.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:22.313 "strip_size_kb": 0, 00:25:22.313 "state": "configuring", 00:25:22.313 "raid_level": "raid1", 00:25:22.313 "superblock": false, 00:25:22.313 "num_base_bdevs": 4, 00:25:22.313 "num_base_bdevs_discovered": 0, 00:25:22.313 "num_base_bdevs_operational": 4, 00:25:22.313 "base_bdevs_list": [ 00:25:22.313 { 00:25:22.313 "name": "BaseBdev1", 00:25:22.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:22.313 "is_configured": false, 00:25:22.313 "data_offset": 0, 00:25:22.313 "data_size": 0 00:25:22.313 }, 00:25:22.313 { 00:25:22.313 "name": "BaseBdev2", 00:25:22.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:22.313 "is_configured": false, 00:25:22.313 "data_offset": 0, 00:25:22.313 "data_size": 0 00:25:22.313 }, 00:25:22.313 { 00:25:22.313 "name": "BaseBdev3", 00:25:22.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:22.313 "is_configured": false, 00:25:22.313 "data_offset": 0, 00:25:22.313 "data_size": 0 00:25:22.313 }, 00:25:22.313 { 00:25:22.313 "name": "BaseBdev4", 00:25:22.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:22.313 "is_configured": false, 00:25:22.313 "data_offset": 0, 00:25:22.313 "data_size": 0 00:25:22.313 } 00:25:22.313 ] 00:25:22.313 }' 00:25:22.313 11:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:22.313 11:07:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.880 11:07:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:22.880 [2024-07-25 11:07:29.916954] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:22.880 [2024-07-25 11:07:29.916997] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:25:22.880 11:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:23.139 [2024-07-25 11:07:30.113551] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:23.139 [2024-07-25 11:07:30.113598] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:23.139 [2024-07-25 11:07:30.113612] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:23.139 [2024-07-25 11:07:30.113636] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:23.139 [2024-07-25 11:07:30.113648] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:23.139 [2024-07-25 11:07:30.113665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:23.139 [2024-07-25 11:07:30.113677] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:23.139 [2024-07-25 11:07:30.113692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:23.139 11:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:23.397 [2024-07-25 11:07:30.398459] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:23.397 BaseBdev1 00:25:23.397 11:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:25:23.397 11:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:25:23.397 11:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:23.397 11:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:23.397 11:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:23.397 11:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:23.397 11:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:23.656 11:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:23.916 [ 00:25:23.916 { 00:25:23.916 "name": "BaseBdev1", 00:25:23.916 "aliases": [ 00:25:23.916 "e8c124af-2403-4d0a-8388-77705b1bdbae" 00:25:23.916 ], 00:25:23.916 "product_name": "Malloc disk", 00:25:23.916 "block_size": 512, 00:25:23.916 "num_blocks": 65536, 00:25:23.916 "uuid": "e8c124af-2403-4d0a-8388-77705b1bdbae", 00:25:23.916 "assigned_rate_limits": { 00:25:23.916 "rw_ios_per_sec": 0, 00:25:23.916 "rw_mbytes_per_sec": 0, 00:25:23.916 "r_mbytes_per_sec": 0, 00:25:23.916 "w_mbytes_per_sec": 0 00:25:23.916 }, 00:25:23.916 "claimed": true, 00:25:23.916 "claim_type": "exclusive_write", 00:25:23.916 "zoned": false, 00:25:23.916 "supported_io_types": { 00:25:23.916 "read": true, 00:25:23.916 "write": true, 00:25:23.916 "unmap": true, 00:25:23.916 "flush": true, 00:25:23.916 
"reset": true, 00:25:23.916 "nvme_admin": false, 00:25:23.916 "nvme_io": false, 00:25:23.916 "nvme_io_md": false, 00:25:23.916 "write_zeroes": true, 00:25:23.916 "zcopy": true, 00:25:23.916 "get_zone_info": false, 00:25:23.916 "zone_management": false, 00:25:23.916 "zone_append": false, 00:25:23.916 "compare": false, 00:25:23.916 "compare_and_write": false, 00:25:23.916 "abort": true, 00:25:23.916 "seek_hole": false, 00:25:23.916 "seek_data": false, 00:25:23.916 "copy": true, 00:25:23.916 "nvme_iov_md": false 00:25:23.916 }, 00:25:23.916 "memory_domains": [ 00:25:23.916 { 00:25:23.916 "dma_device_id": "system", 00:25:23.916 "dma_device_type": 1 00:25:23.916 }, 00:25:23.916 { 00:25:23.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:23.916 "dma_device_type": 2 00:25:23.916 } 00:25:23.916 ], 00:25:23.916 "driver_specific": {} 00:25:23.916 } 00:25:23.916 ] 00:25:23.916 11:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:23.916 11:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:23.916 11:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:23.916 11:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:23.916 11:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:23.916 11:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:23.916 11:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:23.916 11:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:23.916 11:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:23.916 11:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs_discovered 00:25:23.916 11:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:23.916 11:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:23.916 11:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:23.916 11:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:23.916 "name": "Existed_Raid", 00:25:23.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:23.916 "strip_size_kb": 0, 00:25:23.916 "state": "configuring", 00:25:23.916 "raid_level": "raid1", 00:25:23.916 "superblock": false, 00:25:23.916 "num_base_bdevs": 4, 00:25:23.916 "num_base_bdevs_discovered": 1, 00:25:23.916 "num_base_bdevs_operational": 4, 00:25:23.916 "base_bdevs_list": [ 00:25:23.916 { 00:25:23.916 "name": "BaseBdev1", 00:25:23.916 "uuid": "e8c124af-2403-4d0a-8388-77705b1bdbae", 00:25:23.916 "is_configured": true, 00:25:23.916 "data_offset": 0, 00:25:23.916 "data_size": 65536 00:25:23.916 }, 00:25:23.916 { 00:25:23.916 "name": "BaseBdev2", 00:25:23.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:23.916 "is_configured": false, 00:25:23.916 "data_offset": 0, 00:25:23.916 "data_size": 0 00:25:23.916 }, 00:25:23.916 { 00:25:23.916 "name": "BaseBdev3", 00:25:23.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:23.916 "is_configured": false, 00:25:23.916 "data_offset": 0, 00:25:23.916 "data_size": 0 00:25:23.916 }, 00:25:23.916 { 00:25:23.916 "name": "BaseBdev4", 00:25:23.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:23.916 "is_configured": false, 00:25:23.916 "data_offset": 0, 00:25:23.916 "data_size": 0 00:25:23.916 } 00:25:23.916 ] 00:25:23.916 }' 00:25:23.916 11:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:25:23.916 11:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:24.484 11:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:24.742 [2024-07-25 11:07:31.762172] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:24.742 [2024-07-25 11:07:31.762227] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:25:24.742 11:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:25.001 [2024-07-25 11:07:31.990861] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:25.001 [2024-07-25 11:07:31.993192] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:25.001 [2024-07-25 11:07:31.993246] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:25.001 [2024-07-25 11:07:31.993260] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:25.001 [2024-07-25 11:07:31.993276] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:25.001 [2024-07-25 11:07:31.993288] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:25.001 [2024-07-25 11:07:31.993307] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:25.001 11:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:25:25.001 11:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:25.001 11:07:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:25.001 11:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:25.001 11:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:25.001 11:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:25.001 11:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:25.001 11:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:25.001 11:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:25.001 11:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:25.001 11:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:25.001 11:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:25.001 11:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.001 11:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:25.260 11:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:25.260 "name": "Existed_Raid", 00:25:25.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:25.260 "strip_size_kb": 0, 00:25:25.260 "state": "configuring", 00:25:25.260 "raid_level": "raid1", 00:25:25.260 "superblock": false, 00:25:25.260 "num_base_bdevs": 4, 00:25:25.260 "num_base_bdevs_discovered": 1, 00:25:25.260 "num_base_bdevs_operational": 4, 00:25:25.260 "base_bdevs_list": [ 00:25:25.260 { 00:25:25.260 
"name": "BaseBdev1", 00:25:25.260 "uuid": "e8c124af-2403-4d0a-8388-77705b1bdbae", 00:25:25.260 "is_configured": true, 00:25:25.260 "data_offset": 0, 00:25:25.260 "data_size": 65536 00:25:25.260 }, 00:25:25.260 { 00:25:25.260 "name": "BaseBdev2", 00:25:25.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:25.260 "is_configured": false, 00:25:25.260 "data_offset": 0, 00:25:25.260 "data_size": 0 00:25:25.260 }, 00:25:25.260 { 00:25:25.260 "name": "BaseBdev3", 00:25:25.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:25.260 "is_configured": false, 00:25:25.260 "data_offset": 0, 00:25:25.260 "data_size": 0 00:25:25.260 }, 00:25:25.260 { 00:25:25.260 "name": "BaseBdev4", 00:25:25.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:25.260 "is_configured": false, 00:25:25.260 "data_offset": 0, 00:25:25.260 "data_size": 0 00:25:25.260 } 00:25:25.260 ] 00:25:25.260 }' 00:25:25.260 11:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:25.260 11:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:25.827 11:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:26.086 [2024-07-25 11:07:33.075135] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:26.086 BaseBdev2 00:25:26.086 11:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:25:26.086 11:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:25:26.086 11:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:26.086 11:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:26.086 11:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 
-- # [[ -z '' ]] 00:25:26.086 11:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:26.086 11:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:26.345 11:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:26.345 [ 00:25:26.345 { 00:25:26.345 "name": "BaseBdev2", 00:25:26.345 "aliases": [ 00:25:26.345 "6e4cec30-1a84-47c1-a1bd-3f4a9b3ea373" 00:25:26.345 ], 00:25:26.345 "product_name": "Malloc disk", 00:25:26.345 "block_size": 512, 00:25:26.345 "num_blocks": 65536, 00:25:26.345 "uuid": "6e4cec30-1a84-47c1-a1bd-3f4a9b3ea373", 00:25:26.345 "assigned_rate_limits": { 00:25:26.345 "rw_ios_per_sec": 0, 00:25:26.345 "rw_mbytes_per_sec": 0, 00:25:26.345 "r_mbytes_per_sec": 0, 00:25:26.345 "w_mbytes_per_sec": 0 00:25:26.345 }, 00:25:26.345 "claimed": true, 00:25:26.345 "claim_type": "exclusive_write", 00:25:26.345 "zoned": false, 00:25:26.345 "supported_io_types": { 00:25:26.345 "read": true, 00:25:26.345 "write": true, 00:25:26.345 "unmap": true, 00:25:26.345 "flush": true, 00:25:26.345 "reset": true, 00:25:26.345 "nvme_admin": false, 00:25:26.345 "nvme_io": false, 00:25:26.345 "nvme_io_md": false, 00:25:26.345 "write_zeroes": true, 00:25:26.345 "zcopy": true, 00:25:26.345 "get_zone_info": false, 00:25:26.345 "zone_management": false, 00:25:26.345 "zone_append": false, 00:25:26.346 "compare": false, 00:25:26.346 "compare_and_write": false, 00:25:26.346 "abort": true, 00:25:26.346 "seek_hole": false, 00:25:26.346 "seek_data": false, 00:25:26.346 "copy": true, 00:25:26.346 "nvme_iov_md": false 00:25:26.346 }, 00:25:26.346 "memory_domains": [ 00:25:26.346 { 00:25:26.346 "dma_device_id": "system", 00:25:26.346 
"dma_device_type": 1 00:25:26.346 }, 00:25:26.346 { 00:25:26.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:26.346 "dma_device_type": 2 00:25:26.346 } 00:25:26.346 ], 00:25:26.346 "driver_specific": {} 00:25:26.346 } 00:25:26.346 ] 00:25:26.346 11:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:26.346 11:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:26.346 11:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:26.346 11:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:26.346 11:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:26.346 11:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:26.346 11:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:26.346 11:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:26.346 11:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:26.346 11:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:26.346 11:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:26.346 11:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:26.346 11:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:26.346 11:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:26.346 11:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:25:26.605 11:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:26.605 "name": "Existed_Raid", 00:25:26.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:26.605 "strip_size_kb": 0, 00:25:26.605 "state": "configuring", 00:25:26.605 "raid_level": "raid1", 00:25:26.605 "superblock": false, 00:25:26.605 "num_base_bdevs": 4, 00:25:26.605 "num_base_bdevs_discovered": 2, 00:25:26.605 "num_base_bdevs_operational": 4, 00:25:26.605 "base_bdevs_list": [ 00:25:26.605 { 00:25:26.605 "name": "BaseBdev1", 00:25:26.605 "uuid": "e8c124af-2403-4d0a-8388-77705b1bdbae", 00:25:26.605 "is_configured": true, 00:25:26.605 "data_offset": 0, 00:25:26.605 "data_size": 65536 00:25:26.605 }, 00:25:26.605 { 00:25:26.605 "name": "BaseBdev2", 00:25:26.605 "uuid": "6e4cec30-1a84-47c1-a1bd-3f4a9b3ea373", 00:25:26.605 "is_configured": true, 00:25:26.605 "data_offset": 0, 00:25:26.605 "data_size": 65536 00:25:26.605 }, 00:25:26.605 { 00:25:26.605 "name": "BaseBdev3", 00:25:26.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:26.605 "is_configured": false, 00:25:26.605 "data_offset": 0, 00:25:26.605 "data_size": 0 00:25:26.605 }, 00:25:26.605 { 00:25:26.605 "name": "BaseBdev4", 00:25:26.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:26.605 "is_configured": false, 00:25:26.605 "data_offset": 0, 00:25:26.605 "data_size": 0 00:25:26.605 } 00:25:26.605 ] 00:25:26.605 }' 00:25:26.605 11:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:26.605 11:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.171 11:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:27.430 [2024-07-25 11:07:34.504421] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:25:27.430 BaseBdev3 00:25:27.430 11:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:25:27.430 11:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:25:27.430 11:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:27.430 11:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:27.430 11:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:27.430 11:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:27.430 11:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:27.689 11:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:27.948 [ 00:25:27.948 { 00:25:27.948 "name": "BaseBdev3", 00:25:27.948 "aliases": [ 00:25:27.948 "265ec6dd-c907-4e54-9be3-8e0a53459f17" 00:25:27.948 ], 00:25:27.948 "product_name": "Malloc disk", 00:25:27.948 "block_size": 512, 00:25:27.948 "num_blocks": 65536, 00:25:27.948 "uuid": "265ec6dd-c907-4e54-9be3-8e0a53459f17", 00:25:27.948 "assigned_rate_limits": { 00:25:27.948 "rw_ios_per_sec": 0, 00:25:27.948 "rw_mbytes_per_sec": 0, 00:25:27.948 "r_mbytes_per_sec": 0, 00:25:27.948 "w_mbytes_per_sec": 0 00:25:27.948 }, 00:25:27.948 "claimed": true, 00:25:27.948 "claim_type": "exclusive_write", 00:25:27.948 "zoned": false, 00:25:27.948 "supported_io_types": { 00:25:27.948 "read": true, 00:25:27.948 "write": true, 00:25:27.948 "unmap": true, 00:25:27.948 "flush": true, 00:25:27.948 "reset": true, 00:25:27.948 "nvme_admin": false, 00:25:27.948 "nvme_io": 
false, 00:25:27.948 "nvme_io_md": false, 00:25:27.948 "write_zeroes": true, 00:25:27.948 "zcopy": true, 00:25:27.948 "get_zone_info": false, 00:25:27.948 "zone_management": false, 00:25:27.948 "zone_append": false, 00:25:27.948 "compare": false, 00:25:27.948 "compare_and_write": false, 00:25:27.948 "abort": true, 00:25:27.948 "seek_hole": false, 00:25:27.948 "seek_data": false, 00:25:27.948 "copy": true, 00:25:27.948 "nvme_iov_md": false 00:25:27.948 }, 00:25:27.948 "memory_domains": [ 00:25:27.948 { 00:25:27.948 "dma_device_id": "system", 00:25:27.948 "dma_device_type": 1 00:25:27.948 }, 00:25:27.948 { 00:25:27.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:27.948 "dma_device_type": 2 00:25:27.948 } 00:25:27.948 ], 00:25:27.948 "driver_specific": {} 00:25:27.948 } 00:25:27.948 ] 00:25:27.948 11:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:27.948 11:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:27.948 11:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:27.948 11:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:27.948 11:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:27.948 11:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:27.948 11:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:27.948 11:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:27.948 11:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:27.948 11:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:27.948 11:07:34 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:27.948 11:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:27.948 11:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:27.948 11:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:27.948 11:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:28.207 11:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:28.207 "name": "Existed_Raid", 00:25:28.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.207 "strip_size_kb": 0, 00:25:28.207 "state": "configuring", 00:25:28.207 "raid_level": "raid1", 00:25:28.207 "superblock": false, 00:25:28.207 "num_base_bdevs": 4, 00:25:28.207 "num_base_bdevs_discovered": 3, 00:25:28.207 "num_base_bdevs_operational": 4, 00:25:28.207 "base_bdevs_list": [ 00:25:28.207 { 00:25:28.207 "name": "BaseBdev1", 00:25:28.207 "uuid": "e8c124af-2403-4d0a-8388-77705b1bdbae", 00:25:28.207 "is_configured": true, 00:25:28.207 "data_offset": 0, 00:25:28.207 "data_size": 65536 00:25:28.207 }, 00:25:28.207 { 00:25:28.207 "name": "BaseBdev2", 00:25:28.207 "uuid": "6e4cec30-1a84-47c1-a1bd-3f4a9b3ea373", 00:25:28.207 "is_configured": true, 00:25:28.207 "data_offset": 0, 00:25:28.207 "data_size": 65536 00:25:28.207 }, 00:25:28.207 { 00:25:28.207 "name": "BaseBdev3", 00:25:28.207 "uuid": "265ec6dd-c907-4e54-9be3-8e0a53459f17", 00:25:28.207 "is_configured": true, 00:25:28.207 "data_offset": 0, 00:25:28.207 "data_size": 65536 00:25:28.207 }, 00:25:28.207 { 00:25:28.207 "name": "BaseBdev4", 00:25:28.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.207 "is_configured": false, 00:25:28.207 "data_offset": 0, 00:25:28.207 "data_size": 0 00:25:28.207 } 
00:25:28.207 ] 00:25:28.207 }' 00:25:28.207 11:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:28.207 11:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:28.773 11:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:29.033 [2024-07-25 11:07:35.977857] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:29.033 [2024-07-25 11:07:35.977910] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:25:29.033 [2024-07-25 11:07:35.977926] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:25:29.033 [2024-07-25 11:07:35.978268] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:25:29.033 [2024-07-25 11:07:35.978535] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:25:29.033 [2024-07-25 11:07:35.978554] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:25:29.033 [2024-07-25 11:07:35.978865] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:29.033 BaseBdev4 00:25:29.033 11:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:25:29.033 11:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:25:29.033 11:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:29.033 11:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:29.033 11:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:29.033 11:07:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:29.033 11:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:29.293 11:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:29.293 [ 00:25:29.293 { 00:25:29.293 "name": "BaseBdev4", 00:25:29.293 "aliases": [ 00:25:29.293 "f6fb7f3a-1d2e-4027-bd01-cb9230b35080" 00:25:29.293 ], 00:25:29.293 "product_name": "Malloc disk", 00:25:29.293 "block_size": 512, 00:25:29.293 "num_blocks": 65536, 00:25:29.293 "uuid": "f6fb7f3a-1d2e-4027-bd01-cb9230b35080", 00:25:29.293 "assigned_rate_limits": { 00:25:29.293 "rw_ios_per_sec": 0, 00:25:29.293 "rw_mbytes_per_sec": 0, 00:25:29.293 "r_mbytes_per_sec": 0, 00:25:29.293 "w_mbytes_per_sec": 0 00:25:29.293 }, 00:25:29.293 "claimed": true, 00:25:29.293 "claim_type": "exclusive_write", 00:25:29.293 "zoned": false, 00:25:29.293 "supported_io_types": { 00:25:29.293 "read": true, 00:25:29.293 "write": true, 00:25:29.293 "unmap": true, 00:25:29.293 "flush": true, 00:25:29.293 "reset": true, 00:25:29.293 "nvme_admin": false, 00:25:29.293 "nvme_io": false, 00:25:29.293 "nvme_io_md": false, 00:25:29.293 "write_zeroes": true, 00:25:29.293 "zcopy": true, 00:25:29.293 "get_zone_info": false, 00:25:29.293 "zone_management": false, 00:25:29.293 "zone_append": false, 00:25:29.293 "compare": false, 00:25:29.293 "compare_and_write": false, 00:25:29.293 "abort": true, 00:25:29.293 "seek_hole": false, 00:25:29.293 "seek_data": false, 00:25:29.293 "copy": true, 00:25:29.293 "nvme_iov_md": false 00:25:29.293 }, 00:25:29.293 "memory_domains": [ 00:25:29.293 { 00:25:29.293 "dma_device_id": "system", 00:25:29.293 "dma_device_type": 1 00:25:29.293 }, 00:25:29.293 { 00:25:29.293 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:25:29.293 "dma_device_type": 2 00:25:29.293 } 00:25:29.293 ], 00:25:29.293 "driver_specific": {} 00:25:29.293 } 00:25:29.293 ] 00:25:29.293 11:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:29.293 11:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:29.293 11:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:29.293 11:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:25:29.293 11:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:29.293 11:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:29.293 11:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:29.293 11:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:29.293 11:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:29.293 11:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:29.293 11:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:29.293 11:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:29.293 11:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:29.293 11:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:29.293 11:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:29.551 11:07:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:29.551 "name": "Existed_Raid", 00:25:29.551 "uuid": "bf7561af-7e85-43c5-86e4-b01095c3cf8f", 00:25:29.551 "strip_size_kb": 0, 00:25:29.551 "state": "online", 00:25:29.551 "raid_level": "raid1", 00:25:29.551 "superblock": false, 00:25:29.551 "num_base_bdevs": 4, 00:25:29.551 "num_base_bdevs_discovered": 4, 00:25:29.551 "num_base_bdevs_operational": 4, 00:25:29.551 "base_bdevs_list": [ 00:25:29.551 { 00:25:29.551 "name": "BaseBdev1", 00:25:29.551 "uuid": "e8c124af-2403-4d0a-8388-77705b1bdbae", 00:25:29.551 "is_configured": true, 00:25:29.551 "data_offset": 0, 00:25:29.551 "data_size": 65536 00:25:29.551 }, 00:25:29.551 { 00:25:29.551 "name": "BaseBdev2", 00:25:29.551 "uuid": "6e4cec30-1a84-47c1-a1bd-3f4a9b3ea373", 00:25:29.551 "is_configured": true, 00:25:29.551 "data_offset": 0, 00:25:29.551 "data_size": 65536 00:25:29.551 }, 00:25:29.551 { 00:25:29.551 "name": "BaseBdev3", 00:25:29.551 "uuid": "265ec6dd-c907-4e54-9be3-8e0a53459f17", 00:25:29.551 "is_configured": true, 00:25:29.551 "data_offset": 0, 00:25:29.551 "data_size": 65536 00:25:29.551 }, 00:25:29.551 { 00:25:29.551 "name": "BaseBdev4", 00:25:29.551 "uuid": "f6fb7f3a-1d2e-4027-bd01-cb9230b35080", 00:25:29.551 "is_configured": true, 00:25:29.551 "data_offset": 0, 00:25:29.551 "data_size": 65536 00:25:29.551 } 00:25:29.551 ] 00:25:29.551 }' 00:25:29.551 11:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:29.551 11:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:30.119 11:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:25:30.119 11:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:30.119 11:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:30.119 11:07:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:30.119 11:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:30.119 11:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:25:30.119 11:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:30.119 11:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:30.378 [2024-07-25 11:07:37.362111] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:30.378 11:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:30.378 "name": "Existed_Raid", 00:25:30.378 "aliases": [ 00:25:30.378 "bf7561af-7e85-43c5-86e4-b01095c3cf8f" 00:25:30.378 ], 00:25:30.378 "product_name": "Raid Volume", 00:25:30.378 "block_size": 512, 00:25:30.378 "num_blocks": 65536, 00:25:30.378 "uuid": "bf7561af-7e85-43c5-86e4-b01095c3cf8f", 00:25:30.378 "assigned_rate_limits": { 00:25:30.378 "rw_ios_per_sec": 0, 00:25:30.378 "rw_mbytes_per_sec": 0, 00:25:30.378 "r_mbytes_per_sec": 0, 00:25:30.378 "w_mbytes_per_sec": 0 00:25:30.378 }, 00:25:30.378 "claimed": false, 00:25:30.378 "zoned": false, 00:25:30.378 "supported_io_types": { 00:25:30.378 "read": true, 00:25:30.378 "write": true, 00:25:30.378 "unmap": false, 00:25:30.378 "flush": false, 00:25:30.378 "reset": true, 00:25:30.378 "nvme_admin": false, 00:25:30.378 "nvme_io": false, 00:25:30.378 "nvme_io_md": false, 00:25:30.378 "write_zeroes": true, 00:25:30.378 "zcopy": false, 00:25:30.378 "get_zone_info": false, 00:25:30.378 "zone_management": false, 00:25:30.378 "zone_append": false, 00:25:30.378 "compare": false, 00:25:30.378 "compare_and_write": false, 00:25:30.378 "abort": false, 00:25:30.378 "seek_hole": false, 00:25:30.378 "seek_data": 
false, 00:25:30.378 "copy": false, 00:25:30.378 "nvme_iov_md": false 00:25:30.378 }, 00:25:30.378 "memory_domains": [ 00:25:30.378 { 00:25:30.378 "dma_device_id": "system", 00:25:30.378 "dma_device_type": 1 00:25:30.378 }, 00:25:30.378 { 00:25:30.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:30.378 "dma_device_type": 2 00:25:30.378 }, 00:25:30.378 { 00:25:30.378 "dma_device_id": "system", 00:25:30.378 "dma_device_type": 1 00:25:30.378 }, 00:25:30.378 { 00:25:30.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:30.378 "dma_device_type": 2 00:25:30.378 }, 00:25:30.378 { 00:25:30.378 "dma_device_id": "system", 00:25:30.378 "dma_device_type": 1 00:25:30.378 }, 00:25:30.378 { 00:25:30.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:30.378 "dma_device_type": 2 00:25:30.378 }, 00:25:30.378 { 00:25:30.378 "dma_device_id": "system", 00:25:30.378 "dma_device_type": 1 00:25:30.378 }, 00:25:30.378 { 00:25:30.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:30.378 "dma_device_type": 2 00:25:30.378 } 00:25:30.378 ], 00:25:30.378 "driver_specific": { 00:25:30.378 "raid": { 00:25:30.378 "uuid": "bf7561af-7e85-43c5-86e4-b01095c3cf8f", 00:25:30.378 "strip_size_kb": 0, 00:25:30.378 "state": "online", 00:25:30.378 "raid_level": "raid1", 00:25:30.378 "superblock": false, 00:25:30.378 "num_base_bdevs": 4, 00:25:30.379 "num_base_bdevs_discovered": 4, 00:25:30.379 "num_base_bdevs_operational": 4, 00:25:30.379 "base_bdevs_list": [ 00:25:30.379 { 00:25:30.379 "name": "BaseBdev1", 00:25:30.379 "uuid": "e8c124af-2403-4d0a-8388-77705b1bdbae", 00:25:30.379 "is_configured": true, 00:25:30.379 "data_offset": 0, 00:25:30.379 "data_size": 65536 00:25:30.379 }, 00:25:30.379 { 00:25:30.379 "name": "BaseBdev2", 00:25:30.379 "uuid": "6e4cec30-1a84-47c1-a1bd-3f4a9b3ea373", 00:25:30.379 "is_configured": true, 00:25:30.379 "data_offset": 0, 00:25:30.379 "data_size": 65536 00:25:30.379 }, 00:25:30.379 { 00:25:30.379 "name": "BaseBdev3", 00:25:30.379 "uuid": 
"265ec6dd-c907-4e54-9be3-8e0a53459f17", 00:25:30.379 "is_configured": true, 00:25:30.379 "data_offset": 0, 00:25:30.379 "data_size": 65536 00:25:30.379 }, 00:25:30.379 { 00:25:30.379 "name": "BaseBdev4", 00:25:30.379 "uuid": "f6fb7f3a-1d2e-4027-bd01-cb9230b35080", 00:25:30.379 "is_configured": true, 00:25:30.379 "data_offset": 0, 00:25:30.379 "data_size": 65536 00:25:30.379 } 00:25:30.379 ] 00:25:30.379 } 00:25:30.379 } 00:25:30.379 }' 00:25:30.379 11:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:30.379 11:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:25:30.379 BaseBdev2 00:25:30.379 BaseBdev3 00:25:30.379 BaseBdev4' 00:25:30.379 11:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:30.379 11:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:25:30.379 11:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:30.645 11:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:30.645 "name": "BaseBdev1", 00:25:30.645 "aliases": [ 00:25:30.645 "e8c124af-2403-4d0a-8388-77705b1bdbae" 00:25:30.645 ], 00:25:30.645 "product_name": "Malloc disk", 00:25:30.645 "block_size": 512, 00:25:30.645 "num_blocks": 65536, 00:25:30.645 "uuid": "e8c124af-2403-4d0a-8388-77705b1bdbae", 00:25:30.645 "assigned_rate_limits": { 00:25:30.645 "rw_ios_per_sec": 0, 00:25:30.645 "rw_mbytes_per_sec": 0, 00:25:30.645 "r_mbytes_per_sec": 0, 00:25:30.645 "w_mbytes_per_sec": 0 00:25:30.645 }, 00:25:30.645 "claimed": true, 00:25:30.645 "claim_type": "exclusive_write", 00:25:30.645 "zoned": false, 00:25:30.645 "supported_io_types": { 00:25:30.645 "read": true, 00:25:30.645 
"write": true, 00:25:30.645 "unmap": true, 00:25:30.645 "flush": true, 00:25:30.645 "reset": true, 00:25:30.645 "nvme_admin": false, 00:25:30.645 "nvme_io": false, 00:25:30.645 "nvme_io_md": false, 00:25:30.645 "write_zeroes": true, 00:25:30.645 "zcopy": true, 00:25:30.645 "get_zone_info": false, 00:25:30.645 "zone_management": false, 00:25:30.645 "zone_append": false, 00:25:30.645 "compare": false, 00:25:30.645 "compare_and_write": false, 00:25:30.645 "abort": true, 00:25:30.645 "seek_hole": false, 00:25:30.645 "seek_data": false, 00:25:30.645 "copy": true, 00:25:30.645 "nvme_iov_md": false 00:25:30.645 }, 00:25:30.645 "memory_domains": [ 00:25:30.645 { 00:25:30.645 "dma_device_id": "system", 00:25:30.645 "dma_device_type": 1 00:25:30.645 }, 00:25:30.645 { 00:25:30.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:30.645 "dma_device_type": 2 00:25:30.645 } 00:25:30.645 ], 00:25:30.645 "driver_specific": {} 00:25:30.645 }' 00:25:30.645 11:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:30.645 11:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:30.645 11:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:30.645 11:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:30.904 11:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:30.904 11:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:30.904 11:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:30.904 11:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:30.904 11:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:30.904 11:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:30.904 11:07:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:30.904 11:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:30.904 11:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:30.904 11:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:30.904 11:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:31.163 11:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:31.163 "name": "BaseBdev2", 00:25:31.163 "aliases": [ 00:25:31.163 "6e4cec30-1a84-47c1-a1bd-3f4a9b3ea373" 00:25:31.163 ], 00:25:31.163 "product_name": "Malloc disk", 00:25:31.163 "block_size": 512, 00:25:31.163 "num_blocks": 65536, 00:25:31.163 "uuid": "6e4cec30-1a84-47c1-a1bd-3f4a9b3ea373", 00:25:31.163 "assigned_rate_limits": { 00:25:31.163 "rw_ios_per_sec": 0, 00:25:31.163 "rw_mbytes_per_sec": 0, 00:25:31.163 "r_mbytes_per_sec": 0, 00:25:31.163 "w_mbytes_per_sec": 0 00:25:31.163 }, 00:25:31.163 "claimed": true, 00:25:31.163 "claim_type": "exclusive_write", 00:25:31.163 "zoned": false, 00:25:31.163 "supported_io_types": { 00:25:31.163 "read": true, 00:25:31.163 "write": true, 00:25:31.163 "unmap": true, 00:25:31.163 "flush": true, 00:25:31.163 "reset": true, 00:25:31.163 "nvme_admin": false, 00:25:31.163 "nvme_io": false, 00:25:31.163 "nvme_io_md": false, 00:25:31.163 "write_zeroes": true, 00:25:31.163 "zcopy": true, 00:25:31.163 "get_zone_info": false, 00:25:31.163 "zone_management": false, 00:25:31.163 "zone_append": false, 00:25:31.163 "compare": false, 00:25:31.163 "compare_and_write": false, 00:25:31.163 "abort": true, 00:25:31.163 "seek_hole": false, 00:25:31.163 "seek_data": false, 00:25:31.163 "copy": true, 00:25:31.163 "nvme_iov_md": false 00:25:31.163 }, 
00:25:31.163 "memory_domains": [ 00:25:31.163 { 00:25:31.163 "dma_device_id": "system", 00:25:31.163 "dma_device_type": 1 00:25:31.163 }, 00:25:31.163 { 00:25:31.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:31.163 "dma_device_type": 2 00:25:31.163 } 00:25:31.163 ], 00:25:31.163 "driver_specific": {} 00:25:31.163 }' 00:25:31.163 11:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:31.163 11:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:31.422 11:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:31.422 11:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:31.422 11:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:31.422 11:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:31.422 11:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:31.422 11:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:31.422 11:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:31.422 11:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:31.422 11:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:31.681 11:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:31.681 11:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:31.681 11:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:31.681 11:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:31.681 11:07:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:31.681 "name": "BaseBdev3", 00:25:31.681 "aliases": [ 00:25:31.681 "265ec6dd-c907-4e54-9be3-8e0a53459f17" 00:25:31.681 ], 00:25:31.681 "product_name": "Malloc disk", 00:25:31.681 "block_size": 512, 00:25:31.681 "num_blocks": 65536, 00:25:31.681 "uuid": "265ec6dd-c907-4e54-9be3-8e0a53459f17", 00:25:31.681 "assigned_rate_limits": { 00:25:31.681 "rw_ios_per_sec": 0, 00:25:31.681 "rw_mbytes_per_sec": 0, 00:25:31.681 "r_mbytes_per_sec": 0, 00:25:31.681 "w_mbytes_per_sec": 0 00:25:31.681 }, 00:25:31.681 "claimed": true, 00:25:31.681 "claim_type": "exclusive_write", 00:25:31.681 "zoned": false, 00:25:31.681 "supported_io_types": { 00:25:31.681 "read": true, 00:25:31.681 "write": true, 00:25:31.681 "unmap": true, 00:25:31.681 "flush": true, 00:25:31.681 "reset": true, 00:25:31.681 "nvme_admin": false, 00:25:31.681 "nvme_io": false, 00:25:31.681 "nvme_io_md": false, 00:25:31.681 "write_zeroes": true, 00:25:31.681 "zcopy": true, 00:25:31.681 "get_zone_info": false, 00:25:31.681 "zone_management": false, 00:25:31.681 "zone_append": false, 00:25:31.681 "compare": false, 00:25:31.681 "compare_and_write": false, 00:25:31.681 "abort": true, 00:25:31.681 "seek_hole": false, 00:25:31.681 "seek_data": false, 00:25:31.681 "copy": true, 00:25:31.681 "nvme_iov_md": false 00:25:31.681 }, 00:25:31.681 "memory_domains": [ 00:25:31.681 { 00:25:31.681 "dma_device_id": "system", 00:25:31.681 "dma_device_type": 1 00:25:31.681 }, 00:25:31.681 { 00:25:31.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:31.681 "dma_device_type": 2 00:25:31.681 } 00:25:31.681 ], 00:25:31.681 "driver_specific": {} 00:25:31.681 }' 00:25:31.681 11:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:31.941 11:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:31.941 11:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 
== 512 ]] 00:25:31.941 11:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:31.941 11:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:31.941 11:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:31.941 11:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:31.941 11:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:31.941 11:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:31.941 11:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:32.201 11:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:32.201 11:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:32.201 11:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:32.201 11:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:25:32.201 11:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:32.462 11:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:32.462 "name": "BaseBdev4", 00:25:32.462 "aliases": [ 00:25:32.462 "f6fb7f3a-1d2e-4027-bd01-cb9230b35080" 00:25:32.462 ], 00:25:32.462 "product_name": "Malloc disk", 00:25:32.462 "block_size": 512, 00:25:32.462 "num_blocks": 65536, 00:25:32.462 "uuid": "f6fb7f3a-1d2e-4027-bd01-cb9230b35080", 00:25:32.462 "assigned_rate_limits": { 00:25:32.462 "rw_ios_per_sec": 0, 00:25:32.462 "rw_mbytes_per_sec": 0, 00:25:32.462 "r_mbytes_per_sec": 0, 00:25:32.462 "w_mbytes_per_sec": 0 00:25:32.462 }, 00:25:32.462 "claimed": true, 00:25:32.462 
"claim_type": "exclusive_write", 00:25:32.462 "zoned": false, 00:25:32.462 "supported_io_types": { 00:25:32.462 "read": true, 00:25:32.462 "write": true, 00:25:32.462 "unmap": true, 00:25:32.462 "flush": true, 00:25:32.462 "reset": true, 00:25:32.462 "nvme_admin": false, 00:25:32.462 "nvme_io": false, 00:25:32.462 "nvme_io_md": false, 00:25:32.462 "write_zeroes": true, 00:25:32.462 "zcopy": true, 00:25:32.462 "get_zone_info": false, 00:25:32.462 "zone_management": false, 00:25:32.462 "zone_append": false, 00:25:32.462 "compare": false, 00:25:32.462 "compare_and_write": false, 00:25:32.462 "abort": true, 00:25:32.462 "seek_hole": false, 00:25:32.462 "seek_data": false, 00:25:32.462 "copy": true, 00:25:32.462 "nvme_iov_md": false 00:25:32.462 }, 00:25:32.462 "memory_domains": [ 00:25:32.462 { 00:25:32.462 "dma_device_id": "system", 00:25:32.462 "dma_device_type": 1 00:25:32.462 }, 00:25:32.462 { 00:25:32.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:32.462 "dma_device_type": 2 00:25:32.462 } 00:25:32.462 ], 00:25:32.462 "driver_specific": {} 00:25:32.462 }' 00:25:32.462 11:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:32.462 11:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:32.462 11:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:32.462 11:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:32.462 11:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:32.462 11:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:32.462 11:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:32.462 11:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:32.722 11:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == 
null ]] 00:25:32.722 11:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:32.722 11:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:32.723 11:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:32.723 11:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:33.044 [2024-07-25 11:07:39.912615] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:33.044 11:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:25:33.044 11:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:25:33.044 11:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:33.044 11:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:25:33.044 11:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:25:33.044 11:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:25:33.044 11:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:33.044 11:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:33.045 11:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:33.045 11:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:33.045 11:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:33.045 11:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:33.045 11:07:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:33.045 11:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:33.045 11:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:33.045 11:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:33.045 11:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:33.303 11:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:33.303 "name": "Existed_Raid", 00:25:33.303 "uuid": "bf7561af-7e85-43c5-86e4-b01095c3cf8f", 00:25:33.303 "strip_size_kb": 0, 00:25:33.303 "state": "online", 00:25:33.303 "raid_level": "raid1", 00:25:33.303 "superblock": false, 00:25:33.303 "num_base_bdevs": 4, 00:25:33.303 "num_base_bdevs_discovered": 3, 00:25:33.303 "num_base_bdevs_operational": 3, 00:25:33.303 "base_bdevs_list": [ 00:25:33.303 { 00:25:33.303 "name": null, 00:25:33.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:33.303 "is_configured": false, 00:25:33.303 "data_offset": 0, 00:25:33.303 "data_size": 65536 00:25:33.303 }, 00:25:33.303 { 00:25:33.303 "name": "BaseBdev2", 00:25:33.303 "uuid": "6e4cec30-1a84-47c1-a1bd-3f4a9b3ea373", 00:25:33.303 "is_configured": true, 00:25:33.303 "data_offset": 0, 00:25:33.303 "data_size": 65536 00:25:33.303 }, 00:25:33.303 { 00:25:33.303 "name": "BaseBdev3", 00:25:33.303 "uuid": "265ec6dd-c907-4e54-9be3-8e0a53459f17", 00:25:33.303 "is_configured": true, 00:25:33.303 "data_offset": 0, 00:25:33.303 "data_size": 65536 00:25:33.303 }, 00:25:33.303 { 00:25:33.303 "name": "BaseBdev4", 00:25:33.303 "uuid": "f6fb7f3a-1d2e-4027-bd01-cb9230b35080", 00:25:33.303 "is_configured": true, 00:25:33.303 "data_offset": 0, 00:25:33.303 
"data_size": 65536 00:25:33.303 } 00:25:33.303 ] 00:25:33.303 }' 00:25:33.303 11:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:33.303 11:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:33.870 11:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:25:33.870 11:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:33.870 11:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:33.870 11:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:34.129 11:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:34.129 11:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:34.129 11:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:34.129 [2024-07-25 11:07:41.218093] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:34.389 11:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:34.389 11:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:34.389 11:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:34.389 11:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:34.671 11:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:34.671 11:07:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:34.671 11:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:25:34.953 [2024-07-25 11:07:41.807015] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:34.953 11:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:34.953 11:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:34.953 11:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:34.953 11:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:35.213 11:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:35.213 11:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:35.213 11:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:25:35.473 [2024-07-25 11:07:42.399310] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:25:35.473 [2024-07-25 11:07:42.399410] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:35.473 [2024-07-25 11:07:42.529269] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:35.473 [2024-07-25 11:07:42.529323] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:35.473 [2024-07-25 11:07:42.529341] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007e80 name Existed_Raid, state offline 00:25:35.473 11:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:35.473 11:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:35.473 11:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:35.473 11:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:25:35.736 11:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:25:35.736 11:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:25:35.736 11:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:25:35.736 11:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:25:35.736 11:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:35.736 11:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:35.995 BaseBdev2 00:25:35.995 11:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:25:35.995 11:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:25:35.995 11:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:35.995 11:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:35.995 11:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:35.995 11:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:25:35.995 11:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:36.255 11:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:36.514 [ 00:25:36.514 { 00:25:36.514 "name": "BaseBdev2", 00:25:36.514 "aliases": [ 00:25:36.514 "c1fff99f-8083-412a-8802-d8a578c0c657" 00:25:36.514 ], 00:25:36.514 "product_name": "Malloc disk", 00:25:36.514 "block_size": 512, 00:25:36.514 "num_blocks": 65536, 00:25:36.514 "uuid": "c1fff99f-8083-412a-8802-d8a578c0c657", 00:25:36.514 "assigned_rate_limits": { 00:25:36.514 "rw_ios_per_sec": 0, 00:25:36.514 "rw_mbytes_per_sec": 0, 00:25:36.514 "r_mbytes_per_sec": 0, 00:25:36.514 "w_mbytes_per_sec": 0 00:25:36.514 }, 00:25:36.514 "claimed": false, 00:25:36.514 "zoned": false, 00:25:36.514 "supported_io_types": { 00:25:36.514 "read": true, 00:25:36.514 "write": true, 00:25:36.514 "unmap": true, 00:25:36.514 "flush": true, 00:25:36.514 "reset": true, 00:25:36.514 "nvme_admin": false, 00:25:36.514 "nvme_io": false, 00:25:36.514 "nvme_io_md": false, 00:25:36.514 "write_zeroes": true, 00:25:36.514 "zcopy": true, 00:25:36.514 "get_zone_info": false, 00:25:36.514 "zone_management": false, 00:25:36.514 "zone_append": false, 00:25:36.514 "compare": false, 00:25:36.514 "compare_and_write": false, 00:25:36.514 "abort": true, 00:25:36.514 "seek_hole": false, 00:25:36.514 "seek_data": false, 00:25:36.514 "copy": true, 00:25:36.514 "nvme_iov_md": false 00:25:36.514 }, 00:25:36.514 "memory_domains": [ 00:25:36.514 { 00:25:36.514 "dma_device_id": "system", 00:25:36.514 "dma_device_type": 1 00:25:36.514 }, 00:25:36.514 { 00:25:36.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:36.514 "dma_device_type": 2 00:25:36.514 } 00:25:36.514 ], 00:25:36.514 
"driver_specific": {} 00:25:36.514 } 00:25:36.514 ] 00:25:36.514 11:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:36.514 11:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:36.514 11:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:36.514 11:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:36.773 BaseBdev3 00:25:36.773 11:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:25:36.773 11:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:25:36.773 11:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:36.773 11:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:36.773 11:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:36.773 11:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:36.773 11:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:37.032 11:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:37.291 [ 00:25:37.291 { 00:25:37.291 "name": "BaseBdev3", 00:25:37.291 "aliases": [ 00:25:37.291 "303ec2f2-2879-450c-80e8-b0888fa0de0d" 00:25:37.291 ], 00:25:37.291 "product_name": "Malloc disk", 00:25:37.291 "block_size": 512, 00:25:37.291 "num_blocks": 65536, 00:25:37.291 "uuid": 
"303ec2f2-2879-450c-80e8-b0888fa0de0d", 00:25:37.291 "assigned_rate_limits": { 00:25:37.291 "rw_ios_per_sec": 0, 00:25:37.291 "rw_mbytes_per_sec": 0, 00:25:37.291 "r_mbytes_per_sec": 0, 00:25:37.291 "w_mbytes_per_sec": 0 00:25:37.291 }, 00:25:37.291 "claimed": false, 00:25:37.291 "zoned": false, 00:25:37.291 "supported_io_types": { 00:25:37.291 "read": true, 00:25:37.291 "write": true, 00:25:37.291 "unmap": true, 00:25:37.291 "flush": true, 00:25:37.291 "reset": true, 00:25:37.291 "nvme_admin": false, 00:25:37.291 "nvme_io": false, 00:25:37.291 "nvme_io_md": false, 00:25:37.291 "write_zeroes": true, 00:25:37.291 "zcopy": true, 00:25:37.291 "get_zone_info": false, 00:25:37.291 "zone_management": false, 00:25:37.291 "zone_append": false, 00:25:37.291 "compare": false, 00:25:37.291 "compare_and_write": false, 00:25:37.291 "abort": true, 00:25:37.291 "seek_hole": false, 00:25:37.291 "seek_data": false, 00:25:37.291 "copy": true, 00:25:37.291 "nvme_iov_md": false 00:25:37.291 }, 00:25:37.291 "memory_domains": [ 00:25:37.291 { 00:25:37.291 "dma_device_id": "system", 00:25:37.291 "dma_device_type": 1 00:25:37.291 }, 00:25:37.291 { 00:25:37.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:37.291 "dma_device_type": 2 00:25:37.291 } 00:25:37.291 ], 00:25:37.291 "driver_specific": {} 00:25:37.291 } 00:25:37.291 ] 00:25:37.291 11:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:37.291 11:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:37.291 11:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:37.291 11:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:37.549 BaseBdev4 00:25:37.549 11:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 
00:25:37.549 11:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:25:37.549 11:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:37.549 11:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:37.549 11:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:37.549 11:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:37.549 11:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:37.808 11:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:38.067 [ 00:25:38.067 { 00:25:38.067 "name": "BaseBdev4", 00:25:38.067 "aliases": [ 00:25:38.067 "3e24c86a-decd-4f8e-9a28-7a3221cba46a" 00:25:38.067 ], 00:25:38.067 "product_name": "Malloc disk", 00:25:38.067 "block_size": 512, 00:25:38.067 "num_blocks": 65536, 00:25:38.067 "uuid": "3e24c86a-decd-4f8e-9a28-7a3221cba46a", 00:25:38.067 "assigned_rate_limits": { 00:25:38.067 "rw_ios_per_sec": 0, 00:25:38.067 "rw_mbytes_per_sec": 0, 00:25:38.067 "r_mbytes_per_sec": 0, 00:25:38.068 "w_mbytes_per_sec": 0 00:25:38.068 }, 00:25:38.068 "claimed": false, 00:25:38.068 "zoned": false, 00:25:38.068 "supported_io_types": { 00:25:38.068 "read": true, 00:25:38.068 "write": true, 00:25:38.068 "unmap": true, 00:25:38.068 "flush": true, 00:25:38.068 "reset": true, 00:25:38.068 "nvme_admin": false, 00:25:38.068 "nvme_io": false, 00:25:38.068 "nvme_io_md": false, 00:25:38.068 "write_zeroes": true, 00:25:38.068 "zcopy": true, 00:25:38.068 "get_zone_info": false, 00:25:38.068 "zone_management": false, 00:25:38.068 
"zone_append": false, 00:25:38.068 "compare": false, 00:25:38.068 "compare_and_write": false, 00:25:38.068 "abort": true, 00:25:38.068 "seek_hole": false, 00:25:38.068 "seek_data": false, 00:25:38.068 "copy": true, 00:25:38.068 "nvme_iov_md": false 00:25:38.068 }, 00:25:38.068 "memory_domains": [ 00:25:38.068 { 00:25:38.068 "dma_device_id": "system", 00:25:38.068 "dma_device_type": 1 00:25:38.068 }, 00:25:38.068 { 00:25:38.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:38.068 "dma_device_type": 2 00:25:38.068 } 00:25:38.068 ], 00:25:38.068 "driver_specific": {} 00:25:38.068 } 00:25:38.068 ] 00:25:38.068 11:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:38.068 11:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:38.068 11:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:38.068 11:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:38.068 [2024-07-25 11:07:45.138201] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:38.068 [2024-07-25 11:07:45.138248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:38.068 [2024-07-25 11:07:45.138276] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:38.068 [2024-07-25 11:07:45.140600] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:38.068 [2024-07-25 11:07:45.140654] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:38.068 11:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:38.068 11:07:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:38.068 11:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:38.068 11:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:38.068 11:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:38.068 11:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:38.068 11:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:38.068 11:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:38.068 11:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:38.068 11:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:38.068 11:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:38.068 11:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:38.327 11:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:38.327 "name": "Existed_Raid", 00:25:38.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:38.327 "strip_size_kb": 0, 00:25:38.327 "state": "configuring", 00:25:38.327 "raid_level": "raid1", 00:25:38.327 "superblock": false, 00:25:38.327 "num_base_bdevs": 4, 00:25:38.327 "num_base_bdevs_discovered": 3, 00:25:38.327 "num_base_bdevs_operational": 4, 00:25:38.327 "base_bdevs_list": [ 00:25:38.327 { 00:25:38.327 "name": "BaseBdev1", 00:25:38.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:38.327 "is_configured": false, 00:25:38.327 "data_offset": 
0, 00:25:38.327 "data_size": 0 00:25:38.327 }, 00:25:38.327 { 00:25:38.327 "name": "BaseBdev2", 00:25:38.327 "uuid": "c1fff99f-8083-412a-8802-d8a578c0c657", 00:25:38.327 "is_configured": true, 00:25:38.327 "data_offset": 0, 00:25:38.327 "data_size": 65536 00:25:38.327 }, 00:25:38.327 { 00:25:38.327 "name": "BaseBdev3", 00:25:38.327 "uuid": "303ec2f2-2879-450c-80e8-b0888fa0de0d", 00:25:38.327 "is_configured": true, 00:25:38.327 "data_offset": 0, 00:25:38.327 "data_size": 65536 00:25:38.327 }, 00:25:38.327 { 00:25:38.327 "name": "BaseBdev4", 00:25:38.327 "uuid": "3e24c86a-decd-4f8e-9a28-7a3221cba46a", 00:25:38.327 "is_configured": true, 00:25:38.327 "data_offset": 0, 00:25:38.327 "data_size": 65536 00:25:38.327 } 00:25:38.327 ] 00:25:38.327 }' 00:25:38.327 11:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:38.327 11:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.895 11:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:25:39.155 [2024-07-25 11:07:46.144887] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:39.155 11:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:39.155 11:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:39.155 11:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:39.155 11:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:39.155 11:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:39.155 11:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 
00:25:39.155 11:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:39.155 11:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:39.155 11:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:39.155 11:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:39.155 11:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:39.155 11:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:39.414 11:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:39.414 "name": "Existed_Raid", 00:25:39.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:39.414 "strip_size_kb": 0, 00:25:39.414 "state": "configuring", 00:25:39.414 "raid_level": "raid1", 00:25:39.414 "superblock": false, 00:25:39.414 "num_base_bdevs": 4, 00:25:39.414 "num_base_bdevs_discovered": 2, 00:25:39.414 "num_base_bdevs_operational": 4, 00:25:39.414 "base_bdevs_list": [ 00:25:39.414 { 00:25:39.414 "name": "BaseBdev1", 00:25:39.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:39.414 "is_configured": false, 00:25:39.414 "data_offset": 0, 00:25:39.414 "data_size": 0 00:25:39.414 }, 00:25:39.414 { 00:25:39.414 "name": null, 00:25:39.414 "uuid": "c1fff99f-8083-412a-8802-d8a578c0c657", 00:25:39.414 "is_configured": false, 00:25:39.414 "data_offset": 0, 00:25:39.414 "data_size": 65536 00:25:39.414 }, 00:25:39.414 { 00:25:39.414 "name": "BaseBdev3", 00:25:39.414 "uuid": "303ec2f2-2879-450c-80e8-b0888fa0de0d", 00:25:39.414 "is_configured": true, 00:25:39.414 "data_offset": 0, 00:25:39.414 "data_size": 65536 00:25:39.414 }, 00:25:39.414 { 00:25:39.414 "name": "BaseBdev4", 00:25:39.414 
"uuid": "3e24c86a-decd-4f8e-9a28-7a3221cba46a", 00:25:39.414 "is_configured": true, 00:25:39.414 "data_offset": 0, 00:25:39.414 "data_size": 65536 00:25:39.414 } 00:25:39.414 ] 00:25:39.414 }' 00:25:39.414 11:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:39.414 11:07:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:39.982 11:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:39.982 11:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:40.242 11:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:25:40.242 11:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:40.501 [2024-07-25 11:07:47.448325] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:40.501 BaseBdev1 00:25:40.501 11:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:25:40.501 11:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:25:40.501 11:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:40.501 11:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:40.501 11:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:40.501 11:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:40.501 11:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:40.760 11:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:41.019 [ 00:25:41.019 { 00:25:41.019 "name": "BaseBdev1", 00:25:41.019 "aliases": [ 00:25:41.019 "90bbd846-dcde-4288-8220-bc6cd4283ccd" 00:25:41.019 ], 00:25:41.019 "product_name": "Malloc disk", 00:25:41.019 "block_size": 512, 00:25:41.019 "num_blocks": 65536, 00:25:41.019 "uuid": "90bbd846-dcde-4288-8220-bc6cd4283ccd", 00:25:41.019 "assigned_rate_limits": { 00:25:41.019 "rw_ios_per_sec": 0, 00:25:41.019 "rw_mbytes_per_sec": 0, 00:25:41.019 "r_mbytes_per_sec": 0, 00:25:41.019 "w_mbytes_per_sec": 0 00:25:41.019 }, 00:25:41.019 "claimed": true, 00:25:41.019 "claim_type": "exclusive_write", 00:25:41.019 "zoned": false, 00:25:41.019 "supported_io_types": { 00:25:41.019 "read": true, 00:25:41.019 "write": true, 00:25:41.019 "unmap": true, 00:25:41.019 "flush": true, 00:25:41.019 "reset": true, 00:25:41.019 "nvme_admin": false, 00:25:41.019 "nvme_io": false, 00:25:41.019 "nvme_io_md": false, 00:25:41.019 "write_zeroes": true, 00:25:41.019 "zcopy": true, 00:25:41.019 "get_zone_info": false, 00:25:41.019 "zone_management": false, 00:25:41.019 "zone_append": false, 00:25:41.019 "compare": false, 00:25:41.019 "compare_and_write": false, 00:25:41.019 "abort": true, 00:25:41.019 "seek_hole": false, 00:25:41.019 "seek_data": false, 00:25:41.019 "copy": true, 00:25:41.019 "nvme_iov_md": false 00:25:41.019 }, 00:25:41.019 "memory_domains": [ 00:25:41.019 { 00:25:41.019 "dma_device_id": "system", 00:25:41.019 "dma_device_type": 1 00:25:41.019 }, 00:25:41.019 { 00:25:41.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:41.019 "dma_device_type": 2 00:25:41.019 } 00:25:41.019 ], 00:25:41.019 "driver_specific": {} 00:25:41.019 } 00:25:41.019 ] 
00:25:41.019 11:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:41.019 11:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:41.019 11:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:41.019 11:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:41.019 11:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:41.019 11:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:41.019 11:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:41.019 11:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:41.019 11:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:41.019 11:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:41.019 11:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:41.019 11:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:41.019 11:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:41.278 11:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:41.278 "name": "Existed_Raid", 00:25:41.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:41.278 "strip_size_kb": 0, 00:25:41.278 "state": "configuring", 00:25:41.278 "raid_level": "raid1", 00:25:41.278 "superblock": false, 00:25:41.278 "num_base_bdevs": 4, 00:25:41.278 
"num_base_bdevs_discovered": 3, 00:25:41.278 "num_base_bdevs_operational": 4, 00:25:41.278 "base_bdevs_list": [ 00:25:41.278 { 00:25:41.278 "name": "BaseBdev1", 00:25:41.278 "uuid": "90bbd846-dcde-4288-8220-bc6cd4283ccd", 00:25:41.278 "is_configured": true, 00:25:41.278 "data_offset": 0, 00:25:41.278 "data_size": 65536 00:25:41.278 }, 00:25:41.278 { 00:25:41.278 "name": null, 00:25:41.278 "uuid": "c1fff99f-8083-412a-8802-d8a578c0c657", 00:25:41.278 "is_configured": false, 00:25:41.278 "data_offset": 0, 00:25:41.278 "data_size": 65536 00:25:41.278 }, 00:25:41.278 { 00:25:41.278 "name": "BaseBdev3", 00:25:41.278 "uuid": "303ec2f2-2879-450c-80e8-b0888fa0de0d", 00:25:41.278 "is_configured": true, 00:25:41.278 "data_offset": 0, 00:25:41.278 "data_size": 65536 00:25:41.278 }, 00:25:41.278 { 00:25:41.278 "name": "BaseBdev4", 00:25:41.278 "uuid": "3e24c86a-decd-4f8e-9a28-7a3221cba46a", 00:25:41.278 "is_configured": true, 00:25:41.278 "data_offset": 0, 00:25:41.278 "data_size": 65536 00:25:41.278 } 00:25:41.278 ] 00:25:41.278 }' 00:25:41.278 11:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:41.278 11:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:41.846 11:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:41.846 11:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:41.846 11:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:25:41.846 11:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:25:42.105 [2024-07-25 11:07:49.144999] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev3 00:25:42.105 11:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:42.105 11:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:42.105 11:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:42.105 11:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:42.105 11:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:42.105 11:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:42.105 11:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:42.105 11:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:42.105 11:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:42.105 11:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:42.105 11:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:42.105 11:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:42.363 11:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:42.363 "name": "Existed_Raid", 00:25:42.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:42.363 "strip_size_kb": 0, 00:25:42.363 "state": "configuring", 00:25:42.363 "raid_level": "raid1", 00:25:42.364 "superblock": false, 00:25:42.364 "num_base_bdevs": 4, 00:25:42.364 "num_base_bdevs_discovered": 2, 00:25:42.364 "num_base_bdevs_operational": 4, 00:25:42.364 "base_bdevs_list": 
[ 00:25:42.364 { 00:25:42.364 "name": "BaseBdev1", 00:25:42.364 "uuid": "90bbd846-dcde-4288-8220-bc6cd4283ccd", 00:25:42.364 "is_configured": true, 00:25:42.364 "data_offset": 0, 00:25:42.364 "data_size": 65536 00:25:42.364 }, 00:25:42.364 { 00:25:42.364 "name": null, 00:25:42.364 "uuid": "c1fff99f-8083-412a-8802-d8a578c0c657", 00:25:42.364 "is_configured": false, 00:25:42.364 "data_offset": 0, 00:25:42.364 "data_size": 65536 00:25:42.364 }, 00:25:42.364 { 00:25:42.364 "name": null, 00:25:42.364 "uuid": "303ec2f2-2879-450c-80e8-b0888fa0de0d", 00:25:42.364 "is_configured": false, 00:25:42.364 "data_offset": 0, 00:25:42.364 "data_size": 65536 00:25:42.364 }, 00:25:42.364 { 00:25:42.364 "name": "BaseBdev4", 00:25:42.364 "uuid": "3e24c86a-decd-4f8e-9a28-7a3221cba46a", 00:25:42.364 "is_configured": true, 00:25:42.364 "data_offset": 0, 00:25:42.364 "data_size": 65536 00:25:42.364 } 00:25:42.364 ] 00:25:42.364 }' 00:25:42.364 11:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:42.364 11:07:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:42.933 11:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:42.933 11:07:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:43.192 11:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:25:43.192 11:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:25:43.451 [2024-07-25 11:07:50.412428] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:43.451 11:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 
-- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:43.451 11:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:43.451 11:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:43.451 11:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:43.451 11:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:43.451 11:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:43.451 11:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:43.451 11:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:43.451 11:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:43.451 11:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:43.451 11:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:43.451 11:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:43.710 11:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:43.710 "name": "Existed_Raid", 00:25:43.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:43.710 "strip_size_kb": 0, 00:25:43.710 "state": "configuring", 00:25:43.710 "raid_level": "raid1", 00:25:43.710 "superblock": false, 00:25:43.710 "num_base_bdevs": 4, 00:25:43.710 "num_base_bdevs_discovered": 3, 00:25:43.710 "num_base_bdevs_operational": 4, 00:25:43.710 "base_bdevs_list": [ 00:25:43.710 { 00:25:43.710 "name": "BaseBdev1", 00:25:43.710 "uuid": 
"90bbd846-dcde-4288-8220-bc6cd4283ccd", 00:25:43.710 "is_configured": true, 00:25:43.710 "data_offset": 0, 00:25:43.710 "data_size": 65536 00:25:43.710 }, 00:25:43.710 { 00:25:43.710 "name": null, 00:25:43.711 "uuid": "c1fff99f-8083-412a-8802-d8a578c0c657", 00:25:43.711 "is_configured": false, 00:25:43.711 "data_offset": 0, 00:25:43.711 "data_size": 65536 00:25:43.711 }, 00:25:43.711 { 00:25:43.711 "name": "BaseBdev3", 00:25:43.711 "uuid": "303ec2f2-2879-450c-80e8-b0888fa0de0d", 00:25:43.711 "is_configured": true, 00:25:43.711 "data_offset": 0, 00:25:43.711 "data_size": 65536 00:25:43.711 }, 00:25:43.711 { 00:25:43.711 "name": "BaseBdev4", 00:25:43.711 "uuid": "3e24c86a-decd-4f8e-9a28-7a3221cba46a", 00:25:43.711 "is_configured": true, 00:25:43.711 "data_offset": 0, 00:25:43.711 "data_size": 65536 00:25:43.711 } 00:25:43.711 ] 00:25:43.711 }' 00:25:43.711 11:07:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:43.711 11:07:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.278 11:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:44.278 11:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:44.537 11:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:25:44.537 11:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:44.797 [2024-07-25 11:07:51.679901] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:44.797 11:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:44.797 11:07:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:44.797 11:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:44.797 11:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:44.797 11:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:44.797 11:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:44.797 11:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:44.797 11:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:44.797 11:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:44.797 11:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:44.797 11:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:44.797 11:07:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:45.057 11:07:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:45.057 "name": "Existed_Raid", 00:25:45.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:45.057 "strip_size_kb": 0, 00:25:45.057 "state": "configuring", 00:25:45.057 "raid_level": "raid1", 00:25:45.057 "superblock": false, 00:25:45.057 "num_base_bdevs": 4, 00:25:45.057 "num_base_bdevs_discovered": 2, 00:25:45.057 "num_base_bdevs_operational": 4, 00:25:45.057 "base_bdevs_list": [ 00:25:45.057 { 00:25:45.057 "name": null, 00:25:45.057 "uuid": "90bbd846-dcde-4288-8220-bc6cd4283ccd", 00:25:45.057 "is_configured": false, 00:25:45.057 "data_offset": 0, 
00:25:45.057 "data_size": 65536 00:25:45.057 }, 00:25:45.057 { 00:25:45.057 "name": null, 00:25:45.057 "uuid": "c1fff99f-8083-412a-8802-d8a578c0c657", 00:25:45.057 "is_configured": false, 00:25:45.057 "data_offset": 0, 00:25:45.057 "data_size": 65536 00:25:45.057 }, 00:25:45.057 { 00:25:45.057 "name": "BaseBdev3", 00:25:45.057 "uuid": "303ec2f2-2879-450c-80e8-b0888fa0de0d", 00:25:45.057 "is_configured": true, 00:25:45.057 "data_offset": 0, 00:25:45.057 "data_size": 65536 00:25:45.057 }, 00:25:45.057 { 00:25:45.057 "name": "BaseBdev4", 00:25:45.057 "uuid": "3e24c86a-decd-4f8e-9a28-7a3221cba46a", 00:25:45.057 "is_configured": true, 00:25:45.057 "data_offset": 0, 00:25:45.057 "data_size": 65536 00:25:45.057 } 00:25:45.057 ] 00:25:45.057 }' 00:25:45.057 11:07:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:45.057 11:07:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.625 11:07:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:45.625 11:07:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:45.884 11:07:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:25:45.884 11:07:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:25:46.144 [2024-07-25 11:07:53.080055] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:46.144 11:07:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:46.144 11:07:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:25:46.144 11:07:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:46.144 11:07:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:46.144 11:07:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:46.144 11:07:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:46.144 11:07:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:46.144 11:07:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:46.144 11:07:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:46.144 11:07:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:46.144 11:07:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:46.144 11:07:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:46.414 11:07:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:46.414 "name": "Existed_Raid", 00:25:46.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:46.414 "strip_size_kb": 0, 00:25:46.414 "state": "configuring", 00:25:46.414 "raid_level": "raid1", 00:25:46.414 "superblock": false, 00:25:46.414 "num_base_bdevs": 4, 00:25:46.415 "num_base_bdevs_discovered": 3, 00:25:46.415 "num_base_bdevs_operational": 4, 00:25:46.415 "base_bdevs_list": [ 00:25:46.415 { 00:25:46.415 "name": null, 00:25:46.415 "uuid": "90bbd846-dcde-4288-8220-bc6cd4283ccd", 00:25:46.415 "is_configured": false, 00:25:46.415 "data_offset": 0, 00:25:46.415 "data_size": 65536 00:25:46.415 }, 00:25:46.415 { 
00:25:46.415 "name": "BaseBdev2", 00:25:46.415 "uuid": "c1fff99f-8083-412a-8802-d8a578c0c657", 00:25:46.415 "is_configured": true, 00:25:46.415 "data_offset": 0, 00:25:46.415 "data_size": 65536 00:25:46.415 }, 00:25:46.415 { 00:25:46.415 "name": "BaseBdev3", 00:25:46.415 "uuid": "303ec2f2-2879-450c-80e8-b0888fa0de0d", 00:25:46.415 "is_configured": true, 00:25:46.415 "data_offset": 0, 00:25:46.415 "data_size": 65536 00:25:46.415 }, 00:25:46.415 { 00:25:46.415 "name": "BaseBdev4", 00:25:46.415 "uuid": "3e24c86a-decd-4f8e-9a28-7a3221cba46a", 00:25:46.415 "is_configured": true, 00:25:46.415 "data_offset": 0, 00:25:46.415 "data_size": 65536 00:25:46.415 } 00:25:46.415 ] 00:25:46.415 }' 00:25:46.415 11:07:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:46.415 11:07:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:46.983 11:07:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:46.983 11:07:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:47.242 11:07:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:25:47.242 11:07:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:47.242 11:07:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:25:47.242 11:07:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 90bbd846-dcde-4288-8220-bc6cd4283ccd 00:25:47.503 [2024-07-25 11:07:54.606115] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:25:47.503 [2024-07-25 11:07:54.606172] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:25:47.503 [2024-07-25 11:07:54.606192] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:25:47.503 [2024-07-25 11:07:54.606517] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010b20 00:25:47.503 [2024-07-25 11:07:54.606747] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:25:47.503 [2024-07-25 11:07:54.606761] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:25:47.503 [2024-07-25 11:07:54.607094] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:47.503 NewBaseBdev 00:25:47.828 11:07:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:25:47.828 11:07:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:25:47.828 11:07:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:47.828 11:07:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:25:47.828 11:07:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:47.828 11:07:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:47.828 11:07:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:47.828 11:07:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 
00:25:48.088 [ 00:25:48.088 { 00:25:48.088 "name": "NewBaseBdev", 00:25:48.088 "aliases": [ 00:25:48.088 "90bbd846-dcde-4288-8220-bc6cd4283ccd" 00:25:48.088 ], 00:25:48.088 "product_name": "Malloc disk", 00:25:48.088 "block_size": 512, 00:25:48.088 "num_blocks": 65536, 00:25:48.088 "uuid": "90bbd846-dcde-4288-8220-bc6cd4283ccd", 00:25:48.088 "assigned_rate_limits": { 00:25:48.088 "rw_ios_per_sec": 0, 00:25:48.088 "rw_mbytes_per_sec": 0, 00:25:48.088 "r_mbytes_per_sec": 0, 00:25:48.088 "w_mbytes_per_sec": 0 00:25:48.088 }, 00:25:48.088 "claimed": true, 00:25:48.088 "claim_type": "exclusive_write", 00:25:48.088 "zoned": false, 00:25:48.088 "supported_io_types": { 00:25:48.088 "read": true, 00:25:48.088 "write": true, 00:25:48.088 "unmap": true, 00:25:48.088 "flush": true, 00:25:48.088 "reset": true, 00:25:48.088 "nvme_admin": false, 00:25:48.088 "nvme_io": false, 00:25:48.088 "nvme_io_md": false, 00:25:48.088 "write_zeroes": true, 00:25:48.088 "zcopy": true, 00:25:48.088 "get_zone_info": false, 00:25:48.088 "zone_management": false, 00:25:48.088 "zone_append": false, 00:25:48.088 "compare": false, 00:25:48.088 "compare_and_write": false, 00:25:48.088 "abort": true, 00:25:48.088 "seek_hole": false, 00:25:48.088 "seek_data": false, 00:25:48.088 "copy": true, 00:25:48.088 "nvme_iov_md": false 00:25:48.088 }, 00:25:48.088 "memory_domains": [ 00:25:48.088 { 00:25:48.088 "dma_device_id": "system", 00:25:48.088 "dma_device_type": 1 00:25:48.088 }, 00:25:48.088 { 00:25:48.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:48.088 "dma_device_type": 2 00:25:48.088 } 00:25:48.088 ], 00:25:48.088 "driver_specific": {} 00:25:48.088 } 00:25:48.088 ] 00:25:48.088 11:07:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:25:48.088 11:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:25:48.088 11:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:25:48.088 11:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:48.088 11:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:48.088 11:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:48.088 11:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:48.088 11:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:48.088 11:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:48.088 11:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:48.088 11:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:48.088 11:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:48.088 11:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:48.347 11:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:48.347 "name": "Existed_Raid", 00:25:48.347 "uuid": "e1472536-c4ce-4977-9d5e-caaa47e85b29", 00:25:48.347 "strip_size_kb": 0, 00:25:48.347 "state": "online", 00:25:48.347 "raid_level": "raid1", 00:25:48.347 "superblock": false, 00:25:48.347 "num_base_bdevs": 4, 00:25:48.347 "num_base_bdevs_discovered": 4, 00:25:48.347 "num_base_bdevs_operational": 4, 00:25:48.347 "base_bdevs_list": [ 00:25:48.347 { 00:25:48.347 "name": "NewBaseBdev", 00:25:48.347 "uuid": "90bbd846-dcde-4288-8220-bc6cd4283ccd", 00:25:48.347 "is_configured": true, 00:25:48.347 "data_offset": 0, 00:25:48.347 "data_size": 65536 00:25:48.347 }, 00:25:48.347 { 00:25:48.347 
"name": "BaseBdev2", 00:25:48.347 "uuid": "c1fff99f-8083-412a-8802-d8a578c0c657", 00:25:48.347 "is_configured": true, 00:25:48.347 "data_offset": 0, 00:25:48.347 "data_size": 65536 00:25:48.347 }, 00:25:48.347 { 00:25:48.347 "name": "BaseBdev3", 00:25:48.347 "uuid": "303ec2f2-2879-450c-80e8-b0888fa0de0d", 00:25:48.347 "is_configured": true, 00:25:48.347 "data_offset": 0, 00:25:48.347 "data_size": 65536 00:25:48.347 }, 00:25:48.347 { 00:25:48.347 "name": "BaseBdev4", 00:25:48.347 "uuid": "3e24c86a-decd-4f8e-9a28-7a3221cba46a", 00:25:48.347 "is_configured": true, 00:25:48.347 "data_offset": 0, 00:25:48.347 "data_size": 65536 00:25:48.347 } 00:25:48.347 ] 00:25:48.347 }' 00:25:48.347 11:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:48.347 11:07:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:48.914 11:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:25:48.914 11:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:48.914 11:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:48.914 11:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:48.915 11:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:48.915 11:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:25:48.915 11:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:48.915 11:07:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:49.174 [2024-07-25 11:07:56.102635] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:49.174 11:07:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:49.174 "name": "Existed_Raid", 00:25:49.174 "aliases": [ 00:25:49.174 "e1472536-c4ce-4977-9d5e-caaa47e85b29" 00:25:49.174 ], 00:25:49.174 "product_name": "Raid Volume", 00:25:49.174 "block_size": 512, 00:25:49.174 "num_blocks": 65536, 00:25:49.174 "uuid": "e1472536-c4ce-4977-9d5e-caaa47e85b29", 00:25:49.174 "assigned_rate_limits": { 00:25:49.174 "rw_ios_per_sec": 0, 00:25:49.174 "rw_mbytes_per_sec": 0, 00:25:49.174 "r_mbytes_per_sec": 0, 00:25:49.174 "w_mbytes_per_sec": 0 00:25:49.174 }, 00:25:49.174 "claimed": false, 00:25:49.174 "zoned": false, 00:25:49.174 "supported_io_types": { 00:25:49.174 "read": true, 00:25:49.174 "write": true, 00:25:49.174 "unmap": false, 00:25:49.174 "flush": false, 00:25:49.174 "reset": true, 00:25:49.174 "nvme_admin": false, 00:25:49.174 "nvme_io": false, 00:25:49.174 "nvme_io_md": false, 00:25:49.174 "write_zeroes": true, 00:25:49.174 "zcopy": false, 00:25:49.174 "get_zone_info": false, 00:25:49.174 "zone_management": false, 00:25:49.174 "zone_append": false, 00:25:49.174 "compare": false, 00:25:49.174 "compare_and_write": false, 00:25:49.174 "abort": false, 00:25:49.174 "seek_hole": false, 00:25:49.174 "seek_data": false, 00:25:49.174 "copy": false, 00:25:49.174 "nvme_iov_md": false 00:25:49.174 }, 00:25:49.174 "memory_domains": [ 00:25:49.174 { 00:25:49.174 "dma_device_id": "system", 00:25:49.174 "dma_device_type": 1 00:25:49.174 }, 00:25:49.174 { 00:25:49.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:49.174 "dma_device_type": 2 00:25:49.174 }, 00:25:49.174 { 00:25:49.174 "dma_device_id": "system", 00:25:49.174 "dma_device_type": 1 00:25:49.174 }, 00:25:49.174 { 00:25:49.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:49.174 "dma_device_type": 2 00:25:49.174 }, 00:25:49.174 { 00:25:49.174 "dma_device_id": "system", 00:25:49.174 "dma_device_type": 1 00:25:49.174 }, 00:25:49.174 { 00:25:49.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:25:49.174 "dma_device_type": 2 00:25:49.174 }, 00:25:49.174 { 00:25:49.174 "dma_device_id": "system", 00:25:49.174 "dma_device_type": 1 00:25:49.174 }, 00:25:49.174 { 00:25:49.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:49.174 "dma_device_type": 2 00:25:49.174 } 00:25:49.174 ], 00:25:49.174 "driver_specific": { 00:25:49.174 "raid": { 00:25:49.174 "uuid": "e1472536-c4ce-4977-9d5e-caaa47e85b29", 00:25:49.174 "strip_size_kb": 0, 00:25:49.174 "state": "online", 00:25:49.174 "raid_level": "raid1", 00:25:49.174 "superblock": false, 00:25:49.174 "num_base_bdevs": 4, 00:25:49.174 "num_base_bdevs_discovered": 4, 00:25:49.174 "num_base_bdevs_operational": 4, 00:25:49.174 "base_bdevs_list": [ 00:25:49.174 { 00:25:49.174 "name": "NewBaseBdev", 00:25:49.174 "uuid": "90bbd846-dcde-4288-8220-bc6cd4283ccd", 00:25:49.174 "is_configured": true, 00:25:49.174 "data_offset": 0, 00:25:49.174 "data_size": 65536 00:25:49.174 }, 00:25:49.174 { 00:25:49.174 "name": "BaseBdev2", 00:25:49.174 "uuid": "c1fff99f-8083-412a-8802-d8a578c0c657", 00:25:49.174 "is_configured": true, 00:25:49.174 "data_offset": 0, 00:25:49.174 "data_size": 65536 00:25:49.174 }, 00:25:49.174 { 00:25:49.174 "name": "BaseBdev3", 00:25:49.174 "uuid": "303ec2f2-2879-450c-80e8-b0888fa0de0d", 00:25:49.174 "is_configured": true, 00:25:49.174 "data_offset": 0, 00:25:49.174 "data_size": 65536 00:25:49.174 }, 00:25:49.174 { 00:25:49.174 "name": "BaseBdev4", 00:25:49.174 "uuid": "3e24c86a-decd-4f8e-9a28-7a3221cba46a", 00:25:49.174 "is_configured": true, 00:25:49.174 "data_offset": 0, 00:25:49.174 "data_size": 65536 00:25:49.174 } 00:25:49.174 ] 00:25:49.174 } 00:25:49.174 } 00:25:49.174 }' 00:25:49.174 11:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:49.174 11:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:25:49.174 BaseBdev2 00:25:49.174 BaseBdev3 
00:25:49.174 BaseBdev4' 00:25:49.174 11:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:49.174 11:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:25:49.174 11:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:49.433 11:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:49.433 "name": "NewBaseBdev", 00:25:49.433 "aliases": [ 00:25:49.433 "90bbd846-dcde-4288-8220-bc6cd4283ccd" 00:25:49.433 ], 00:25:49.433 "product_name": "Malloc disk", 00:25:49.433 "block_size": 512, 00:25:49.433 "num_blocks": 65536, 00:25:49.433 "uuid": "90bbd846-dcde-4288-8220-bc6cd4283ccd", 00:25:49.433 "assigned_rate_limits": { 00:25:49.433 "rw_ios_per_sec": 0, 00:25:49.433 "rw_mbytes_per_sec": 0, 00:25:49.433 "r_mbytes_per_sec": 0, 00:25:49.433 "w_mbytes_per_sec": 0 00:25:49.433 }, 00:25:49.433 "claimed": true, 00:25:49.433 "claim_type": "exclusive_write", 00:25:49.433 "zoned": false, 00:25:49.433 "supported_io_types": { 00:25:49.433 "read": true, 00:25:49.433 "write": true, 00:25:49.433 "unmap": true, 00:25:49.433 "flush": true, 00:25:49.433 "reset": true, 00:25:49.433 "nvme_admin": false, 00:25:49.433 "nvme_io": false, 00:25:49.433 "nvme_io_md": false, 00:25:49.433 "write_zeroes": true, 00:25:49.433 "zcopy": true, 00:25:49.433 "get_zone_info": false, 00:25:49.433 "zone_management": false, 00:25:49.433 "zone_append": false, 00:25:49.433 "compare": false, 00:25:49.433 "compare_and_write": false, 00:25:49.433 "abort": true, 00:25:49.433 "seek_hole": false, 00:25:49.433 "seek_data": false, 00:25:49.433 "copy": true, 00:25:49.433 "nvme_iov_md": false 00:25:49.433 }, 00:25:49.433 "memory_domains": [ 00:25:49.433 { 00:25:49.434 "dma_device_id": "system", 00:25:49.434 "dma_device_type": 1 00:25:49.434 }, 00:25:49.434 { 
00:25:49.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:49.434 "dma_device_type": 2 00:25:49.434 } 00:25:49.434 ], 00:25:49.434 "driver_specific": {} 00:25:49.434 }' 00:25:49.434 11:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:49.434 11:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:49.434 11:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:49.434 11:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:49.434 11:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:49.692 11:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:49.692 11:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:49.692 11:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:49.692 11:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:49.692 11:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:49.692 11:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:49.692 11:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:49.692 11:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:49.692 11:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:49.692 11:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:49.951 11:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:49.951 "name": "BaseBdev2", 00:25:49.951 "aliases": [ 00:25:49.951 
"c1fff99f-8083-412a-8802-d8a578c0c657" 00:25:49.951 ], 00:25:49.951 "product_name": "Malloc disk", 00:25:49.951 "block_size": 512, 00:25:49.951 "num_blocks": 65536, 00:25:49.951 "uuid": "c1fff99f-8083-412a-8802-d8a578c0c657", 00:25:49.951 "assigned_rate_limits": { 00:25:49.951 "rw_ios_per_sec": 0, 00:25:49.951 "rw_mbytes_per_sec": 0, 00:25:49.951 "r_mbytes_per_sec": 0, 00:25:49.951 "w_mbytes_per_sec": 0 00:25:49.951 }, 00:25:49.951 "claimed": true, 00:25:49.951 "claim_type": "exclusive_write", 00:25:49.951 "zoned": false, 00:25:49.951 "supported_io_types": { 00:25:49.951 "read": true, 00:25:49.951 "write": true, 00:25:49.951 "unmap": true, 00:25:49.951 "flush": true, 00:25:49.951 "reset": true, 00:25:49.951 "nvme_admin": false, 00:25:49.951 "nvme_io": false, 00:25:49.951 "nvme_io_md": false, 00:25:49.951 "write_zeroes": true, 00:25:49.951 "zcopy": true, 00:25:49.951 "get_zone_info": false, 00:25:49.951 "zone_management": false, 00:25:49.951 "zone_append": false, 00:25:49.951 "compare": false, 00:25:49.951 "compare_and_write": false, 00:25:49.951 "abort": true, 00:25:49.951 "seek_hole": false, 00:25:49.951 "seek_data": false, 00:25:49.951 "copy": true, 00:25:49.951 "nvme_iov_md": false 00:25:49.951 }, 00:25:49.951 "memory_domains": [ 00:25:49.951 { 00:25:49.951 "dma_device_id": "system", 00:25:49.951 "dma_device_type": 1 00:25:49.951 }, 00:25:49.951 { 00:25:49.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:49.951 "dma_device_type": 2 00:25:49.951 } 00:25:49.951 ], 00:25:49.951 "driver_specific": {} 00:25:49.951 }' 00:25:49.951 11:07:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:49.951 11:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:49.951 11:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:49.951 11:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:50.209 11:07:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:50.209 11:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:50.209 11:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:50.209 11:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:50.209 11:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:50.209 11:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:50.209 11:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:50.209 11:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:50.209 11:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:50.209 11:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:50.209 11:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:50.468 11:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:50.468 "name": "BaseBdev3", 00:25:50.468 "aliases": [ 00:25:50.468 "303ec2f2-2879-450c-80e8-b0888fa0de0d" 00:25:50.468 ], 00:25:50.468 "product_name": "Malloc disk", 00:25:50.468 "block_size": 512, 00:25:50.468 "num_blocks": 65536, 00:25:50.468 "uuid": "303ec2f2-2879-450c-80e8-b0888fa0de0d", 00:25:50.468 "assigned_rate_limits": { 00:25:50.468 "rw_ios_per_sec": 0, 00:25:50.468 "rw_mbytes_per_sec": 0, 00:25:50.468 "r_mbytes_per_sec": 0, 00:25:50.468 "w_mbytes_per_sec": 0 00:25:50.468 }, 00:25:50.468 "claimed": true, 00:25:50.468 "claim_type": "exclusive_write", 00:25:50.468 "zoned": false, 00:25:50.468 "supported_io_types": { 00:25:50.468 "read": true, 
00:25:50.468 "write": true, 00:25:50.468 "unmap": true, 00:25:50.468 "flush": true, 00:25:50.468 "reset": true, 00:25:50.468 "nvme_admin": false, 00:25:50.468 "nvme_io": false, 00:25:50.468 "nvme_io_md": false, 00:25:50.468 "write_zeroes": true, 00:25:50.468 "zcopy": true, 00:25:50.468 "get_zone_info": false, 00:25:50.468 "zone_management": false, 00:25:50.468 "zone_append": false, 00:25:50.468 "compare": false, 00:25:50.468 "compare_and_write": false, 00:25:50.468 "abort": true, 00:25:50.468 "seek_hole": false, 00:25:50.468 "seek_data": false, 00:25:50.468 "copy": true, 00:25:50.468 "nvme_iov_md": false 00:25:50.468 }, 00:25:50.468 "memory_domains": [ 00:25:50.468 { 00:25:50.468 "dma_device_id": "system", 00:25:50.468 "dma_device_type": 1 00:25:50.468 }, 00:25:50.468 { 00:25:50.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:50.468 "dma_device_type": 2 00:25:50.468 } 00:25:50.468 ], 00:25:50.468 "driver_specific": {} 00:25:50.468 }' 00:25:50.468 11:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:50.468 11:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:50.726 11:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:50.726 11:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:50.727 11:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:50.727 11:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:50.727 11:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:50.727 11:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:50.727 11:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:50.727 11:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:50.727 
11:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:50.984 11:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:50.984 11:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:50.985 11:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:25:50.985 11:07:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:51.243 11:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:51.243 "name": "BaseBdev4", 00:25:51.243 "aliases": [ 00:25:51.243 "3e24c86a-decd-4f8e-9a28-7a3221cba46a" 00:25:51.243 ], 00:25:51.243 "product_name": "Malloc disk", 00:25:51.243 "block_size": 512, 00:25:51.243 "num_blocks": 65536, 00:25:51.243 "uuid": "3e24c86a-decd-4f8e-9a28-7a3221cba46a", 00:25:51.243 "assigned_rate_limits": { 00:25:51.243 "rw_ios_per_sec": 0, 00:25:51.243 "rw_mbytes_per_sec": 0, 00:25:51.243 "r_mbytes_per_sec": 0, 00:25:51.243 "w_mbytes_per_sec": 0 00:25:51.243 }, 00:25:51.243 "claimed": true, 00:25:51.243 "claim_type": "exclusive_write", 00:25:51.243 "zoned": false, 00:25:51.243 "supported_io_types": { 00:25:51.243 "read": true, 00:25:51.243 "write": true, 00:25:51.243 "unmap": true, 00:25:51.243 "flush": true, 00:25:51.243 "reset": true, 00:25:51.243 "nvme_admin": false, 00:25:51.243 "nvme_io": false, 00:25:51.243 "nvme_io_md": false, 00:25:51.243 "write_zeroes": true, 00:25:51.243 "zcopy": true, 00:25:51.243 "get_zone_info": false, 00:25:51.243 "zone_management": false, 00:25:51.243 "zone_append": false, 00:25:51.243 "compare": false, 00:25:51.243 "compare_and_write": false, 00:25:51.243 "abort": true, 00:25:51.243 "seek_hole": false, 00:25:51.243 "seek_data": false, 00:25:51.243 "copy": true, 00:25:51.243 "nvme_iov_md": false 
00:25:51.243 }, 00:25:51.243 "memory_domains": [ 00:25:51.243 { 00:25:51.243 "dma_device_id": "system", 00:25:51.243 "dma_device_type": 1 00:25:51.243 }, 00:25:51.243 { 00:25:51.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:51.243 "dma_device_type": 2 00:25:51.243 } 00:25:51.243 ], 00:25:51.243 "driver_specific": {} 00:25:51.243 }' 00:25:51.243 11:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:51.243 11:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:51.243 11:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:51.243 11:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:51.243 11:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:51.243 11:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:51.243 11:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:51.243 11:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:51.502 11:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:51.502 11:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:51.502 11:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:51.502 11:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:51.502 11:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:51.761 [2024-07-25 11:07:58.661162] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:51.761 [2024-07-25 11:07:58.661193] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:25:51.761 [2024-07-25 11:07:58.661279] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:51.761 [2024-07-25 11:07:58.661616] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:51.761 [2024-07-25 11:07:58.661636] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:25:51.761 11:07:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 3673952 00:25:51.761 11:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 3673952 ']' 00:25:51.761 11:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 3673952 00:25:51.761 11:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:25:51.761 11:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:51.761 11:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3673952 00:25:51.761 11:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:51.761 11:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:51.761 11:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3673952' 00:25:51.761 killing process with pid 3673952 00:25:51.761 11:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 3673952 00:25:51.761 [2024-07-25 11:07:58.740050] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:51.761 11:07:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 3673952 00:25:52.329 [2024-07-25 11:07:59.205974] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:54.234 
11:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:25:54.234 00:25:54.234 real 0m33.303s 00:25:54.234 user 0m58.169s 00:25:54.234 sys 0m5.841s 00:25:54.234 11:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:54.235 11:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.235 ************************************ 00:25:54.235 END TEST raid_state_function_test 00:25:54.235 ************************************ 00:25:54.235 11:08:00 bdev_raid -- bdev/bdev_raid.sh@948 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:25:54.235 11:08:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:25:54.235 11:08:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:54.235 11:08:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:54.235 ************************************ 00:25:54.235 START TEST raid_state_function_test_sb 00:25:54.235 ************************************ 00:25:54.235 11:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:25:54.235 11:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:25:54.235 11:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:25:54.235 11:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:25:54.235 11:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:25:54.235 11:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:25:54.235 11:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:54.235 11:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:25:54.235 
11:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:54.235 11:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:54.235 11:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:25:54.235 11:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:54.235 11:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:54.235 11:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:25:54.235 11:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:54.235 11:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:54.235 11:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:25:54.235 11:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:54.235 11:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:54.235 11:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:54.235 11:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:25:54.235 11:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:25:54.235 11:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:25:54.235 11:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:25:54.235 11:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:25:54.235 11:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' 
raid1 ']' 00:25:54.235 11:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:25:54.235 11:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:25:54.235 11:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:25:54.235 11:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=3680138 00:25:54.235 11:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3680138' 00:25:54.235 Process raid pid: 3680138 00:25:54.235 11:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:54.235 11:08:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 3680138 /var/tmp/spdk-raid.sock 00:25:54.235 11:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 3680138 ']' 00:25:54.235 11:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:54.235 11:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:54.235 11:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:54.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:54.235 11:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:54.235 11:08:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:54.235 [2024-07-25 11:08:01.137423] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:25:54.235 [2024-07-25 11:08:01.137542] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:54.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.235 EAL: Requested device 0000:3d:01.0 cannot be used 00:25:54.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.235 EAL: Requested device 0000:3d:01.1 cannot be used 00:25:54.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.235 EAL: Requested device 0000:3d:01.2 cannot be used 00:25:54.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.235 EAL: Requested device 0000:3d:01.3 cannot be used 00:25:54.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.235 EAL: Requested device 0000:3d:01.4 cannot be used 00:25:54.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.235 EAL: Requested device 0000:3d:01.5 cannot be used 00:25:54.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.235 EAL: Requested device 0000:3d:01.6 cannot be used 00:25:54.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.235 EAL: Requested device 0000:3d:01.7 cannot be used 00:25:54.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.235 EAL: Requested device 0000:3d:02.0 cannot be used 00:25:54.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.235 EAL: Requested device 0000:3d:02.1 cannot be used 00:25:54.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.235 EAL: Requested device 0000:3d:02.2 cannot be used 00:25:54.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.235 EAL: Requested device 0000:3d:02.3 cannot be used 00:25:54.235 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.235 EAL: Requested device 0000:3d:02.4 cannot be used 00:25:54.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.235 EAL: Requested device 0000:3d:02.5 cannot be used 00:25:54.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.235 EAL: Requested device 0000:3d:02.6 cannot be used 00:25:54.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.235 EAL: Requested device 0000:3d:02.7 cannot be used 00:25:54.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.235 EAL: Requested device 0000:3f:01.0 cannot be used 00:25:54.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.235 EAL: Requested device 0000:3f:01.1 cannot be used 00:25:54.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.235 EAL: Requested device 0000:3f:01.2 cannot be used 00:25:54.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.235 EAL: Requested device 0000:3f:01.3 cannot be used 00:25:54.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.235 EAL: Requested device 0000:3f:01.4 cannot be used 00:25:54.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.235 EAL: Requested device 0000:3f:01.5 cannot be used 00:25:54.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.235 EAL: Requested device 0000:3f:01.6 cannot be used 00:25:54.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.235 EAL: Requested device 0000:3f:01.7 cannot be used 00:25:54.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.235 EAL: Requested device 0000:3f:02.0 cannot be used 00:25:54.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.235 EAL: Requested device 0000:3f:02.1 cannot be used 00:25:54.235 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.235 EAL: Requested device 0000:3f:02.2 cannot be used 00:25:54.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.235 EAL: Requested device 0000:3f:02.3 cannot be used 00:25:54.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.235 EAL: Requested device 0000:3f:02.4 cannot be used 00:25:54.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.235 EAL: Requested device 0000:3f:02.5 cannot be used 00:25:54.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.235 EAL: Requested device 0000:3f:02.6 cannot be used 00:25:54.235 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:25:54.235 EAL: Requested device 0000:3f:02.7 cannot be used 00:25:54.494 [2024-07-25 11:08:01.367413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.752 [2024-07-25 11:08:01.645565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:55.011 [2024-07-25 11:08:01.956661] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:55.011 [2024-07-25 11:08:01.956699] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:55.011 11:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:55.011 11:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:25:55.011 11:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:55.270 [2024-07-25 11:08:02.336724] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:55.270 [2024-07-25 11:08:02.336778] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:25:55.270 [2024-07-25 11:08:02.336793] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:55.270 [2024-07-25 11:08:02.336810] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:55.271 [2024-07-25 11:08:02.336821] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:55.271 [2024-07-25 11:08:02.336837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:55.271 [2024-07-25 11:08:02.336848] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:55.271 [2024-07-25 11:08:02.336863] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:55.271 11:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:55.271 11:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:55.271 11:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:55.271 11:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:55.271 11:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:55.271 11:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:55.271 11:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:55.271 11:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:55.271 11:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:55.271 11:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
00:25:55.271 11:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:55.271 11:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:55.530 11:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:55.530 "name": "Existed_Raid", 00:25:55.530 "uuid": "2e4d66a3-5267-4c20-9eeb-65ebf71441c1", 00:25:55.530 "strip_size_kb": 0, 00:25:55.530 "state": "configuring", 00:25:55.530 "raid_level": "raid1", 00:25:55.530 "superblock": true, 00:25:55.530 "num_base_bdevs": 4, 00:25:55.530 "num_base_bdevs_discovered": 0, 00:25:55.530 "num_base_bdevs_operational": 4, 00:25:55.530 "base_bdevs_list": [ 00:25:55.530 { 00:25:55.530 "name": "BaseBdev1", 00:25:55.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.530 "is_configured": false, 00:25:55.530 "data_offset": 0, 00:25:55.530 "data_size": 0 00:25:55.530 }, 00:25:55.530 { 00:25:55.530 "name": "BaseBdev2", 00:25:55.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.530 "is_configured": false, 00:25:55.530 "data_offset": 0, 00:25:55.530 "data_size": 0 00:25:55.530 }, 00:25:55.530 { 00:25:55.530 "name": "BaseBdev3", 00:25:55.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.530 "is_configured": false, 00:25:55.530 "data_offset": 0, 00:25:55.530 "data_size": 0 00:25:55.530 }, 00:25:55.530 { 00:25:55.530 "name": "BaseBdev4", 00:25:55.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.530 "is_configured": false, 00:25:55.530 "data_offset": 0, 00:25:55.530 "data_size": 0 00:25:55.530 } 00:25:55.530 ] 00:25:55.530 }' 00:25:55.530 11:08:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:55.530 11:08:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:56.098 11:08:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:56.357 [2024-07-25 11:08:03.359468] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:56.357 [2024-07-25 11:08:03.359512] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:25:56.357 11:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:56.616 [2024-07-25 11:08:03.588130] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:56.616 [2024-07-25 11:08:03.588183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:56.616 [2024-07-25 11:08:03.588197] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:56.616 [2024-07-25 11:08:03.588221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:56.616 [2024-07-25 11:08:03.588232] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:56.616 [2024-07-25 11:08:03.588247] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:56.616 [2024-07-25 11:08:03.588258] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:56.616 [2024-07-25 11:08:03.588274] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:56.616 11:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 
00:25:56.874 [2024-07-25 11:08:03.863466] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:56.874 BaseBdev1 00:25:56.874 11:08:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:25:56.874 11:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:25:56.874 11:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:25:56.874 11:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:25:56.874 11:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:56.874 11:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:56.874 11:08:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:57.195 11:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:57.454 [ 00:25:57.454 { 00:25:57.454 "name": "BaseBdev1", 00:25:57.454 "aliases": [ 00:25:57.454 "103da975-54ae-49a3-918e-fddb44000f8d" 00:25:57.454 ], 00:25:57.454 "product_name": "Malloc disk", 00:25:57.454 "block_size": 512, 00:25:57.454 "num_blocks": 65536, 00:25:57.454 "uuid": "103da975-54ae-49a3-918e-fddb44000f8d", 00:25:57.454 "assigned_rate_limits": { 00:25:57.454 "rw_ios_per_sec": 0, 00:25:57.454 "rw_mbytes_per_sec": 0, 00:25:57.454 "r_mbytes_per_sec": 0, 00:25:57.454 "w_mbytes_per_sec": 0 00:25:57.454 }, 00:25:57.454 "claimed": true, 00:25:57.454 "claim_type": "exclusive_write", 00:25:57.454 "zoned": false, 00:25:57.454 "supported_io_types": { 00:25:57.454 "read": true, 00:25:57.454 "write": true, 
00:25:57.454 "unmap": true, 00:25:57.454 "flush": true, 00:25:57.454 "reset": true, 00:25:57.454 "nvme_admin": false, 00:25:57.454 "nvme_io": false, 00:25:57.454 "nvme_io_md": false, 00:25:57.454 "write_zeroes": true, 00:25:57.454 "zcopy": true, 00:25:57.454 "get_zone_info": false, 00:25:57.454 "zone_management": false, 00:25:57.454 "zone_append": false, 00:25:57.454 "compare": false, 00:25:57.454 "compare_and_write": false, 00:25:57.454 "abort": true, 00:25:57.454 "seek_hole": false, 00:25:57.454 "seek_data": false, 00:25:57.454 "copy": true, 00:25:57.454 "nvme_iov_md": false 00:25:57.454 }, 00:25:57.454 "memory_domains": [ 00:25:57.454 { 00:25:57.454 "dma_device_id": "system", 00:25:57.454 "dma_device_type": 1 00:25:57.454 }, 00:25:57.454 { 00:25:57.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:57.454 "dma_device_type": 2 00:25:57.454 } 00:25:57.454 ], 00:25:57.454 "driver_specific": {} 00:25:57.454 } 00:25:57.454 ] 00:25:57.454 11:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:25:57.454 11:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:57.454 11:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:57.454 11:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:57.454 11:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:57.454 11:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:57.454 11:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:57.454 11:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:57.454 11:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:25:57.454 11:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:57.454 11:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:57.454 11:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:57.454 11:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:57.454 11:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:57.454 "name": "Existed_Raid", 00:25:57.454 "uuid": "b958118e-873e-456d-b8ed-4730893bc282", 00:25:57.454 "strip_size_kb": 0, 00:25:57.455 "state": "configuring", 00:25:57.455 "raid_level": "raid1", 00:25:57.455 "superblock": true, 00:25:57.455 "num_base_bdevs": 4, 00:25:57.455 "num_base_bdevs_discovered": 1, 00:25:57.455 "num_base_bdevs_operational": 4, 00:25:57.455 "base_bdevs_list": [ 00:25:57.455 { 00:25:57.455 "name": "BaseBdev1", 00:25:57.455 "uuid": "103da975-54ae-49a3-918e-fddb44000f8d", 00:25:57.455 "is_configured": true, 00:25:57.455 "data_offset": 2048, 00:25:57.455 "data_size": 63488 00:25:57.455 }, 00:25:57.455 { 00:25:57.455 "name": "BaseBdev2", 00:25:57.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.455 "is_configured": false, 00:25:57.455 "data_offset": 0, 00:25:57.455 "data_size": 0 00:25:57.455 }, 00:25:57.455 { 00:25:57.455 "name": "BaseBdev3", 00:25:57.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.455 "is_configured": false, 00:25:57.455 "data_offset": 0, 00:25:57.455 "data_size": 0 00:25:57.455 }, 00:25:57.455 { 00:25:57.455 "name": "BaseBdev4", 00:25:57.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.455 "is_configured": false, 00:25:57.455 "data_offset": 0, 00:25:57.455 "data_size": 0 00:25:57.455 } 00:25:57.455 ] 
00:25:57.455 }' 00:25:57.455 11:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:57.455 11:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:58.392 11:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:58.392 [2024-07-25 11:08:05.355518] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:58.392 [2024-07-25 11:08:05.355572] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:25:58.392 11:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:58.651 [2024-07-25 11:08:05.584243] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:58.651 [2024-07-25 11:08:05.586576] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:58.651 [2024-07-25 11:08:05.586619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:58.651 [2024-07-25 11:08:05.586633] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:58.651 [2024-07-25 11:08:05.586654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:58.651 [2024-07-25 11:08:05.586666] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:58.651 [2024-07-25 11:08:05.586684] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:58.651 11:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 
00:25:58.651 11:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:58.651 11:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:58.651 11:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:58.651 11:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:58.651 11:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:58.651 11:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:58.651 11:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:58.651 11:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:58.651 11:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:58.651 11:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:58.651 11:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:58.651 11:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:58.651 11:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:58.910 11:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:58.910 "name": "Existed_Raid", 00:25:58.910 "uuid": "a642d4ee-3151-4dd3-95f4-4633e8704794", 00:25:58.910 "strip_size_kb": 0, 00:25:58.910 "state": "configuring", 00:25:58.910 "raid_level": "raid1", 00:25:58.910 "superblock": true, 00:25:58.910 
"num_base_bdevs": 4, 00:25:58.910 "num_base_bdevs_discovered": 1, 00:25:58.910 "num_base_bdevs_operational": 4, 00:25:58.910 "base_bdevs_list": [ 00:25:58.910 { 00:25:58.910 "name": "BaseBdev1", 00:25:58.910 "uuid": "103da975-54ae-49a3-918e-fddb44000f8d", 00:25:58.910 "is_configured": true, 00:25:58.910 "data_offset": 2048, 00:25:58.910 "data_size": 63488 00:25:58.910 }, 00:25:58.910 { 00:25:58.910 "name": "BaseBdev2", 00:25:58.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:58.910 "is_configured": false, 00:25:58.910 "data_offset": 0, 00:25:58.910 "data_size": 0 00:25:58.910 }, 00:25:58.910 { 00:25:58.910 "name": "BaseBdev3", 00:25:58.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:58.910 "is_configured": false, 00:25:58.910 "data_offset": 0, 00:25:58.910 "data_size": 0 00:25:58.910 }, 00:25:58.910 { 00:25:58.910 "name": "BaseBdev4", 00:25:58.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:58.910 "is_configured": false, 00:25:58.910 "data_offset": 0, 00:25:58.910 "data_size": 0 00:25:58.910 } 00:25:58.910 ] 00:25:58.910 }' 00:25:58.910 11:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:58.910 11:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:59.478 11:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:59.737 [2024-07-25 11:08:06.680133] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:59.737 BaseBdev2 00:25:59.737 11:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:25:59.737 11:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:25:59.737 11:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:25:59.737 11:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:25:59.737 11:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:25:59.737 11:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:25:59.737 11:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:59.997 11:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:00.256 [ 00:26:00.256 { 00:26:00.256 "name": "BaseBdev2", 00:26:00.256 "aliases": [ 00:26:00.256 "ac48db84-e626-4a2b-b65b-69dec60d33c8" 00:26:00.256 ], 00:26:00.256 "product_name": "Malloc disk", 00:26:00.256 "block_size": 512, 00:26:00.256 "num_blocks": 65536, 00:26:00.256 "uuid": "ac48db84-e626-4a2b-b65b-69dec60d33c8", 00:26:00.256 "assigned_rate_limits": { 00:26:00.256 "rw_ios_per_sec": 0, 00:26:00.256 "rw_mbytes_per_sec": 0, 00:26:00.256 "r_mbytes_per_sec": 0, 00:26:00.256 "w_mbytes_per_sec": 0 00:26:00.256 }, 00:26:00.256 "claimed": true, 00:26:00.256 "claim_type": "exclusive_write", 00:26:00.256 "zoned": false, 00:26:00.256 "supported_io_types": { 00:26:00.256 "read": true, 00:26:00.256 "write": true, 00:26:00.256 "unmap": true, 00:26:00.256 "flush": true, 00:26:00.256 "reset": true, 00:26:00.256 "nvme_admin": false, 00:26:00.256 "nvme_io": false, 00:26:00.256 "nvme_io_md": false, 00:26:00.256 "write_zeroes": true, 00:26:00.256 "zcopy": true, 00:26:00.256 "get_zone_info": false, 00:26:00.256 "zone_management": false, 00:26:00.256 "zone_append": false, 00:26:00.256 "compare": false, 00:26:00.256 "compare_and_write": false, 00:26:00.256 "abort": true, 00:26:00.256 "seek_hole": false, 00:26:00.256 
"seek_data": false, 00:26:00.256 "copy": true, 00:26:00.256 "nvme_iov_md": false 00:26:00.256 }, 00:26:00.256 "memory_domains": [ 00:26:00.256 { 00:26:00.256 "dma_device_id": "system", 00:26:00.256 "dma_device_type": 1 00:26:00.256 }, 00:26:00.256 { 00:26:00.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:00.256 "dma_device_type": 2 00:26:00.256 } 00:26:00.256 ], 00:26:00.256 "driver_specific": {} 00:26:00.256 } 00:26:00.256 ] 00:26:00.256 11:08:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:26:00.256 11:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:00.256 11:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:00.256 11:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:00.256 11:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:00.256 11:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:00.256 11:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:00.256 11:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:00.256 11:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:00.256 11:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:00.256 11:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:00.256 11:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:00.256 11:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:00.256 11:08:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:00.256 11:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:00.515 11:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:00.515 "name": "Existed_Raid", 00:26:00.515 "uuid": "a642d4ee-3151-4dd3-95f4-4633e8704794", 00:26:00.515 "strip_size_kb": 0, 00:26:00.515 "state": "configuring", 00:26:00.515 "raid_level": "raid1", 00:26:00.515 "superblock": true, 00:26:00.515 "num_base_bdevs": 4, 00:26:00.515 "num_base_bdevs_discovered": 2, 00:26:00.515 "num_base_bdevs_operational": 4, 00:26:00.515 "base_bdevs_list": [ 00:26:00.515 { 00:26:00.515 "name": "BaseBdev1", 00:26:00.515 "uuid": "103da975-54ae-49a3-918e-fddb44000f8d", 00:26:00.515 "is_configured": true, 00:26:00.515 "data_offset": 2048, 00:26:00.515 "data_size": 63488 00:26:00.515 }, 00:26:00.515 { 00:26:00.515 "name": "BaseBdev2", 00:26:00.515 "uuid": "ac48db84-e626-4a2b-b65b-69dec60d33c8", 00:26:00.515 "is_configured": true, 00:26:00.515 "data_offset": 2048, 00:26:00.515 "data_size": 63488 00:26:00.515 }, 00:26:00.515 { 00:26:00.515 "name": "BaseBdev3", 00:26:00.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:00.515 "is_configured": false, 00:26:00.515 "data_offset": 0, 00:26:00.515 "data_size": 0 00:26:00.515 }, 00:26:00.515 { 00:26:00.515 "name": "BaseBdev4", 00:26:00.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:00.515 "is_configured": false, 00:26:00.515 "data_offset": 0, 00:26:00.515 "data_size": 0 00:26:00.515 } 00:26:00.515 ] 00:26:00.515 }' 00:26:00.515 11:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:00.516 11:08:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:01.083 11:08:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:01.083 [2024-07-25 11:08:08.143749] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:01.083 BaseBdev3 00:26:01.083 11:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:26:01.083 11:08:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:26:01.083 11:08:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:01.083 11:08:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:26:01.083 11:08:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:01.083 11:08:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:01.083 11:08:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:01.340 11:08:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:01.599 [ 00:26:01.599 { 00:26:01.599 "name": "BaseBdev3", 00:26:01.599 "aliases": [ 00:26:01.599 "f23c9e9f-559c-43c9-84cf-241f0766f4ec" 00:26:01.599 ], 00:26:01.599 "product_name": "Malloc disk", 00:26:01.599 "block_size": 512, 00:26:01.599 "num_blocks": 65536, 00:26:01.599 "uuid": "f23c9e9f-559c-43c9-84cf-241f0766f4ec", 00:26:01.599 "assigned_rate_limits": { 00:26:01.599 "rw_ios_per_sec": 0, 00:26:01.599 "rw_mbytes_per_sec": 0, 00:26:01.599 "r_mbytes_per_sec": 0, 00:26:01.599 "w_mbytes_per_sec": 0 00:26:01.599 }, 
00:26:01.599 "claimed": true, 00:26:01.599 "claim_type": "exclusive_write", 00:26:01.599 "zoned": false, 00:26:01.599 "supported_io_types": { 00:26:01.599 "read": true, 00:26:01.599 "write": true, 00:26:01.599 "unmap": true, 00:26:01.599 "flush": true, 00:26:01.599 "reset": true, 00:26:01.599 "nvme_admin": false, 00:26:01.599 "nvme_io": false, 00:26:01.599 "nvme_io_md": false, 00:26:01.599 "write_zeroes": true, 00:26:01.599 "zcopy": true, 00:26:01.599 "get_zone_info": false, 00:26:01.599 "zone_management": false, 00:26:01.599 "zone_append": false, 00:26:01.599 "compare": false, 00:26:01.599 "compare_and_write": false, 00:26:01.599 "abort": true, 00:26:01.599 "seek_hole": false, 00:26:01.599 "seek_data": false, 00:26:01.599 "copy": true, 00:26:01.599 "nvme_iov_md": false 00:26:01.599 }, 00:26:01.599 "memory_domains": [ 00:26:01.599 { 00:26:01.599 "dma_device_id": "system", 00:26:01.599 "dma_device_type": 1 00:26:01.599 }, 00:26:01.599 { 00:26:01.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:01.599 "dma_device_type": 2 00:26:01.599 } 00:26:01.599 ], 00:26:01.599 "driver_specific": {} 00:26:01.599 } 00:26:01.599 ] 00:26:01.599 11:08:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:26:01.599 11:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:01.599 11:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:01.599 11:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:01.599 11:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:01.599 11:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:01.599 11:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:01.599 11:08:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:01.599 11:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:01.599 11:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:01.599 11:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:01.599 11:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:01.599 11:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:01.599 11:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:01.599 11:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:01.858 11:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:01.858 "name": "Existed_Raid", 00:26:01.858 "uuid": "a642d4ee-3151-4dd3-95f4-4633e8704794", 00:26:01.858 "strip_size_kb": 0, 00:26:01.858 "state": "configuring", 00:26:01.858 "raid_level": "raid1", 00:26:01.858 "superblock": true, 00:26:01.858 "num_base_bdevs": 4, 00:26:01.858 "num_base_bdevs_discovered": 3, 00:26:01.858 "num_base_bdevs_operational": 4, 00:26:01.858 "base_bdevs_list": [ 00:26:01.858 { 00:26:01.858 "name": "BaseBdev1", 00:26:01.858 "uuid": "103da975-54ae-49a3-918e-fddb44000f8d", 00:26:01.858 "is_configured": true, 00:26:01.858 "data_offset": 2048, 00:26:01.858 "data_size": 63488 00:26:01.858 }, 00:26:01.858 { 00:26:01.858 "name": "BaseBdev2", 00:26:01.858 "uuid": "ac48db84-e626-4a2b-b65b-69dec60d33c8", 00:26:01.858 "is_configured": true, 00:26:01.858 "data_offset": 2048, 00:26:01.858 "data_size": 63488 00:26:01.858 }, 00:26:01.858 { 00:26:01.858 "name": 
"BaseBdev3", 00:26:01.858 "uuid": "f23c9e9f-559c-43c9-84cf-241f0766f4ec", 00:26:01.858 "is_configured": true, 00:26:01.858 "data_offset": 2048, 00:26:01.858 "data_size": 63488 00:26:01.858 }, 00:26:01.858 { 00:26:01.858 "name": "BaseBdev4", 00:26:01.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:01.858 "is_configured": false, 00:26:01.858 "data_offset": 0, 00:26:01.858 "data_size": 0 00:26:01.858 } 00:26:01.858 ] 00:26:01.858 }' 00:26:01.858 11:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:01.858 11:08:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:02.427 11:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:26:02.427 [2024-07-25 11:08:09.534121] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:02.427 [2024-07-25 11:08:09.534419] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:02.427 [2024-07-25 11:08:09.534444] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:02.427 [2024-07-25 11:08:09.534773] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:26:02.427 [2024-07-25 11:08:09.535013] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:02.427 [2024-07-25 11:08:09.535032] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:26:02.427 BaseBdev4 00:26:02.427 [2024-07-25 11:08:09.535222] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:02.687 11:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:26:02.687 11:08:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:26:02.687 11:08:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:02.687 11:08:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:26:02.687 11:08:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:02.687 11:08:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:02.687 11:08:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:02.687 11:08:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:02.946 [ 00:26:02.946 { 00:26:02.946 "name": "BaseBdev4", 00:26:02.946 "aliases": [ 00:26:02.946 "12fa1fb6-41c3-4c7e-9483-dcce0c6adc83" 00:26:02.946 ], 00:26:02.946 "product_name": "Malloc disk", 00:26:02.946 "block_size": 512, 00:26:02.946 "num_blocks": 65536, 00:26:02.946 "uuid": "12fa1fb6-41c3-4c7e-9483-dcce0c6adc83", 00:26:02.946 "assigned_rate_limits": { 00:26:02.946 "rw_ios_per_sec": 0, 00:26:02.946 "rw_mbytes_per_sec": 0, 00:26:02.946 "r_mbytes_per_sec": 0, 00:26:02.946 "w_mbytes_per_sec": 0 00:26:02.946 }, 00:26:02.946 "claimed": true, 00:26:02.946 "claim_type": "exclusive_write", 00:26:02.946 "zoned": false, 00:26:02.946 "supported_io_types": { 00:26:02.946 "read": true, 00:26:02.946 "write": true, 00:26:02.946 "unmap": true, 00:26:02.946 "flush": true, 00:26:02.946 "reset": true, 00:26:02.946 "nvme_admin": false, 00:26:02.946 "nvme_io": false, 00:26:02.946 "nvme_io_md": false, 00:26:02.946 "write_zeroes": true, 00:26:02.946 "zcopy": true, 00:26:02.946 "get_zone_info": false, 00:26:02.946 "zone_management": false, 
00:26:02.946 "zone_append": false, 00:26:02.946 "compare": false, 00:26:02.946 "compare_and_write": false, 00:26:02.946 "abort": true, 00:26:02.946 "seek_hole": false, 00:26:02.946 "seek_data": false, 00:26:02.946 "copy": true, 00:26:02.946 "nvme_iov_md": false 00:26:02.946 }, 00:26:02.946 "memory_domains": [ 00:26:02.946 { 00:26:02.946 "dma_device_id": "system", 00:26:02.946 "dma_device_type": 1 00:26:02.946 }, 00:26:02.946 { 00:26:02.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:02.946 "dma_device_type": 2 00:26:02.946 } 00:26:02.946 ], 00:26:02.946 "driver_specific": {} 00:26:02.946 } 00:26:02.946 ] 00:26:02.946 11:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:26:02.946 11:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:02.946 11:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:02.946 11:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:26:02.946 11:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:02.946 11:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:02.946 11:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:02.946 11:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:02.947 11:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:02.947 11:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:02.947 11:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:02.947 11:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:26:02.947 11:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:02.947 11:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:02.947 11:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:03.206 11:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:03.206 "name": "Existed_Raid", 00:26:03.206 "uuid": "a642d4ee-3151-4dd3-95f4-4633e8704794", 00:26:03.206 "strip_size_kb": 0, 00:26:03.206 "state": "online", 00:26:03.206 "raid_level": "raid1", 00:26:03.206 "superblock": true, 00:26:03.206 "num_base_bdevs": 4, 00:26:03.206 "num_base_bdevs_discovered": 4, 00:26:03.206 "num_base_bdevs_operational": 4, 00:26:03.206 "base_bdevs_list": [ 00:26:03.206 { 00:26:03.206 "name": "BaseBdev1", 00:26:03.206 "uuid": "103da975-54ae-49a3-918e-fddb44000f8d", 00:26:03.206 "is_configured": true, 00:26:03.206 "data_offset": 2048, 00:26:03.206 "data_size": 63488 00:26:03.206 }, 00:26:03.206 { 00:26:03.206 "name": "BaseBdev2", 00:26:03.206 "uuid": "ac48db84-e626-4a2b-b65b-69dec60d33c8", 00:26:03.206 "is_configured": true, 00:26:03.206 "data_offset": 2048, 00:26:03.206 "data_size": 63488 00:26:03.206 }, 00:26:03.206 { 00:26:03.206 "name": "BaseBdev3", 00:26:03.206 "uuid": "f23c9e9f-559c-43c9-84cf-241f0766f4ec", 00:26:03.206 "is_configured": true, 00:26:03.206 "data_offset": 2048, 00:26:03.206 "data_size": 63488 00:26:03.206 }, 00:26:03.206 { 00:26:03.206 "name": "BaseBdev4", 00:26:03.206 "uuid": "12fa1fb6-41c3-4c7e-9483-dcce0c6adc83", 00:26:03.206 "is_configured": true, 00:26:03.206 "data_offset": 2048, 00:26:03.206 "data_size": 63488 00:26:03.206 } 00:26:03.206 ] 00:26:03.206 }' 00:26:03.206 11:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:26:03.206 11:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:03.775 11:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:26:03.775 11:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:26:03.775 11:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:03.775 11:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:03.775 11:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:03.775 11:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:26:03.775 11:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:03.775 11:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:04.034 [2024-07-25 11:08:11.030667] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:04.034 11:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:04.034 "name": "Existed_Raid", 00:26:04.034 "aliases": [ 00:26:04.034 "a642d4ee-3151-4dd3-95f4-4633e8704794" 00:26:04.034 ], 00:26:04.034 "product_name": "Raid Volume", 00:26:04.034 "block_size": 512, 00:26:04.034 "num_blocks": 63488, 00:26:04.034 "uuid": "a642d4ee-3151-4dd3-95f4-4633e8704794", 00:26:04.034 "assigned_rate_limits": { 00:26:04.034 "rw_ios_per_sec": 0, 00:26:04.034 "rw_mbytes_per_sec": 0, 00:26:04.034 "r_mbytes_per_sec": 0, 00:26:04.034 "w_mbytes_per_sec": 0 00:26:04.034 }, 00:26:04.034 "claimed": false, 00:26:04.034 "zoned": false, 00:26:04.034 "supported_io_types": { 00:26:04.034 "read": true, 00:26:04.034 "write": true, 
00:26:04.034 "unmap": false, 00:26:04.034 "flush": false, 00:26:04.034 "reset": true, 00:26:04.034 "nvme_admin": false, 00:26:04.034 "nvme_io": false, 00:26:04.034 "nvme_io_md": false, 00:26:04.034 "write_zeroes": true, 00:26:04.034 "zcopy": false, 00:26:04.034 "get_zone_info": false, 00:26:04.034 "zone_management": false, 00:26:04.034 "zone_append": false, 00:26:04.034 "compare": false, 00:26:04.034 "compare_and_write": false, 00:26:04.034 "abort": false, 00:26:04.034 "seek_hole": false, 00:26:04.034 "seek_data": false, 00:26:04.034 "copy": false, 00:26:04.034 "nvme_iov_md": false 00:26:04.034 }, 00:26:04.034 "memory_domains": [ 00:26:04.034 { 00:26:04.034 "dma_device_id": "system", 00:26:04.034 "dma_device_type": 1 00:26:04.034 }, 00:26:04.034 { 00:26:04.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:04.034 "dma_device_type": 2 00:26:04.034 }, 00:26:04.034 { 00:26:04.034 "dma_device_id": "system", 00:26:04.034 "dma_device_type": 1 00:26:04.034 }, 00:26:04.034 { 00:26:04.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:04.034 "dma_device_type": 2 00:26:04.034 }, 00:26:04.034 { 00:26:04.034 "dma_device_id": "system", 00:26:04.034 "dma_device_type": 1 00:26:04.034 }, 00:26:04.034 { 00:26:04.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:04.034 "dma_device_type": 2 00:26:04.034 }, 00:26:04.034 { 00:26:04.034 "dma_device_id": "system", 00:26:04.034 "dma_device_type": 1 00:26:04.034 }, 00:26:04.034 { 00:26:04.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:04.034 "dma_device_type": 2 00:26:04.034 } 00:26:04.034 ], 00:26:04.034 "driver_specific": { 00:26:04.034 "raid": { 00:26:04.034 "uuid": "a642d4ee-3151-4dd3-95f4-4633e8704794", 00:26:04.034 "strip_size_kb": 0, 00:26:04.034 "state": "online", 00:26:04.034 "raid_level": "raid1", 00:26:04.034 "superblock": true, 00:26:04.034 "num_base_bdevs": 4, 00:26:04.034 "num_base_bdevs_discovered": 4, 00:26:04.034 "num_base_bdevs_operational": 4, 00:26:04.034 "base_bdevs_list": [ 00:26:04.034 { 00:26:04.035 "name": 
"BaseBdev1", 00:26:04.035 "uuid": "103da975-54ae-49a3-918e-fddb44000f8d", 00:26:04.035 "is_configured": true, 00:26:04.035 "data_offset": 2048, 00:26:04.035 "data_size": 63488 00:26:04.035 }, 00:26:04.035 { 00:26:04.035 "name": "BaseBdev2", 00:26:04.035 "uuid": "ac48db84-e626-4a2b-b65b-69dec60d33c8", 00:26:04.035 "is_configured": true, 00:26:04.035 "data_offset": 2048, 00:26:04.035 "data_size": 63488 00:26:04.035 }, 00:26:04.035 { 00:26:04.035 "name": "BaseBdev3", 00:26:04.035 "uuid": "f23c9e9f-559c-43c9-84cf-241f0766f4ec", 00:26:04.035 "is_configured": true, 00:26:04.035 "data_offset": 2048, 00:26:04.035 "data_size": 63488 00:26:04.035 }, 00:26:04.035 { 00:26:04.035 "name": "BaseBdev4", 00:26:04.035 "uuid": "12fa1fb6-41c3-4c7e-9483-dcce0c6adc83", 00:26:04.035 "is_configured": true, 00:26:04.035 "data_offset": 2048, 00:26:04.035 "data_size": 63488 00:26:04.035 } 00:26:04.035 ] 00:26:04.035 } 00:26:04.035 } 00:26:04.035 }' 00:26:04.035 11:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:04.035 11:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:26:04.035 BaseBdev2 00:26:04.035 BaseBdev3 00:26:04.035 BaseBdev4' 00:26:04.035 11:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:04.035 11:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:04.035 11:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:26:04.294 11:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:04.294 "name": "BaseBdev1", 00:26:04.294 "aliases": [ 00:26:04.294 "103da975-54ae-49a3-918e-fddb44000f8d" 00:26:04.294 ], 00:26:04.294 "product_name": "Malloc 
disk", 00:26:04.294 "block_size": 512, 00:26:04.294 "num_blocks": 65536, 00:26:04.294 "uuid": "103da975-54ae-49a3-918e-fddb44000f8d", 00:26:04.294 "assigned_rate_limits": { 00:26:04.294 "rw_ios_per_sec": 0, 00:26:04.294 "rw_mbytes_per_sec": 0, 00:26:04.294 "r_mbytes_per_sec": 0, 00:26:04.294 "w_mbytes_per_sec": 0 00:26:04.294 }, 00:26:04.294 "claimed": true, 00:26:04.294 "claim_type": "exclusive_write", 00:26:04.294 "zoned": false, 00:26:04.294 "supported_io_types": { 00:26:04.294 "read": true, 00:26:04.294 "write": true, 00:26:04.294 "unmap": true, 00:26:04.294 "flush": true, 00:26:04.294 "reset": true, 00:26:04.294 "nvme_admin": false, 00:26:04.294 "nvme_io": false, 00:26:04.294 "nvme_io_md": false, 00:26:04.294 "write_zeroes": true, 00:26:04.294 "zcopy": true, 00:26:04.294 "get_zone_info": false, 00:26:04.294 "zone_management": false, 00:26:04.294 "zone_append": false, 00:26:04.294 "compare": false, 00:26:04.294 "compare_and_write": false, 00:26:04.294 "abort": true, 00:26:04.294 "seek_hole": false, 00:26:04.294 "seek_data": false, 00:26:04.294 "copy": true, 00:26:04.294 "nvme_iov_md": false 00:26:04.294 }, 00:26:04.294 "memory_domains": [ 00:26:04.294 { 00:26:04.294 "dma_device_id": "system", 00:26:04.294 "dma_device_type": 1 00:26:04.294 }, 00:26:04.294 { 00:26:04.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:04.294 "dma_device_type": 2 00:26:04.294 } 00:26:04.294 ], 00:26:04.294 "driver_specific": {} 00:26:04.294 }' 00:26:04.294 11:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:04.294 11:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:04.294 11:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:04.294 11:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:04.553 11:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:04.553 11:08:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:04.553 11:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:04.553 11:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:04.553 11:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:04.553 11:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:04.553 11:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:04.553 11:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:04.553 11:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:04.553 11:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:04.553 11:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:04.812 11:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:04.812 "name": "BaseBdev2", 00:26:04.812 "aliases": [ 00:26:04.812 "ac48db84-e626-4a2b-b65b-69dec60d33c8" 00:26:04.812 ], 00:26:04.812 "product_name": "Malloc disk", 00:26:04.812 "block_size": 512, 00:26:04.812 "num_blocks": 65536, 00:26:04.812 "uuid": "ac48db84-e626-4a2b-b65b-69dec60d33c8", 00:26:04.812 "assigned_rate_limits": { 00:26:04.812 "rw_ios_per_sec": 0, 00:26:04.812 "rw_mbytes_per_sec": 0, 00:26:04.812 "r_mbytes_per_sec": 0, 00:26:04.812 "w_mbytes_per_sec": 0 00:26:04.812 }, 00:26:04.812 "claimed": true, 00:26:04.812 "claim_type": "exclusive_write", 00:26:04.812 "zoned": false, 00:26:04.812 "supported_io_types": { 00:26:04.812 "read": true, 00:26:04.812 "write": true, 00:26:04.812 "unmap": true, 00:26:04.812 
"flush": true, 00:26:04.812 "reset": true, 00:26:04.812 "nvme_admin": false, 00:26:04.812 "nvme_io": false, 00:26:04.812 "nvme_io_md": false, 00:26:04.812 "write_zeroes": true, 00:26:04.812 "zcopy": true, 00:26:04.812 "get_zone_info": false, 00:26:04.812 "zone_management": false, 00:26:04.812 "zone_append": false, 00:26:04.812 "compare": false, 00:26:04.812 "compare_and_write": false, 00:26:04.812 "abort": true, 00:26:04.812 "seek_hole": false, 00:26:04.812 "seek_data": false, 00:26:04.813 "copy": true, 00:26:04.813 "nvme_iov_md": false 00:26:04.813 }, 00:26:04.813 "memory_domains": [ 00:26:04.813 { 00:26:04.813 "dma_device_id": "system", 00:26:04.813 "dma_device_type": 1 00:26:04.813 }, 00:26:04.813 { 00:26:04.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:04.813 "dma_device_type": 2 00:26:04.813 } 00:26:04.813 ], 00:26:04.813 "driver_specific": {} 00:26:04.813 }' 00:26:04.813 11:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:04.813 11:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:05.072 11:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:05.072 11:08:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:05.072 11:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:05.072 11:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:05.072 11:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:05.072 11:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:05.072 11:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:05.072 11:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:05.331 11:08:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:05.331 11:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:05.331 11:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:05.331 11:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:26:05.331 11:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:05.590 11:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:05.590 "name": "BaseBdev3", 00:26:05.590 "aliases": [ 00:26:05.590 "f23c9e9f-559c-43c9-84cf-241f0766f4ec" 00:26:05.590 ], 00:26:05.590 "product_name": "Malloc disk", 00:26:05.590 "block_size": 512, 00:26:05.590 "num_blocks": 65536, 00:26:05.590 "uuid": "f23c9e9f-559c-43c9-84cf-241f0766f4ec", 00:26:05.590 "assigned_rate_limits": { 00:26:05.590 "rw_ios_per_sec": 0, 00:26:05.590 "rw_mbytes_per_sec": 0, 00:26:05.590 "r_mbytes_per_sec": 0, 00:26:05.590 "w_mbytes_per_sec": 0 00:26:05.590 }, 00:26:05.590 "claimed": true, 00:26:05.590 "claim_type": "exclusive_write", 00:26:05.590 "zoned": false, 00:26:05.590 "supported_io_types": { 00:26:05.590 "read": true, 00:26:05.590 "write": true, 00:26:05.590 "unmap": true, 00:26:05.590 "flush": true, 00:26:05.590 "reset": true, 00:26:05.590 "nvme_admin": false, 00:26:05.590 "nvme_io": false, 00:26:05.590 "nvme_io_md": false, 00:26:05.590 "write_zeroes": true, 00:26:05.590 "zcopy": true, 00:26:05.590 "get_zone_info": false, 00:26:05.590 "zone_management": false, 00:26:05.590 "zone_append": false, 00:26:05.590 "compare": false, 00:26:05.590 "compare_and_write": false, 00:26:05.590 "abort": true, 00:26:05.590 "seek_hole": false, 00:26:05.590 "seek_data": false, 00:26:05.590 "copy": true, 00:26:05.590 "nvme_iov_md": 
false 00:26:05.590 }, 00:26:05.590 "memory_domains": [ 00:26:05.590 { 00:26:05.590 "dma_device_id": "system", 00:26:05.590 "dma_device_type": 1 00:26:05.590 }, 00:26:05.590 { 00:26:05.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:05.590 "dma_device_type": 2 00:26:05.590 } 00:26:05.590 ], 00:26:05.590 "driver_specific": {} 00:26:05.590 }' 00:26:05.590 11:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:05.590 11:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:05.590 11:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:05.591 11:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:05.591 11:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:05.591 11:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:05.591 11:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:05.591 11:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:05.850 11:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:05.850 11:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:05.850 11:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:05.850 11:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:05.850 11:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:05.850 11:08:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:26:05.850 11:08:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:06.109 11:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:06.110 "name": "BaseBdev4", 00:26:06.110 "aliases": [ 00:26:06.110 "12fa1fb6-41c3-4c7e-9483-dcce0c6adc83" 00:26:06.110 ], 00:26:06.110 "product_name": "Malloc disk", 00:26:06.110 "block_size": 512, 00:26:06.110 "num_blocks": 65536, 00:26:06.110 "uuid": "12fa1fb6-41c3-4c7e-9483-dcce0c6adc83", 00:26:06.110 "assigned_rate_limits": { 00:26:06.110 "rw_ios_per_sec": 0, 00:26:06.110 "rw_mbytes_per_sec": 0, 00:26:06.110 "r_mbytes_per_sec": 0, 00:26:06.110 "w_mbytes_per_sec": 0 00:26:06.110 }, 00:26:06.110 "claimed": true, 00:26:06.110 "claim_type": "exclusive_write", 00:26:06.110 "zoned": false, 00:26:06.110 "supported_io_types": { 00:26:06.110 "read": true, 00:26:06.110 "write": true, 00:26:06.110 "unmap": true, 00:26:06.110 "flush": true, 00:26:06.110 "reset": true, 00:26:06.110 "nvme_admin": false, 00:26:06.110 "nvme_io": false, 00:26:06.110 "nvme_io_md": false, 00:26:06.110 "write_zeroes": true, 00:26:06.110 "zcopy": true, 00:26:06.110 "get_zone_info": false, 00:26:06.110 "zone_management": false, 00:26:06.110 "zone_append": false, 00:26:06.110 "compare": false, 00:26:06.110 "compare_and_write": false, 00:26:06.110 "abort": true, 00:26:06.110 "seek_hole": false, 00:26:06.110 "seek_data": false, 00:26:06.110 "copy": true, 00:26:06.110 "nvme_iov_md": false 00:26:06.110 }, 00:26:06.110 "memory_domains": [ 00:26:06.110 { 00:26:06.110 "dma_device_id": "system", 00:26:06.110 "dma_device_type": 1 00:26:06.110 }, 00:26:06.110 { 00:26:06.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:06.110 "dma_device_type": 2 00:26:06.110 } 00:26:06.110 ], 00:26:06.110 "driver_specific": {} 00:26:06.110 }' 00:26:06.110 11:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:06.110 11:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:26:06.110 11:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:06.110 11:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:06.110 11:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:06.110 11:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:06.110 11:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:06.369 11:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:06.369 11:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:06.369 11:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:06.369 11:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:06.369 11:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:06.369 11:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:06.627 [2024-07-25 11:08:13.585498] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:06.627 11:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:26:06.627 11:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:26:06.627 11:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:06.627 11:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:26:06.627 11:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:26:06.627 11:08:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:26:06.627 11:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:06.627 11:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:06.627 11:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:06.627 11:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:06.627 11:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:06.627 11:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:06.627 11:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:06.627 11:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:06.627 11:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:06.627 11:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:06.627 11:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:06.886 11:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:06.886 "name": "Existed_Raid", 00:26:06.886 "uuid": "a642d4ee-3151-4dd3-95f4-4633e8704794", 00:26:06.886 "strip_size_kb": 0, 00:26:06.886 "state": "online", 00:26:06.886 "raid_level": "raid1", 00:26:06.886 "superblock": true, 00:26:06.886 "num_base_bdevs": 4, 00:26:06.886 "num_base_bdevs_discovered": 3, 00:26:06.886 "num_base_bdevs_operational": 3, 00:26:06.886 "base_bdevs_list": [ 00:26:06.886 { 00:26:06.886 "name": null, 
00:26:06.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:06.886 "is_configured": false, 00:26:06.886 "data_offset": 2048, 00:26:06.886 "data_size": 63488 00:26:06.886 }, 00:26:06.886 { 00:26:06.886 "name": "BaseBdev2", 00:26:06.886 "uuid": "ac48db84-e626-4a2b-b65b-69dec60d33c8", 00:26:06.886 "is_configured": true, 00:26:06.886 "data_offset": 2048, 00:26:06.886 "data_size": 63488 00:26:06.886 }, 00:26:06.886 { 00:26:06.886 "name": "BaseBdev3", 00:26:06.886 "uuid": "f23c9e9f-559c-43c9-84cf-241f0766f4ec", 00:26:06.886 "is_configured": true, 00:26:06.886 "data_offset": 2048, 00:26:06.886 "data_size": 63488 00:26:06.886 }, 00:26:06.886 { 00:26:06.886 "name": "BaseBdev4", 00:26:06.886 "uuid": "12fa1fb6-41c3-4c7e-9483-dcce0c6adc83", 00:26:06.886 "is_configured": true, 00:26:06.886 "data_offset": 2048, 00:26:06.886 "data_size": 63488 00:26:06.886 } 00:26:06.886 ] 00:26:06.886 }' 00:26:06.886 11:08:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:06.886 11:08:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:07.454 11:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:26:07.454 11:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:07.454 11:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:07.454 11:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:07.713 11:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:07.713 11:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:07.713 11:08:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:26:07.973 [2024-07-25 11:08:14.899712] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:07.973 11:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:07.973 11:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:07.973 11:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:07.973 11:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:08.231 11:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:08.231 11:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:08.231 11:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:26:08.491 [2024-07-25 11:08:15.479427] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:08.751 11:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:08.751 11:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:08.751 11:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:08.751 11:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:08.751 11:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:08.751 11:08:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:08.751 11:08:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:26:09.011 [2024-07-25 11:08:16.054284] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:26:09.011 [2024-07-25 11:08:16.054392] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:09.270 [2024-07-25 11:08:16.189546] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:09.270 [2024-07-25 11:08:16.189601] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:09.270 [2024-07-25 11:08:16.189619] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:26:09.270 11:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:09.270 11:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:09.270 11:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:09.270 11:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:26:09.530 11:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:26:09.530 11:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:26:09.530 11:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:26:09.530 11:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:26:09.530 11:08:16 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:09.530 11:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:09.789 BaseBdev2 00:26:09.789 11:08:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:26:09.789 11:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:26:09.789 11:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:09.789 11:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:26:09.789 11:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:09.789 11:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:09.789 11:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:10.048 11:08:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:10.048 [ 00:26:10.048 { 00:26:10.048 "name": "BaseBdev2", 00:26:10.048 "aliases": [ 00:26:10.048 "cd916c4f-84a3-4c97-b049-1451c9887fa6" 00:26:10.048 ], 00:26:10.048 "product_name": "Malloc disk", 00:26:10.048 "block_size": 512, 00:26:10.048 "num_blocks": 65536, 00:26:10.048 "uuid": "cd916c4f-84a3-4c97-b049-1451c9887fa6", 00:26:10.048 "assigned_rate_limits": { 00:26:10.048 "rw_ios_per_sec": 0, 00:26:10.048 "rw_mbytes_per_sec": 0, 00:26:10.048 "r_mbytes_per_sec": 0, 00:26:10.048 "w_mbytes_per_sec": 0 00:26:10.048 }, 00:26:10.048 "claimed": false, 00:26:10.048 "zoned": 
false, 00:26:10.048 "supported_io_types": { 00:26:10.048 "read": true, 00:26:10.048 "write": true, 00:26:10.048 "unmap": true, 00:26:10.048 "flush": true, 00:26:10.048 "reset": true, 00:26:10.048 "nvme_admin": false, 00:26:10.048 "nvme_io": false, 00:26:10.048 "nvme_io_md": false, 00:26:10.048 "write_zeroes": true, 00:26:10.048 "zcopy": true, 00:26:10.048 "get_zone_info": false, 00:26:10.048 "zone_management": false, 00:26:10.048 "zone_append": false, 00:26:10.048 "compare": false, 00:26:10.048 "compare_and_write": false, 00:26:10.048 "abort": true, 00:26:10.048 "seek_hole": false, 00:26:10.048 "seek_data": false, 00:26:10.048 "copy": true, 00:26:10.048 "nvme_iov_md": false 00:26:10.048 }, 00:26:10.048 "memory_domains": [ 00:26:10.048 { 00:26:10.048 "dma_device_id": "system", 00:26:10.048 "dma_device_type": 1 00:26:10.048 }, 00:26:10.048 { 00:26:10.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:10.048 "dma_device_type": 2 00:26:10.048 } 00:26:10.048 ], 00:26:10.048 "driver_specific": {} 00:26:10.048 } 00:26:10.048 ] 00:26:10.048 11:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:26:10.048 11:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:26:10.048 11:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:10.048 11:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:10.307 BaseBdev3 00:26:10.566 11:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:26:10.566 11:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:26:10.566 11:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:10.566 11:08:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:26:10.566 11:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:10.566 11:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:10.566 11:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:10.566 11:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:10.825 [ 00:26:10.825 { 00:26:10.825 "name": "BaseBdev3", 00:26:10.825 "aliases": [ 00:26:10.825 "62d3a1f0-c89c-4a54-9be3-7920788a7b4b" 00:26:10.825 ], 00:26:10.825 "product_name": "Malloc disk", 00:26:10.825 "block_size": 512, 00:26:10.825 "num_blocks": 65536, 00:26:10.825 "uuid": "62d3a1f0-c89c-4a54-9be3-7920788a7b4b", 00:26:10.825 "assigned_rate_limits": { 00:26:10.825 "rw_ios_per_sec": 0, 00:26:10.825 "rw_mbytes_per_sec": 0, 00:26:10.825 "r_mbytes_per_sec": 0, 00:26:10.825 "w_mbytes_per_sec": 0 00:26:10.825 }, 00:26:10.825 "claimed": false, 00:26:10.825 "zoned": false, 00:26:10.825 "supported_io_types": { 00:26:10.825 "read": true, 00:26:10.825 "write": true, 00:26:10.825 "unmap": true, 00:26:10.825 "flush": true, 00:26:10.825 "reset": true, 00:26:10.825 "nvme_admin": false, 00:26:10.825 "nvme_io": false, 00:26:10.825 "nvme_io_md": false, 00:26:10.825 "write_zeroes": true, 00:26:10.825 "zcopy": true, 00:26:10.825 "get_zone_info": false, 00:26:10.825 "zone_management": false, 00:26:10.825 "zone_append": false, 00:26:10.825 "compare": false, 00:26:10.825 "compare_and_write": false, 00:26:10.825 "abort": true, 00:26:10.825 "seek_hole": false, 00:26:10.825 "seek_data": false, 00:26:10.825 "copy": true, 00:26:10.825 "nvme_iov_md": 
false 00:26:10.825 }, 00:26:10.825 "memory_domains": [ 00:26:10.825 { 00:26:10.825 "dma_device_id": "system", 00:26:10.825 "dma_device_type": 1 00:26:10.825 }, 00:26:10.825 { 00:26:10.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:10.825 "dma_device_type": 2 00:26:10.825 } 00:26:10.825 ], 00:26:10.825 "driver_specific": {} 00:26:10.825 } 00:26:10.825 ] 00:26:10.825 11:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:26:10.825 11:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:26:10.825 11:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:10.825 11:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:26:11.084 BaseBdev4 00:26:11.084 11:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:26:11.084 11:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:26:11.084 11:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:11.084 11:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:26:11.084 11:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:11.084 11:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:11.084 11:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:11.343 11:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:11.602 [ 00:26:11.602 { 00:26:11.602 "name": "BaseBdev4", 00:26:11.602 "aliases": [ 00:26:11.602 "8a9e20eb-bd2c-4f6a-ad72-966debf309a2" 00:26:11.602 ], 00:26:11.602 "product_name": "Malloc disk", 00:26:11.602 "block_size": 512, 00:26:11.602 "num_blocks": 65536, 00:26:11.602 "uuid": "8a9e20eb-bd2c-4f6a-ad72-966debf309a2", 00:26:11.602 "assigned_rate_limits": { 00:26:11.602 "rw_ios_per_sec": 0, 00:26:11.602 "rw_mbytes_per_sec": 0, 00:26:11.602 "r_mbytes_per_sec": 0, 00:26:11.602 "w_mbytes_per_sec": 0 00:26:11.602 }, 00:26:11.602 "claimed": false, 00:26:11.602 "zoned": false, 00:26:11.602 "supported_io_types": { 00:26:11.602 "read": true, 00:26:11.602 "write": true, 00:26:11.602 "unmap": true, 00:26:11.602 "flush": true, 00:26:11.602 "reset": true, 00:26:11.602 "nvme_admin": false, 00:26:11.602 "nvme_io": false, 00:26:11.602 "nvme_io_md": false, 00:26:11.602 "write_zeroes": true, 00:26:11.602 "zcopy": true, 00:26:11.602 "get_zone_info": false, 00:26:11.602 "zone_management": false, 00:26:11.602 "zone_append": false, 00:26:11.602 "compare": false, 00:26:11.602 "compare_and_write": false, 00:26:11.602 "abort": true, 00:26:11.602 "seek_hole": false, 00:26:11.602 "seek_data": false, 00:26:11.602 "copy": true, 00:26:11.602 "nvme_iov_md": false 00:26:11.602 }, 00:26:11.602 "memory_domains": [ 00:26:11.602 { 00:26:11.602 "dma_device_id": "system", 00:26:11.602 "dma_device_type": 1 00:26:11.602 }, 00:26:11.602 { 00:26:11.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:11.602 "dma_device_type": 2 00:26:11.602 } 00:26:11.602 ], 00:26:11.602 "driver_specific": {} 00:26:11.602 } 00:26:11.602 ] 00:26:11.602 11:08:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:26:11.602 11:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:26:11.602 11:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 
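For context on the loop above: each `rpc.py ... bdev_malloc_create 32 512 -b BaseBdevN` call requests a 32 MiB malloc bdev with a 512-byte block size, which is why every `bdev_get_bdevs` dump in this trace reports `"block_size": 512` and `"num_blocks": 65536`. A minimal sketch of that size arithmetic (plain Python, independent of SPDK; the helper name is illustrative, not an SPDK API):

```python
# Size arithmetic behind `rpc.py bdev_malloc_create 32 512`:
# first positional argument is the bdev size in MiB, second is the block size in bytes.
def malloc_num_blocks(size_mib: int, block_size: int) -> int:
    """Number of blocks a malloc bdev of size_mib MiB with block_size-byte blocks exposes."""
    return size_mib * 1024 * 1024 // block_size

print(malloc_num_blocks(32, 512))  # 65536, matching "num_blocks" in the dumps above
```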
00:26:11.602 11:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:11.861 [2024-07-25 11:08:18.814247] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:11.861 [2024-07-25 11:08:18.814296] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:11.861 [2024-07-25 11:08:18.814324] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:11.861 [2024-07-25 11:08:18.816603] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:11.861 [2024-07-25 11:08:18.816659] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:11.861 11:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:11.861 11:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:11.861 11:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:11.861 11:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:11.861 11:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:11.861 11:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:11.861 11:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:11.861 11:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:11.861 11:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
00:26:11.861 11:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:11.862 11:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:11.862 11:08:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:12.121 11:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:12.121 "name": "Existed_Raid", 00:26:12.121 "uuid": "eaf624aa-3752-4be7-8825-37a30ed8dfbc", 00:26:12.121 "strip_size_kb": 0, 00:26:12.121 "state": "configuring", 00:26:12.121 "raid_level": "raid1", 00:26:12.121 "superblock": true, 00:26:12.121 "num_base_bdevs": 4, 00:26:12.121 "num_base_bdevs_discovered": 3, 00:26:12.121 "num_base_bdevs_operational": 4, 00:26:12.121 "base_bdevs_list": [ 00:26:12.121 { 00:26:12.121 "name": "BaseBdev1", 00:26:12.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:12.121 "is_configured": false, 00:26:12.121 "data_offset": 0, 00:26:12.121 "data_size": 0 00:26:12.121 }, 00:26:12.121 { 00:26:12.121 "name": "BaseBdev2", 00:26:12.121 "uuid": "cd916c4f-84a3-4c97-b049-1451c9887fa6", 00:26:12.121 "is_configured": true, 00:26:12.121 "data_offset": 2048, 00:26:12.121 "data_size": 63488 00:26:12.121 }, 00:26:12.121 { 00:26:12.121 "name": "BaseBdev3", 00:26:12.121 "uuid": "62d3a1f0-c89c-4a54-9be3-7920788a7b4b", 00:26:12.121 "is_configured": true, 00:26:12.121 "data_offset": 2048, 00:26:12.121 "data_size": 63488 00:26:12.121 }, 00:26:12.121 { 00:26:12.121 "name": "BaseBdev4", 00:26:12.121 "uuid": "8a9e20eb-bd2c-4f6a-ad72-966debf309a2", 00:26:12.121 "is_configured": true, 00:26:12.121 "data_offset": 2048, 00:26:12.121 "data_size": 63488 00:26:12.121 } 00:26:12.121 ] 00:26:12.121 }' 00:26:12.121 11:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:12.121 
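The `verify_raid_bdev_state` helper exercised above shells out to `bdev_raid_get_bdevs all`, filters with `jq -r '.[] | select(.name == "Existed_Raid")'`, and compares fields such as `state` and `num_base_bdevs_discovered` against the expected values. A hedged Python re-implementation of that selection and those checks, run against a trimmed copy of the JSON captured in this log rather than a live SPDK target:

```python
import json

# Trimmed from the bdev_raid_get_bdevs output captured above.
RAID_DUMP = json.loads("""
[
  {
    "name": "Existed_Raid",
    "state": "configuring",
    "raid_level": "raid1",
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 3,
    "num_base_bdevs_operational": 4,
    "base_bdevs_list": [
      {"name": "BaseBdev1", "is_configured": false},
      {"name": "BaseBdev2", "is_configured": true},
      {"name": "BaseBdev3", "is_configured": true},
      {"name": "BaseBdev4", "is_configured": true}
    ]
  }
]
""")

def verify_raid_bdev_state(dump, name, expected_state, expected_discovered):
    """Python equivalent of jq '.[] | select(.name == ...)' plus the field checks."""
    info = next(b for b in dump if b["name"] == name)
    assert info["state"] == expected_state
    assert info["num_base_bdevs_discovered"] == expected_discovered
    return info

info = verify_raid_bdev_state(RAID_DUMP, "Existed_Raid", "configuring", 3)
print(info["raid_level"])  # raid1
```

The same selection is what the later `jq '.[0].base_bdevs_list[N].is_configured'` probes in this trace do one field at a time.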
11:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:12.690 11:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:26:12.948 [2024-07-25 11:08:19.828928] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:12.948 11:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:12.948 11:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:12.948 11:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:12.948 11:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:12.948 11:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:12.948 11:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:12.948 11:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:12.948 11:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:12.948 11:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:12.948 11:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:12.948 11:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:12.948 11:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:12.948 11:08:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:12.948 "name": "Existed_Raid", 00:26:12.948 "uuid": "eaf624aa-3752-4be7-8825-37a30ed8dfbc", 00:26:12.948 "strip_size_kb": 0, 00:26:12.948 "state": "configuring", 00:26:12.948 "raid_level": "raid1", 00:26:12.948 "superblock": true, 00:26:12.948 "num_base_bdevs": 4, 00:26:12.948 "num_base_bdevs_discovered": 2, 00:26:12.948 "num_base_bdevs_operational": 4, 00:26:12.948 "base_bdevs_list": [ 00:26:12.948 { 00:26:12.948 "name": "BaseBdev1", 00:26:12.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:12.948 "is_configured": false, 00:26:12.948 "data_offset": 0, 00:26:12.948 "data_size": 0 00:26:12.948 }, 00:26:12.948 { 00:26:12.948 "name": null, 00:26:12.948 "uuid": "cd916c4f-84a3-4c97-b049-1451c9887fa6", 00:26:12.948 "is_configured": false, 00:26:12.948 "data_offset": 2048, 00:26:12.948 "data_size": 63488 00:26:12.948 }, 00:26:12.948 { 00:26:12.948 "name": "BaseBdev3", 00:26:12.948 "uuid": "62d3a1f0-c89c-4a54-9be3-7920788a7b4b", 00:26:12.948 "is_configured": true, 00:26:12.948 "data_offset": 2048, 00:26:12.948 "data_size": 63488 00:26:12.948 }, 00:26:12.948 { 00:26:12.948 "name": "BaseBdev4", 00:26:12.948 "uuid": "8a9e20eb-bd2c-4f6a-ad72-966debf309a2", 00:26:12.948 "is_configured": true, 00:26:12.948 "data_offset": 2048, 00:26:12.948 "data_size": 63488 00:26:12.948 } 00:26:12.948 ] 00:26:12.948 }' 00:26:12.948 11:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:12.948 11:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:13.513 11:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:13.513 11:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:13.772 11:08:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:26:13.772 11:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:14.031 [2024-07-25 11:08:21.064870] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:14.031 BaseBdev1 00:26:14.031 11:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:26:14.031 11:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:26:14.031 11:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:14.031 11:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:26:14.031 11:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:14.031 11:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:14.031 11:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:14.289 11:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:14.549 [ 00:26:14.549 { 00:26:14.549 "name": "BaseBdev1", 00:26:14.549 "aliases": [ 00:26:14.549 "1ece7026-1fe9-4ea9-b55c-31055f2c6b17" 00:26:14.549 ], 00:26:14.549 "product_name": "Malloc disk", 00:26:14.549 "block_size": 512, 00:26:14.549 "num_blocks": 65536, 00:26:14.549 "uuid": "1ece7026-1fe9-4ea9-b55c-31055f2c6b17", 00:26:14.549 "assigned_rate_limits": { 00:26:14.549 "rw_ios_per_sec": 0, 00:26:14.549 
"rw_mbytes_per_sec": 0, 00:26:14.549 "r_mbytes_per_sec": 0, 00:26:14.549 "w_mbytes_per_sec": 0 00:26:14.549 }, 00:26:14.549 "claimed": true, 00:26:14.549 "claim_type": "exclusive_write", 00:26:14.549 "zoned": false, 00:26:14.549 "supported_io_types": { 00:26:14.549 "read": true, 00:26:14.549 "write": true, 00:26:14.549 "unmap": true, 00:26:14.549 "flush": true, 00:26:14.549 "reset": true, 00:26:14.549 "nvme_admin": false, 00:26:14.549 "nvme_io": false, 00:26:14.549 "nvme_io_md": false, 00:26:14.549 "write_zeroes": true, 00:26:14.549 "zcopy": true, 00:26:14.549 "get_zone_info": false, 00:26:14.549 "zone_management": false, 00:26:14.549 "zone_append": false, 00:26:14.549 "compare": false, 00:26:14.549 "compare_and_write": false, 00:26:14.549 "abort": true, 00:26:14.549 "seek_hole": false, 00:26:14.549 "seek_data": false, 00:26:14.549 "copy": true, 00:26:14.549 "nvme_iov_md": false 00:26:14.549 }, 00:26:14.549 "memory_domains": [ 00:26:14.549 { 00:26:14.549 "dma_device_id": "system", 00:26:14.549 "dma_device_type": 1 00:26:14.549 }, 00:26:14.549 { 00:26:14.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:14.549 "dma_device_type": 2 00:26:14.549 } 00:26:14.549 ], 00:26:14.549 "driver_specific": {} 00:26:14.549 } 00:26:14.549 ] 00:26:14.549 11:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:26:14.549 11:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:14.549 11:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:14.549 11:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:14.549 11:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:14.549 11:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:14.549 11:08:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:14.549 11:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:14.549 11:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:14.549 11:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:14.549 11:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:14.549 11:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:14.549 11:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:14.807 11:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:14.807 "name": "Existed_Raid", 00:26:14.807 "uuid": "eaf624aa-3752-4be7-8825-37a30ed8dfbc", 00:26:14.807 "strip_size_kb": 0, 00:26:14.807 "state": "configuring", 00:26:14.807 "raid_level": "raid1", 00:26:14.807 "superblock": true, 00:26:14.807 "num_base_bdevs": 4, 00:26:14.807 "num_base_bdevs_discovered": 3, 00:26:14.807 "num_base_bdevs_operational": 4, 00:26:14.807 "base_bdevs_list": [ 00:26:14.807 { 00:26:14.807 "name": "BaseBdev1", 00:26:14.808 "uuid": "1ece7026-1fe9-4ea9-b55c-31055f2c6b17", 00:26:14.808 "is_configured": true, 00:26:14.808 "data_offset": 2048, 00:26:14.808 "data_size": 63488 00:26:14.808 }, 00:26:14.808 { 00:26:14.808 "name": null, 00:26:14.808 "uuid": "cd916c4f-84a3-4c97-b049-1451c9887fa6", 00:26:14.808 "is_configured": false, 00:26:14.808 "data_offset": 2048, 00:26:14.808 "data_size": 63488 00:26:14.808 }, 00:26:14.808 { 00:26:14.808 "name": "BaseBdev3", 00:26:14.808 "uuid": "62d3a1f0-c89c-4a54-9be3-7920788a7b4b", 00:26:14.808 "is_configured": true, 00:26:14.808 
"data_offset": 2048, 00:26:14.808 "data_size": 63488 00:26:14.808 }, 00:26:14.808 { 00:26:14.808 "name": "BaseBdev4", 00:26:14.808 "uuid": "8a9e20eb-bd2c-4f6a-ad72-966debf309a2", 00:26:14.808 "is_configured": true, 00:26:14.808 "data_offset": 2048, 00:26:14.808 "data_size": 63488 00:26:14.808 } 00:26:14.808 ] 00:26:14.808 }' 00:26:14.808 11:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:14.808 11:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:15.472 11:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:15.472 11:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:15.472 11:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:26:15.473 11:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:26:15.731 [2024-07-25 11:08:22.769559] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:15.731 11:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:15.731 11:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:15.731 11:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:15.731 11:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:15.731 11:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:15.731 11:08:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:15.731 11:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:15.731 11:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:15.731 11:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:15.731 11:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:15.731 11:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:15.731 11:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:15.991 11:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:15.991 "name": "Existed_Raid", 00:26:15.991 "uuid": "eaf624aa-3752-4be7-8825-37a30ed8dfbc", 00:26:15.991 "strip_size_kb": 0, 00:26:15.991 "state": "configuring", 00:26:15.991 "raid_level": "raid1", 00:26:15.991 "superblock": true, 00:26:15.991 "num_base_bdevs": 4, 00:26:15.991 "num_base_bdevs_discovered": 2, 00:26:15.991 "num_base_bdevs_operational": 4, 00:26:15.991 "base_bdevs_list": [ 00:26:15.991 { 00:26:15.991 "name": "BaseBdev1", 00:26:15.991 "uuid": "1ece7026-1fe9-4ea9-b55c-31055f2c6b17", 00:26:15.991 "is_configured": true, 00:26:15.991 "data_offset": 2048, 00:26:15.991 "data_size": 63488 00:26:15.991 }, 00:26:15.991 { 00:26:15.991 "name": null, 00:26:15.991 "uuid": "cd916c4f-84a3-4c97-b049-1451c9887fa6", 00:26:15.991 "is_configured": false, 00:26:15.991 "data_offset": 2048, 00:26:15.991 "data_size": 63488 00:26:15.991 }, 00:26:15.991 { 00:26:15.991 "name": null, 00:26:15.991 "uuid": "62d3a1f0-c89c-4a54-9be3-7920788a7b4b", 00:26:15.991 "is_configured": false, 00:26:15.991 "data_offset": 2048, 00:26:15.991 "data_size": 
63488 00:26:15.991 }, 00:26:15.991 { 00:26:15.991 "name": "BaseBdev4", 00:26:15.991 "uuid": "8a9e20eb-bd2c-4f6a-ad72-966debf309a2", 00:26:15.991 "is_configured": true, 00:26:15.991 "data_offset": 2048, 00:26:15.991 "data_size": 63488 00:26:15.991 } 00:26:15.991 ] 00:26:15.991 }' 00:26:15.991 11:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:15.991 11:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:16.558 11:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:16.558 11:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:16.816 11:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:26:16.816 11:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:26:17.075 [2024-07-25 11:08:24.036973] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:17.075 11:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:17.075 11:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:17.075 11:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:17.075 11:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:17.075 11:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:17.075 11:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- 
# local num_base_bdevs_operational=4 00:26:17.075 11:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:17.075 11:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:17.075 11:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:17.075 11:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:17.075 11:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:17.075 11:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:17.333 11:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:17.333 "name": "Existed_Raid", 00:26:17.333 "uuid": "eaf624aa-3752-4be7-8825-37a30ed8dfbc", 00:26:17.333 "strip_size_kb": 0, 00:26:17.333 "state": "configuring", 00:26:17.333 "raid_level": "raid1", 00:26:17.333 "superblock": true, 00:26:17.333 "num_base_bdevs": 4, 00:26:17.333 "num_base_bdevs_discovered": 3, 00:26:17.333 "num_base_bdevs_operational": 4, 00:26:17.333 "base_bdevs_list": [ 00:26:17.333 { 00:26:17.333 "name": "BaseBdev1", 00:26:17.333 "uuid": "1ece7026-1fe9-4ea9-b55c-31055f2c6b17", 00:26:17.333 "is_configured": true, 00:26:17.333 "data_offset": 2048, 00:26:17.333 "data_size": 63488 00:26:17.333 }, 00:26:17.333 { 00:26:17.333 "name": null, 00:26:17.333 "uuid": "cd916c4f-84a3-4c97-b049-1451c9887fa6", 00:26:17.333 "is_configured": false, 00:26:17.333 "data_offset": 2048, 00:26:17.333 "data_size": 63488 00:26:17.333 }, 00:26:17.333 { 00:26:17.333 "name": "BaseBdev3", 00:26:17.334 "uuid": "62d3a1f0-c89c-4a54-9be3-7920788a7b4b", 00:26:17.334 "is_configured": true, 00:26:17.334 "data_offset": 2048, 00:26:17.334 "data_size": 63488 00:26:17.334 
}, 00:26:17.334 { 00:26:17.334 "name": "BaseBdev4", 00:26:17.334 "uuid": "8a9e20eb-bd2c-4f6a-ad72-966debf309a2", 00:26:17.334 "is_configured": true, 00:26:17.334 "data_offset": 2048, 00:26:17.334 "data_size": 63488 00:26:17.334 } 00:26:17.334 ] 00:26:17.334 }' 00:26:17.334 11:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:17.334 11:08:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:17.899 11:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:17.899 11:08:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:18.158 11:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:26:18.158 11:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:18.158 [2024-07-25 11:08:25.228245] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:18.417 11:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:18.417 11:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:18.417 11:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:18.417 11:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:18.417 11:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:18.417 11:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:18.417 11:08:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:18.417 11:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:18.417 11:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:18.417 11:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:18.417 11:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:18.417 11:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:18.676 11:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:18.676 "name": "Existed_Raid", 00:26:18.676 "uuid": "eaf624aa-3752-4be7-8825-37a30ed8dfbc", 00:26:18.676 "strip_size_kb": 0, 00:26:18.676 "state": "configuring", 00:26:18.676 "raid_level": "raid1", 00:26:18.676 "superblock": true, 00:26:18.676 "num_base_bdevs": 4, 00:26:18.676 "num_base_bdevs_discovered": 2, 00:26:18.676 "num_base_bdevs_operational": 4, 00:26:18.676 "base_bdevs_list": [ 00:26:18.676 { 00:26:18.676 "name": null, 00:26:18.676 "uuid": "1ece7026-1fe9-4ea9-b55c-31055f2c6b17", 00:26:18.676 "is_configured": false, 00:26:18.676 "data_offset": 2048, 00:26:18.676 "data_size": 63488 00:26:18.676 }, 00:26:18.676 { 00:26:18.676 "name": null, 00:26:18.676 "uuid": "cd916c4f-84a3-4c97-b049-1451c9887fa6", 00:26:18.676 "is_configured": false, 00:26:18.676 "data_offset": 2048, 00:26:18.676 "data_size": 63488 00:26:18.676 }, 00:26:18.676 { 00:26:18.676 "name": "BaseBdev3", 00:26:18.676 "uuid": "62d3a1f0-c89c-4a54-9be3-7920788a7b4b", 00:26:18.676 "is_configured": true, 00:26:18.676 "data_offset": 2048, 00:26:18.676 "data_size": 63488 00:26:18.676 }, 00:26:18.676 { 00:26:18.676 "name": "BaseBdev4", 00:26:18.676 
"uuid": "8a9e20eb-bd2c-4f6a-ad72-966debf309a2", 00:26:18.676 "is_configured": true, 00:26:18.676 "data_offset": 2048, 00:26:18.676 "data_size": 63488 00:26:18.676 } 00:26:18.676 ] 00:26:18.676 }' 00:26:18.676 11:08:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:18.676 11:08:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:19.243 11:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:19.243 11:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:19.502 11:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:26:19.502 11:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:26:19.502 [2024-07-25 11:08:26.572394] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:19.502 11:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:19.502 11:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:19.502 11:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:19.502 11:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:19.502 11:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:19.502 11:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:19.502 11:08:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:19.502 11:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:19.502 11:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:19.502 11:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:19.502 11:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:19.502 11:08:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:20.070 11:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:20.070 "name": "Existed_Raid", 00:26:20.070 "uuid": "eaf624aa-3752-4be7-8825-37a30ed8dfbc", 00:26:20.070 "strip_size_kb": 0, 00:26:20.070 "state": "configuring", 00:26:20.070 "raid_level": "raid1", 00:26:20.070 "superblock": true, 00:26:20.070 "num_base_bdevs": 4, 00:26:20.070 "num_base_bdevs_discovered": 3, 00:26:20.070 "num_base_bdevs_operational": 4, 00:26:20.070 "base_bdevs_list": [ 00:26:20.070 { 00:26:20.070 "name": null, 00:26:20.070 "uuid": "1ece7026-1fe9-4ea9-b55c-31055f2c6b17", 00:26:20.070 "is_configured": false, 00:26:20.070 "data_offset": 2048, 00:26:20.070 "data_size": 63488 00:26:20.070 }, 00:26:20.070 { 00:26:20.070 "name": "BaseBdev2", 00:26:20.070 "uuid": "cd916c4f-84a3-4c97-b049-1451c9887fa6", 00:26:20.070 "is_configured": true, 00:26:20.070 "data_offset": 2048, 00:26:20.070 "data_size": 63488 00:26:20.070 }, 00:26:20.070 { 00:26:20.070 "name": "BaseBdev3", 00:26:20.070 "uuid": "62d3a1f0-c89c-4a54-9be3-7920788a7b4b", 00:26:20.070 "is_configured": true, 00:26:20.070 "data_offset": 2048, 00:26:20.070 "data_size": 63488 00:26:20.070 }, 00:26:20.070 { 00:26:20.070 "name": "BaseBdev4", 
00:26:20.070 "uuid": "8a9e20eb-bd2c-4f6a-ad72-966debf309a2", 00:26:20.070 "is_configured": true, 00:26:20.070 "data_offset": 2048, 00:26:20.070 "data_size": 63488 00:26:20.070 } 00:26:20.070 ] 00:26:20.070 }' 00:26:20.070 11:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:20.070 11:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:20.638 11:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:20.638 11:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:20.896 11:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:26:20.897 11:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:26:20.897 11:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:21.155 11:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 1ece7026-1fe9-4ea9-b55c-31055f2c6b17 00:26:21.414 [2024-07-25 11:08:28.384531] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:26:21.414 [2024-07-25 11:08:28.384788] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:26:21.414 [2024-07-25 11:08:28.384815] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:21.414 [2024-07-25 11:08:28.385121] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010b20 00:26:21.414 [2024-07-25 
11:08:28.385362] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:26:21.414 [2024-07-25 11:08:28.385377] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:26:21.414 NewBaseBdev 00:26:21.414 [2024-07-25 11:08:28.385570] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:21.414 11:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:26:21.414 11:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:26:21.414 11:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:21.414 11:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:26:21.414 11:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:21.414 11:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:21.414 11:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:21.673 11:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:21.931 [ 00:26:21.931 { 00:26:21.931 "name": "NewBaseBdev", 00:26:21.931 "aliases": [ 00:26:21.931 "1ece7026-1fe9-4ea9-b55c-31055f2c6b17" 00:26:21.931 ], 00:26:21.931 "product_name": "Malloc disk", 00:26:21.931 "block_size": 512, 00:26:21.931 "num_blocks": 65536, 00:26:21.931 "uuid": "1ece7026-1fe9-4ea9-b55c-31055f2c6b17", 00:26:21.931 "assigned_rate_limits": { 00:26:21.931 "rw_ios_per_sec": 0, 00:26:21.931 "rw_mbytes_per_sec": 0, 00:26:21.931 
"r_mbytes_per_sec": 0, 00:26:21.931 "w_mbytes_per_sec": 0 00:26:21.931 }, 00:26:21.931 "claimed": true, 00:26:21.931 "claim_type": "exclusive_write", 00:26:21.931 "zoned": false, 00:26:21.931 "supported_io_types": { 00:26:21.931 "read": true, 00:26:21.931 "write": true, 00:26:21.931 "unmap": true, 00:26:21.931 "flush": true, 00:26:21.931 "reset": true, 00:26:21.931 "nvme_admin": false, 00:26:21.931 "nvme_io": false, 00:26:21.931 "nvme_io_md": false, 00:26:21.931 "write_zeroes": true, 00:26:21.931 "zcopy": true, 00:26:21.931 "get_zone_info": false, 00:26:21.931 "zone_management": false, 00:26:21.931 "zone_append": false, 00:26:21.931 "compare": false, 00:26:21.931 "compare_and_write": false, 00:26:21.931 "abort": true, 00:26:21.931 "seek_hole": false, 00:26:21.931 "seek_data": false, 00:26:21.931 "copy": true, 00:26:21.931 "nvme_iov_md": false 00:26:21.931 }, 00:26:21.931 "memory_domains": [ 00:26:21.931 { 00:26:21.931 "dma_device_id": "system", 00:26:21.931 "dma_device_type": 1 00:26:21.931 }, 00:26:21.931 { 00:26:21.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:21.931 "dma_device_type": 2 00:26:21.931 } 00:26:21.931 ], 00:26:21.931 "driver_specific": {} 00:26:21.931 } 00:26:21.931 ] 00:26:21.931 11:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:26:21.931 11:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:26:21.931 11:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:21.931 11:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:21.931 11:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:21.931 11:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:21.931 11:08:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:21.931 11:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:21.932 11:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:21.932 11:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:21.932 11:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:21.932 11:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:21.932 11:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:21.932 11:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:21.932 "name": "Existed_Raid", 00:26:21.932 "uuid": "eaf624aa-3752-4be7-8825-37a30ed8dfbc", 00:26:21.932 "strip_size_kb": 0, 00:26:21.932 "state": "online", 00:26:21.932 "raid_level": "raid1", 00:26:21.932 "superblock": true, 00:26:21.932 "num_base_bdevs": 4, 00:26:21.932 "num_base_bdevs_discovered": 4, 00:26:21.932 "num_base_bdevs_operational": 4, 00:26:21.932 "base_bdevs_list": [ 00:26:21.932 { 00:26:21.932 "name": "NewBaseBdev", 00:26:21.932 "uuid": "1ece7026-1fe9-4ea9-b55c-31055f2c6b17", 00:26:21.932 "is_configured": true, 00:26:21.932 "data_offset": 2048, 00:26:21.932 "data_size": 63488 00:26:21.932 }, 00:26:21.932 { 00:26:21.932 "name": "BaseBdev2", 00:26:21.932 "uuid": "cd916c4f-84a3-4c97-b049-1451c9887fa6", 00:26:21.932 "is_configured": true, 00:26:21.932 "data_offset": 2048, 00:26:21.932 "data_size": 63488 00:26:21.932 }, 00:26:21.932 { 00:26:21.932 "name": "BaseBdev3", 00:26:21.932 "uuid": "62d3a1f0-c89c-4a54-9be3-7920788a7b4b", 00:26:21.932 "is_configured": true, 00:26:21.932 "data_offset": 2048, 00:26:21.932 
"data_size": 63488 00:26:21.932 }, 00:26:21.932 { 00:26:21.932 "name": "BaseBdev4", 00:26:21.932 "uuid": "8a9e20eb-bd2c-4f6a-ad72-966debf309a2", 00:26:21.932 "is_configured": true, 00:26:21.932 "data_offset": 2048, 00:26:21.932 "data_size": 63488 00:26:21.932 } 00:26:21.932 ] 00:26:21.932 }' 00:26:21.932 11:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:21.932 11:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:22.499 11:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:26:22.499 11:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:26:22.499 11:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:22.499 11:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:22.499 11:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:22.499 11:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:26:22.499 11:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:22.499 11:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:22.758 [2024-07-25 11:08:29.772769] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:22.758 11:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:22.758 "name": "Existed_Raid", 00:26:22.758 "aliases": [ 00:26:22.758 "eaf624aa-3752-4be7-8825-37a30ed8dfbc" 00:26:22.758 ], 00:26:22.758 "product_name": "Raid Volume", 00:26:22.758 "block_size": 512, 00:26:22.758 "num_blocks": 63488, 00:26:22.758 "uuid": 
"eaf624aa-3752-4be7-8825-37a30ed8dfbc", 00:26:22.758 "assigned_rate_limits": { 00:26:22.758 "rw_ios_per_sec": 0, 00:26:22.758 "rw_mbytes_per_sec": 0, 00:26:22.758 "r_mbytes_per_sec": 0, 00:26:22.758 "w_mbytes_per_sec": 0 00:26:22.758 }, 00:26:22.758 "claimed": false, 00:26:22.758 "zoned": false, 00:26:22.758 "supported_io_types": { 00:26:22.758 "read": true, 00:26:22.758 "write": true, 00:26:22.758 "unmap": false, 00:26:22.758 "flush": false, 00:26:22.758 "reset": true, 00:26:22.758 "nvme_admin": false, 00:26:22.758 "nvme_io": false, 00:26:22.758 "nvme_io_md": false, 00:26:22.758 "write_zeroes": true, 00:26:22.758 "zcopy": false, 00:26:22.758 "get_zone_info": false, 00:26:22.758 "zone_management": false, 00:26:22.758 "zone_append": false, 00:26:22.758 "compare": false, 00:26:22.758 "compare_and_write": false, 00:26:22.758 "abort": false, 00:26:22.758 "seek_hole": false, 00:26:22.758 "seek_data": false, 00:26:22.758 "copy": false, 00:26:22.758 "nvme_iov_md": false 00:26:22.758 }, 00:26:22.758 "memory_domains": [ 00:26:22.758 { 00:26:22.758 "dma_device_id": "system", 00:26:22.758 "dma_device_type": 1 00:26:22.758 }, 00:26:22.758 { 00:26:22.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:22.758 "dma_device_type": 2 00:26:22.758 }, 00:26:22.758 { 00:26:22.758 "dma_device_id": "system", 00:26:22.758 "dma_device_type": 1 00:26:22.758 }, 00:26:22.758 { 00:26:22.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:22.758 "dma_device_type": 2 00:26:22.758 }, 00:26:22.758 { 00:26:22.758 "dma_device_id": "system", 00:26:22.758 "dma_device_type": 1 00:26:22.758 }, 00:26:22.758 { 00:26:22.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:22.758 "dma_device_type": 2 00:26:22.758 }, 00:26:22.758 { 00:26:22.758 "dma_device_id": "system", 00:26:22.758 "dma_device_type": 1 00:26:22.758 }, 00:26:22.758 { 00:26:22.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:22.758 "dma_device_type": 2 00:26:22.758 } 00:26:22.758 ], 00:26:22.758 "driver_specific": { 00:26:22.758 "raid": { 
00:26:22.758 "uuid": "eaf624aa-3752-4be7-8825-37a30ed8dfbc", 00:26:22.758 "strip_size_kb": 0, 00:26:22.758 "state": "online", 00:26:22.758 "raid_level": "raid1", 00:26:22.758 "superblock": true, 00:26:22.758 "num_base_bdevs": 4, 00:26:22.758 "num_base_bdevs_discovered": 4, 00:26:22.758 "num_base_bdevs_operational": 4, 00:26:22.758 "base_bdevs_list": [ 00:26:22.758 { 00:26:22.758 "name": "NewBaseBdev", 00:26:22.758 "uuid": "1ece7026-1fe9-4ea9-b55c-31055f2c6b17", 00:26:22.758 "is_configured": true, 00:26:22.758 "data_offset": 2048, 00:26:22.758 "data_size": 63488 00:26:22.758 }, 00:26:22.758 { 00:26:22.758 "name": "BaseBdev2", 00:26:22.758 "uuid": "cd916c4f-84a3-4c97-b049-1451c9887fa6", 00:26:22.758 "is_configured": true, 00:26:22.758 "data_offset": 2048, 00:26:22.758 "data_size": 63488 00:26:22.758 }, 00:26:22.758 { 00:26:22.758 "name": "BaseBdev3", 00:26:22.758 "uuid": "62d3a1f0-c89c-4a54-9be3-7920788a7b4b", 00:26:22.758 "is_configured": true, 00:26:22.758 "data_offset": 2048, 00:26:22.758 "data_size": 63488 00:26:22.758 }, 00:26:22.758 { 00:26:22.758 "name": "BaseBdev4", 00:26:22.758 "uuid": "8a9e20eb-bd2c-4f6a-ad72-966debf309a2", 00:26:22.758 "is_configured": true, 00:26:22.758 "data_offset": 2048, 00:26:22.758 "data_size": 63488 00:26:22.758 } 00:26:22.758 ] 00:26:22.758 } 00:26:22.758 } 00:26:22.758 }' 00:26:22.758 11:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:22.758 11:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:26:22.758 BaseBdev2 00:26:22.758 BaseBdev3 00:26:22.758 BaseBdev4' 00:26:22.758 11:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:22.758 11:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b NewBaseBdev 00:26:22.758 11:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:23.017 11:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:23.017 "name": "NewBaseBdev", 00:26:23.017 "aliases": [ 00:26:23.017 "1ece7026-1fe9-4ea9-b55c-31055f2c6b17" 00:26:23.017 ], 00:26:23.017 "product_name": "Malloc disk", 00:26:23.017 "block_size": 512, 00:26:23.017 "num_blocks": 65536, 00:26:23.017 "uuid": "1ece7026-1fe9-4ea9-b55c-31055f2c6b17", 00:26:23.017 "assigned_rate_limits": { 00:26:23.017 "rw_ios_per_sec": 0, 00:26:23.017 "rw_mbytes_per_sec": 0, 00:26:23.017 "r_mbytes_per_sec": 0, 00:26:23.017 "w_mbytes_per_sec": 0 00:26:23.017 }, 00:26:23.017 "claimed": true, 00:26:23.017 "claim_type": "exclusive_write", 00:26:23.017 "zoned": false, 00:26:23.017 "supported_io_types": { 00:26:23.017 "read": true, 00:26:23.017 "write": true, 00:26:23.017 "unmap": true, 00:26:23.017 "flush": true, 00:26:23.017 "reset": true, 00:26:23.017 "nvme_admin": false, 00:26:23.017 "nvme_io": false, 00:26:23.017 "nvme_io_md": false, 00:26:23.017 "write_zeroes": true, 00:26:23.017 "zcopy": true, 00:26:23.017 "get_zone_info": false, 00:26:23.017 "zone_management": false, 00:26:23.017 "zone_append": false, 00:26:23.017 "compare": false, 00:26:23.017 "compare_and_write": false, 00:26:23.017 "abort": true, 00:26:23.017 "seek_hole": false, 00:26:23.017 "seek_data": false, 00:26:23.017 "copy": true, 00:26:23.017 "nvme_iov_md": false 00:26:23.017 }, 00:26:23.017 "memory_domains": [ 00:26:23.017 { 00:26:23.017 "dma_device_id": "system", 00:26:23.017 "dma_device_type": 1 00:26:23.017 }, 00:26:23.017 { 00:26:23.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:23.017 "dma_device_type": 2 00:26:23.017 } 00:26:23.017 ], 00:26:23.017 "driver_specific": {} 00:26:23.017 }' 00:26:23.017 11:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:23.017 11:08:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:23.276 11:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:23.276 11:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:23.276 11:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:23.276 11:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:23.276 11:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:23.276 11:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:23.276 11:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:23.276 11:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:23.276 11:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:23.534 11:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:23.534 11:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:23.534 11:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:23.534 11:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:23.534 11:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:23.535 "name": "BaseBdev2", 00:26:23.535 "aliases": [ 00:26:23.535 "cd916c4f-84a3-4c97-b049-1451c9887fa6" 00:26:23.535 ], 00:26:23.535 "product_name": "Malloc disk", 00:26:23.535 "block_size": 512, 00:26:23.535 "num_blocks": 65536, 00:26:23.535 "uuid": "cd916c4f-84a3-4c97-b049-1451c9887fa6", 00:26:23.535 
"assigned_rate_limits": { 00:26:23.535 "rw_ios_per_sec": 0, 00:26:23.535 "rw_mbytes_per_sec": 0, 00:26:23.535 "r_mbytes_per_sec": 0, 00:26:23.535 "w_mbytes_per_sec": 0 00:26:23.535 }, 00:26:23.535 "claimed": true, 00:26:23.535 "claim_type": "exclusive_write", 00:26:23.535 "zoned": false, 00:26:23.535 "supported_io_types": { 00:26:23.535 "read": true, 00:26:23.535 "write": true, 00:26:23.535 "unmap": true, 00:26:23.535 "flush": true, 00:26:23.535 "reset": true, 00:26:23.535 "nvme_admin": false, 00:26:23.535 "nvme_io": false, 00:26:23.535 "nvme_io_md": false, 00:26:23.535 "write_zeroes": true, 00:26:23.535 "zcopy": true, 00:26:23.535 "get_zone_info": false, 00:26:23.535 "zone_management": false, 00:26:23.535 "zone_append": false, 00:26:23.535 "compare": false, 00:26:23.535 "compare_and_write": false, 00:26:23.535 "abort": true, 00:26:23.535 "seek_hole": false, 00:26:23.535 "seek_data": false, 00:26:23.535 "copy": true, 00:26:23.535 "nvme_iov_md": false 00:26:23.535 }, 00:26:23.535 "memory_domains": [ 00:26:23.535 { 00:26:23.535 "dma_device_id": "system", 00:26:23.535 "dma_device_type": 1 00:26:23.535 }, 00:26:23.535 { 00:26:23.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:23.535 "dma_device_type": 2 00:26:23.535 } 00:26:23.535 ], 00:26:23.535 "driver_specific": {} 00:26:23.535 }' 00:26:23.535 11:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:23.794 11:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:23.794 11:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:23.794 11:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:23.794 11:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:23.794 11:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:23.794 11:08:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:23.794 11:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:23.794 11:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:23.794 11:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:24.053 11:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:24.053 11:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:24.053 11:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:24.053 11:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:24.053 11:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:26:24.312 11:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:24.312 "name": "BaseBdev3", 00:26:24.312 "aliases": [ 00:26:24.312 "62d3a1f0-c89c-4a54-9be3-7920788a7b4b" 00:26:24.312 ], 00:26:24.312 "product_name": "Malloc disk", 00:26:24.312 "block_size": 512, 00:26:24.312 "num_blocks": 65536, 00:26:24.312 "uuid": "62d3a1f0-c89c-4a54-9be3-7920788a7b4b", 00:26:24.312 "assigned_rate_limits": { 00:26:24.312 "rw_ios_per_sec": 0, 00:26:24.312 "rw_mbytes_per_sec": 0, 00:26:24.312 "r_mbytes_per_sec": 0, 00:26:24.312 "w_mbytes_per_sec": 0 00:26:24.312 }, 00:26:24.312 "claimed": true, 00:26:24.312 "claim_type": "exclusive_write", 00:26:24.312 "zoned": false, 00:26:24.312 "supported_io_types": { 00:26:24.312 "read": true, 00:26:24.312 "write": true, 00:26:24.312 "unmap": true, 00:26:24.312 "flush": true, 00:26:24.312 "reset": true, 00:26:24.312 "nvme_admin": false, 00:26:24.312 "nvme_io": false, 00:26:24.312 "nvme_io_md": false, 00:26:24.312 
"write_zeroes": true, 00:26:24.312 "zcopy": true, 00:26:24.312 "get_zone_info": false, 00:26:24.312 "zone_management": false, 00:26:24.312 "zone_append": false, 00:26:24.312 "compare": false, 00:26:24.312 "compare_and_write": false, 00:26:24.312 "abort": true, 00:26:24.312 "seek_hole": false, 00:26:24.312 "seek_data": false, 00:26:24.312 "copy": true, 00:26:24.312 "nvme_iov_md": false 00:26:24.312 }, 00:26:24.312 "memory_domains": [ 00:26:24.312 { 00:26:24.312 "dma_device_id": "system", 00:26:24.312 "dma_device_type": 1 00:26:24.312 }, 00:26:24.312 { 00:26:24.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:24.312 "dma_device_type": 2 00:26:24.312 } 00:26:24.312 ], 00:26:24.312 "driver_specific": {} 00:26:24.312 }' 00:26:24.312 11:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:24.312 11:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:24.312 11:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:24.312 11:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:24.312 11:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:24.312 11:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:24.312 11:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:24.312 11:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:24.312 11:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:24.312 11:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:24.571 11:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:24.571 11:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
00:26:24.571 11:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:24.571 11:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:26:24.571 11:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:24.830 11:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:24.830 "name": "BaseBdev4", 00:26:24.830 "aliases": [ 00:26:24.830 "8a9e20eb-bd2c-4f6a-ad72-966debf309a2" 00:26:24.830 ], 00:26:24.830 "product_name": "Malloc disk", 00:26:24.830 "block_size": 512, 00:26:24.830 "num_blocks": 65536, 00:26:24.830 "uuid": "8a9e20eb-bd2c-4f6a-ad72-966debf309a2", 00:26:24.830 "assigned_rate_limits": { 00:26:24.830 "rw_ios_per_sec": 0, 00:26:24.830 "rw_mbytes_per_sec": 0, 00:26:24.830 "r_mbytes_per_sec": 0, 00:26:24.830 "w_mbytes_per_sec": 0 00:26:24.830 }, 00:26:24.830 "claimed": true, 00:26:24.830 "claim_type": "exclusive_write", 00:26:24.830 "zoned": false, 00:26:24.830 "supported_io_types": { 00:26:24.830 "read": true, 00:26:24.830 "write": true, 00:26:24.830 "unmap": true, 00:26:24.830 "flush": true, 00:26:24.830 "reset": true, 00:26:24.830 "nvme_admin": false, 00:26:24.830 "nvme_io": false, 00:26:24.830 "nvme_io_md": false, 00:26:24.830 "write_zeroes": true, 00:26:24.830 "zcopy": true, 00:26:24.830 "get_zone_info": false, 00:26:24.830 "zone_management": false, 00:26:24.830 "zone_append": false, 00:26:24.830 "compare": false, 00:26:24.830 "compare_and_write": false, 00:26:24.830 "abort": true, 00:26:24.830 "seek_hole": false, 00:26:24.830 "seek_data": false, 00:26:24.830 "copy": true, 00:26:24.830 "nvme_iov_md": false 00:26:24.830 }, 00:26:24.830 "memory_domains": [ 00:26:24.830 { 00:26:24.830 "dma_device_id": "system", 00:26:24.830 "dma_device_type": 1 00:26:24.830 }, 00:26:24.830 { 00:26:24.830 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:24.830 "dma_device_type": 2 00:26:24.830 } 00:26:24.830 ], 00:26:24.830 "driver_specific": {} 00:26:24.830 }' 00:26:24.830 11:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:24.830 11:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:24.830 11:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:24.830 11:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:24.830 11:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:24.830 11:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:24.830 11:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:24.830 11:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:24.830 11:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:24.830 11:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:24.830 11:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:25.089 11:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:25.089 11:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:25.089 [2024-07-25 11:08:32.195037] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:25.089 [2024-07-25 11:08:32.195068] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:25.089 [2024-07-25 11:08:32.195156] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:26:25.089 [2024-07-25 11:08:32.195496] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:25.089 [2024-07-25 11:08:32.195517] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:26:25.348 11:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 3680138 00:26:25.349 11:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 3680138 ']' 00:26:25.349 11:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 3680138 00:26:25.349 11:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:26:25.349 11:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:25.349 11:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3680138 00:26:25.349 11:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:25.349 11:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:25.349 11:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3680138' 00:26:25.349 killing process with pid 3680138 00:26:25.349 11:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 3680138 00:26:25.349 [2024-07-25 11:08:32.274581] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:25.349 11:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 3680138 00:26:25.917 [2024-07-25 11:08:32.728829] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:27.822 11:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 
00:26:27.822 00:26:27.822 real 0m33.475s 00:26:27.822 user 0m58.633s 00:26:27.822 sys 0m5.710s 00:26:27.822 11:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:27.822 11:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:27.822 ************************************ 00:26:27.822 END TEST raid_state_function_test_sb 00:26:27.822 ************************************ 00:26:27.822 11:08:34 bdev_raid -- bdev/bdev_raid.sh@949 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:26:27.822 11:08:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:26:27.822 11:08:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:27.822 11:08:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:27.822 ************************************ 00:26:27.822 START TEST raid_superblock_test 00:26:27.822 ************************************ 00:26:27.822 11:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:26:27.822 11:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:26:27.822 11:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=4 00:26:27.822 11:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:26:27.822 11:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:26:27.822 11:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:26:27.822 11:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:26:27.822 11:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:26:27.822 11:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:26:27.822 11:08:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:26:27.822 11:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@414 -- # local strip_size 00:26:27.822 11:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:26:27.822 11:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:26:27.822 11:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:26:27.822 11:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:26:27.822 11:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:26:27.822 11:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@427 -- # raid_pid=3686342 00:26:27.822 11:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@428 -- # waitforlisten 3686342 /var/tmp/spdk-raid.sock 00:26:27.822 11:08:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:26:27.822 11:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 3686342 ']' 00:26:27.822 11:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:27.822 11:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:27.822 11:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:27.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:26:27.822 11:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:27.822 11:08:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:27.822 [2024-07-25 11:08:34.778857] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... [2024-07-25 11:08:34.779108] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3686342 ] 00:26:28.081 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:28.081 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:28.082 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:28.082 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:28.082 [2024-07-25 11:08:35.149719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.378 [2024-07-25 11:08:35.418949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.649 [2024-07-25 11:08:35.721025] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:28.649 [2024-07-25 11:08:35.721058] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:28.909 11:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:28.909 11:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:26:28.909 11:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:26:28.909 11:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs ))
00:26:28.909 11:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:26:28.909 11:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:26:28.909 11:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:26:28.909 11:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:28.909 11:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:26:28.909 11:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:28.909 11:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:26:29.168 malloc1 00:26:29.168 11:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:29.427 [2024-07-25 11:08:36.369402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:29.427 [2024-07-25 11:08:36.369463] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:29.427 [2024-07-25 11:08:36.369499] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:26:29.427 [2024-07-25 11:08:36.369516] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:29.427 [2024-07-25 11:08:36.372272] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:29.427 [2024-07-25 11:08:36.372307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:29.427 pt1 00:26:29.427 11:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- 
# (( i++ )) 00:26:29.427 11:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:26:29.427 11:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:26:29.427 11:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:26:29.427 11:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:26:29.427 11:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:29.427 11:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:26:29.427 11:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:29.427 11:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:26:29.686 malloc2 00:26:29.686 11:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:29.946 [2024-07-25 11:08:36.874078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:29.946 [2024-07-25 11:08:36.874136] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:29.946 [2024-07-25 11:08:36.874170] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:26:29.946 [2024-07-25 11:08:36.874185] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:29.946 [2024-07-25 11:08:36.876932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:29.946 [2024-07-25 11:08:36.876973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: pt2 00:26:29.946 pt2 00:26:29.946 11:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:26:29.946 11:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:26:29.946 11:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc3 00:26:29.946 11:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt3 00:26:29.946 11:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:26:29.946 11:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:29.946 11:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:26:29.946 11:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:29.946 11:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:26:30.205 malloc3 00:26:30.205 11:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:30.464 [2024-07-25 11:08:37.371256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:30.464 [2024-07-25 11:08:37.371316] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:30.464 [2024-07-25 11:08:37.371345] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:26:30.464 [2024-07-25 11:08:37.371362] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:30.464 [2024-07-25 11:08:37.374123] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:26:30.464 [2024-07-25 11:08:37.374167] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:30.464 pt3 00:26:30.464 11:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:26:30.464 11:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:26:30.464 11:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc4 00:26:30.464 11:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt4 00:26:30.464 11:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:26:30.464 11:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:30.464 11:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:26:30.464 11:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:30.464 11:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:26:30.723 malloc4 00:26:30.723 11:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:30.981 [2024-07-25 11:08:37.890755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:30.981 [2024-07-25 11:08:37.890819] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:30.981 [2024-07-25 11:08:37.890850] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041a80 00:26:30.981 [2024-07-25 11:08:37.890865] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:30.981 [2024-07-25 11:08:37.893643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:30.981 [2024-07-25 11:08:37.893677] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:30.981 pt4 00:26:30.981 11:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:26:30.981 11:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:26:30.981 11:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:26:31.240 [2024-07-25 11:08:38.115449] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:31.240 [2024-07-25 11:08:38.117764] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:31.240 [2024-07-25 11:08:38.117856] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:31.240 [2024-07-25 11:08:38.117913] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:31.240 [2024-07-25 11:08:38.118148] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:26:31.240 [2024-07-25 11:08:38.118166] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:31.240 [2024-07-25 11:08:38.118516] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:26:31.240 [2024-07-25 11:08:38.118769] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:26:31.240 [2024-07-25 11:08:38.118787] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:26:31.240 [2024-07-25 11:08:38.118991] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:26:31.240 11:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:31.240 11:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:31.240 11:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:31.240 11:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:31.240 11:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:31.240 11:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:31.240 11:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:31.240 11:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:31.240 11:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:31.240 11:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:31.240 11:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:31.240 11:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:31.240 11:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:31.240 "name": "raid_bdev1", 00:26:31.240 "uuid": "5611a13b-0118-4f51-8afc-bc2d4e1622a2", 00:26:31.240 "strip_size_kb": 0, 00:26:31.240 "state": "online", 00:26:31.240 "raid_level": "raid1", 00:26:31.240 "superblock": true, 00:26:31.240 "num_base_bdevs": 4, 00:26:31.240 "num_base_bdevs_discovered": 4, 00:26:31.240 "num_base_bdevs_operational": 4, 00:26:31.240 "base_bdevs_list": [ 00:26:31.240 { 00:26:31.240 "name": "pt1", 00:26:31.240 
"uuid": "00000000-0000-0000-0000-000000000001", 00:26:31.240 "is_configured": true, 00:26:31.240 "data_offset": 2048, 00:26:31.240 "data_size": 63488 00:26:31.240 }, 00:26:31.240 { 00:26:31.240 "name": "pt2", 00:26:31.240 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:31.240 "is_configured": true, 00:26:31.240 "data_offset": 2048, 00:26:31.240 "data_size": 63488 00:26:31.240 }, 00:26:31.240 { 00:26:31.240 "name": "pt3", 00:26:31.240 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:31.240 "is_configured": true, 00:26:31.240 "data_offset": 2048, 00:26:31.240 "data_size": 63488 00:26:31.240 }, 00:26:31.240 { 00:26:31.240 "name": "pt4", 00:26:31.240 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:31.240 "is_configured": true, 00:26:31.240 "data_offset": 2048, 00:26:31.240 "data_size": 63488 00:26:31.240 } 00:26:31.240 ] 00:26:31.240 }' 00:26:31.240 11:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:31.240 11:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.178 11:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:26:32.178 11:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:26:32.178 11:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:32.178 11:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:32.178 11:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:32.178 11:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:26:32.178 11:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:32.178 11:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 
00:26:32.437 [2024-07-25 11:08:39.403284] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:32.437 11:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:32.437 "name": "raid_bdev1", 00:26:32.437 "aliases": [ 00:26:32.437 "5611a13b-0118-4f51-8afc-bc2d4e1622a2" 00:26:32.437 ], 00:26:32.437 "product_name": "Raid Volume", 00:26:32.437 "block_size": 512, 00:26:32.437 "num_blocks": 63488, 00:26:32.437 "uuid": "5611a13b-0118-4f51-8afc-bc2d4e1622a2", 00:26:32.437 "assigned_rate_limits": { 00:26:32.437 "rw_ios_per_sec": 0, 00:26:32.437 "rw_mbytes_per_sec": 0, 00:26:32.437 "r_mbytes_per_sec": 0, 00:26:32.437 "w_mbytes_per_sec": 0 00:26:32.437 }, 00:26:32.437 "claimed": false, 00:26:32.437 "zoned": false, 00:26:32.437 "supported_io_types": { 00:26:32.437 "read": true, 00:26:32.437 "write": true, 00:26:32.437 "unmap": false, 00:26:32.437 "flush": false, 00:26:32.437 "reset": true, 00:26:32.437 "nvme_admin": false, 00:26:32.437 "nvme_io": false, 00:26:32.437 "nvme_io_md": false, 00:26:32.437 "write_zeroes": true, 00:26:32.437 "zcopy": false, 00:26:32.437 "get_zone_info": false, 00:26:32.437 "zone_management": false, 00:26:32.437 "zone_append": false, 00:26:32.437 "compare": false, 00:26:32.437 "compare_and_write": false, 00:26:32.437 "abort": false, 00:26:32.437 "seek_hole": false, 00:26:32.437 "seek_data": false, 00:26:32.437 "copy": false, 00:26:32.437 "nvme_iov_md": false 00:26:32.437 }, 00:26:32.437 "memory_domains": [ 00:26:32.437 { 00:26:32.437 "dma_device_id": "system", 00:26:32.437 "dma_device_type": 1 00:26:32.437 }, 00:26:32.437 { 00:26:32.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:32.437 "dma_device_type": 2 00:26:32.437 }, 00:26:32.437 { 00:26:32.437 "dma_device_id": "system", 00:26:32.437 "dma_device_type": 1 00:26:32.437 }, 00:26:32.437 { 00:26:32.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:32.437 "dma_device_type": 2 00:26:32.437 }, 00:26:32.437 { 00:26:32.437 
"dma_device_id": "system", 00:26:32.437 "dma_device_type": 1 00:26:32.437 }, 00:26:32.437 { 00:26:32.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:32.437 "dma_device_type": 2 00:26:32.438 }, 00:26:32.438 { 00:26:32.438 "dma_device_id": "system", 00:26:32.438 "dma_device_type": 1 00:26:32.438 }, 00:26:32.438 { 00:26:32.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:32.438 "dma_device_type": 2 00:26:32.438 } 00:26:32.438 ], 00:26:32.438 "driver_specific": { 00:26:32.438 "raid": { 00:26:32.438 "uuid": "5611a13b-0118-4f51-8afc-bc2d4e1622a2", 00:26:32.438 "strip_size_kb": 0, 00:26:32.438 "state": "online", 00:26:32.438 "raid_level": "raid1", 00:26:32.438 "superblock": true, 00:26:32.438 "num_base_bdevs": 4, 00:26:32.438 "num_base_bdevs_discovered": 4, 00:26:32.438 "num_base_bdevs_operational": 4, 00:26:32.438 "base_bdevs_list": [ 00:26:32.438 { 00:26:32.438 "name": "pt1", 00:26:32.438 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:32.438 "is_configured": true, 00:26:32.438 "data_offset": 2048, 00:26:32.438 "data_size": 63488 00:26:32.438 }, 00:26:32.438 { 00:26:32.438 "name": "pt2", 00:26:32.438 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:32.438 "is_configured": true, 00:26:32.438 "data_offset": 2048, 00:26:32.438 "data_size": 63488 00:26:32.438 }, 00:26:32.438 { 00:26:32.438 "name": "pt3", 00:26:32.438 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:32.438 "is_configured": true, 00:26:32.438 "data_offset": 2048, 00:26:32.438 "data_size": 63488 00:26:32.438 }, 00:26:32.438 { 00:26:32.438 "name": "pt4", 00:26:32.438 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:32.438 "is_configured": true, 00:26:32.438 "data_offset": 2048, 00:26:32.438 "data_size": 63488 00:26:32.438 } 00:26:32.438 ] 00:26:32.438 } 00:26:32.438 } 00:26:32.438 }' 00:26:32.438 11:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:32.438 11:08:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:26:32.438 pt2 00:26:32.438 pt3 00:26:32.438 pt4' 00:26:32.438 11:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:32.438 11:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:26:32.438 11:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:32.697 11:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:32.697 "name": "pt1", 00:26:32.697 "aliases": [ 00:26:32.697 "00000000-0000-0000-0000-000000000001" 00:26:32.697 ], 00:26:32.697 "product_name": "passthru", 00:26:32.697 "block_size": 512, 00:26:32.697 "num_blocks": 65536, 00:26:32.697 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:32.697 "assigned_rate_limits": { 00:26:32.697 "rw_ios_per_sec": 0, 00:26:32.698 "rw_mbytes_per_sec": 0, 00:26:32.698 "r_mbytes_per_sec": 0, 00:26:32.698 "w_mbytes_per_sec": 0 00:26:32.698 }, 00:26:32.698 "claimed": true, 00:26:32.698 "claim_type": "exclusive_write", 00:26:32.698 "zoned": false, 00:26:32.698 "supported_io_types": { 00:26:32.698 "read": true, 00:26:32.698 "write": true, 00:26:32.698 "unmap": true, 00:26:32.698 "flush": true, 00:26:32.698 "reset": true, 00:26:32.698 "nvme_admin": false, 00:26:32.698 "nvme_io": false, 00:26:32.698 "nvme_io_md": false, 00:26:32.698 "write_zeroes": true, 00:26:32.698 "zcopy": true, 00:26:32.698 "get_zone_info": false, 00:26:32.698 "zone_management": false, 00:26:32.698 "zone_append": false, 00:26:32.698 "compare": false, 00:26:32.698 "compare_and_write": false, 00:26:32.698 "abort": true, 00:26:32.698 "seek_hole": false, 00:26:32.698 "seek_data": false, 00:26:32.698 "copy": true, 00:26:32.698 "nvme_iov_md": false 00:26:32.698 }, 00:26:32.698 "memory_domains": [ 00:26:32.698 { 00:26:32.698 "dma_device_id": 
"system", 00:26:32.698 "dma_device_type": 1 00:26:32.698 }, 00:26:32.698 { 00:26:32.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:32.698 "dma_device_type": 2 00:26:32.698 } 00:26:32.698 ], 00:26:32.698 "driver_specific": { 00:26:32.698 "passthru": { 00:26:32.698 "name": "pt1", 00:26:32.698 "base_bdev_name": "malloc1" 00:26:32.698 } 00:26:32.698 } 00:26:32.698 }' 00:26:32.698 11:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:32.698 11:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:32.698 11:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:32.698 11:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:32.698 11:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:32.957 11:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:32.957 11:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:32.957 11:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:32.957 11:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:32.957 11:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:32.957 11:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:32.957 11:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:32.957 11:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:32.957 11:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:26:32.957 11:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:33.216 11:08:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:33.216 "name": "pt2", 00:26:33.216 "aliases": [ 00:26:33.216 "00000000-0000-0000-0000-000000000002" 00:26:33.216 ], 00:26:33.216 "product_name": "passthru", 00:26:33.216 "block_size": 512, 00:26:33.216 "num_blocks": 65536, 00:26:33.216 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:33.216 "assigned_rate_limits": { 00:26:33.216 "rw_ios_per_sec": 0, 00:26:33.216 "rw_mbytes_per_sec": 0, 00:26:33.216 "r_mbytes_per_sec": 0, 00:26:33.216 "w_mbytes_per_sec": 0 00:26:33.216 }, 00:26:33.216 "claimed": true, 00:26:33.216 "claim_type": "exclusive_write", 00:26:33.216 "zoned": false, 00:26:33.216 "supported_io_types": { 00:26:33.216 "read": true, 00:26:33.216 "write": true, 00:26:33.216 "unmap": true, 00:26:33.216 "flush": true, 00:26:33.216 "reset": true, 00:26:33.217 "nvme_admin": false, 00:26:33.217 "nvme_io": false, 00:26:33.217 "nvme_io_md": false, 00:26:33.217 "write_zeroes": true, 00:26:33.217 "zcopy": true, 00:26:33.217 "get_zone_info": false, 00:26:33.217 "zone_management": false, 00:26:33.217 "zone_append": false, 00:26:33.217 "compare": false, 00:26:33.217 "compare_and_write": false, 00:26:33.217 "abort": true, 00:26:33.217 "seek_hole": false, 00:26:33.217 "seek_data": false, 00:26:33.217 "copy": true, 00:26:33.217 "nvme_iov_md": false 00:26:33.217 }, 00:26:33.217 "memory_domains": [ 00:26:33.217 { 00:26:33.217 "dma_device_id": "system", 00:26:33.217 "dma_device_type": 1 00:26:33.217 }, 00:26:33.217 { 00:26:33.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:33.217 "dma_device_type": 2 00:26:33.217 } 00:26:33.217 ], 00:26:33.217 "driver_specific": { 00:26:33.217 "passthru": { 00:26:33.217 "name": "pt2", 00:26:33.217 "base_bdev_name": "malloc2" 00:26:33.217 } 00:26:33.217 } 00:26:33.217 }' 00:26:33.217 11:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:33.217 11:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:26:33.217 11:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:33.217 11:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:33.217 11:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:33.217 11:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:33.217 11:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:33.217 11:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:33.217 11:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:33.217 11:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:33.477 11:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:33.477 11:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:33.477 11:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:33.477 11:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:26:33.477 11:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:33.736 11:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:33.736 "name": "pt3", 00:26:33.736 "aliases": [ 00:26:33.736 "00000000-0000-0000-0000-000000000003" 00:26:33.736 ], 00:26:33.736 "product_name": "passthru", 00:26:33.736 "block_size": 512, 00:26:33.736 "num_blocks": 65536, 00:26:33.736 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:33.736 "assigned_rate_limits": { 00:26:33.736 "rw_ios_per_sec": 0, 00:26:33.736 "rw_mbytes_per_sec": 0, 00:26:33.736 "r_mbytes_per_sec": 0, 00:26:33.736 "w_mbytes_per_sec": 0 00:26:33.736 }, 
00:26:33.736 "claimed": true, 00:26:33.736 "claim_type": "exclusive_write", 00:26:33.736 "zoned": false, 00:26:33.736 "supported_io_types": { 00:26:33.736 "read": true, 00:26:33.736 "write": true, 00:26:33.736 "unmap": true, 00:26:33.736 "flush": true, 00:26:33.736 "reset": true, 00:26:33.736 "nvme_admin": false, 00:26:33.736 "nvme_io": false, 00:26:33.736 "nvme_io_md": false, 00:26:33.736 "write_zeroes": true, 00:26:33.736 "zcopy": true, 00:26:33.736 "get_zone_info": false, 00:26:33.736 "zone_management": false, 00:26:33.736 "zone_append": false, 00:26:33.736 "compare": false, 00:26:33.736 "compare_and_write": false, 00:26:33.736 "abort": true, 00:26:33.736 "seek_hole": false, 00:26:33.736 "seek_data": false, 00:26:33.736 "copy": true, 00:26:33.737 "nvme_iov_md": false 00:26:33.737 }, 00:26:33.737 "memory_domains": [ 00:26:33.737 { 00:26:33.737 "dma_device_id": "system", 00:26:33.737 "dma_device_type": 1 00:26:33.737 }, 00:26:33.737 { 00:26:33.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:33.737 "dma_device_type": 2 00:26:33.737 } 00:26:33.737 ], 00:26:33.737 "driver_specific": { 00:26:33.737 "passthru": { 00:26:33.737 "name": "pt3", 00:26:33.737 "base_bdev_name": "malloc3" 00:26:33.737 } 00:26:33.737 } 00:26:33.737 }' 00:26:33.737 11:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:33.737 11:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:33.737 11:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:33.737 11:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:33.737 11:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:33.737 11:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:33.737 11:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:33.737 11:08:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:33.737 11:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:33.737 11:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:33.996 11:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:33.996 11:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:33.996 11:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:33.996 11:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:26:33.996 11:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:33.996 11:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:33.996 "name": "pt4", 00:26:33.996 "aliases": [ 00:26:33.996 "00000000-0000-0000-0000-000000000004" 00:26:33.996 ], 00:26:33.996 "product_name": "passthru", 00:26:33.996 "block_size": 512, 00:26:33.996 "num_blocks": 65536, 00:26:33.996 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:33.996 "assigned_rate_limits": { 00:26:33.996 "rw_ios_per_sec": 0, 00:26:33.996 "rw_mbytes_per_sec": 0, 00:26:33.996 "r_mbytes_per_sec": 0, 00:26:33.996 "w_mbytes_per_sec": 0 00:26:33.996 }, 00:26:33.996 "claimed": true, 00:26:33.996 "claim_type": "exclusive_write", 00:26:33.996 "zoned": false, 00:26:33.996 "supported_io_types": { 00:26:33.996 "read": true, 00:26:33.996 "write": true, 00:26:33.996 "unmap": true, 00:26:33.996 "flush": true, 00:26:33.996 "reset": true, 00:26:33.996 "nvme_admin": false, 00:26:33.996 "nvme_io": false, 00:26:33.996 "nvme_io_md": false, 00:26:33.996 "write_zeroes": true, 00:26:33.996 "zcopy": true, 00:26:33.996 "get_zone_info": false, 00:26:33.996 "zone_management": false, 00:26:33.996 "zone_append": false, 00:26:33.996 
"compare": false, 00:26:33.996 "compare_and_write": false, 00:26:33.996 "abort": true, 00:26:33.996 "seek_hole": false, 00:26:33.996 "seek_data": false, 00:26:33.996 "copy": true, 00:26:33.996 "nvme_iov_md": false 00:26:33.996 }, 00:26:33.996 "memory_domains": [ 00:26:33.996 { 00:26:33.996 "dma_device_id": "system", 00:26:33.996 "dma_device_type": 1 00:26:33.996 }, 00:26:33.996 { 00:26:33.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:33.996 "dma_device_type": 2 00:26:33.996 } 00:26:33.996 ], 00:26:33.996 "driver_specific": { 00:26:33.996 "passthru": { 00:26:33.996 "name": "pt4", 00:26:33.996 "base_bdev_name": "malloc4" 00:26:33.996 } 00:26:33.996 } 00:26:33.996 }' 00:26:33.996 11:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:34.255 11:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:34.255 11:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:34.255 11:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:34.255 11:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:34.255 11:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:34.255 11:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:34.255 11:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:34.255 11:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:34.255 11:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:34.514 11:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:34.514 11:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:34.514 11:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:34.514 11:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:26:34.514 [2024-07-25 11:08:41.629270] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:34.773 11:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=5611a13b-0118-4f51-8afc-bc2d4e1622a2 00:26:34.773 11:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' -z 5611a13b-0118-4f51-8afc-bc2d4e1622a2 ']' 00:26:34.773 11:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:34.773 [2024-07-25 11:08:41.849474] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:34.773 [2024-07-25 11:08:41.849505] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:34.773 [2024-07-25 11:08:41.849592] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:34.773 [2024-07-25 11:08:41.849694] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:34.773 [2024-07-25 11:08:41.849717] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:26:34.773 11:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:34.773 11:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:26:35.032 11:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:26:35.032 11:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:26:35.032 11:08:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:26:35.032 11:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:35.291 11:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:26:35.291 11:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:35.551 11:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:26:35.551 11:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:26:35.810 11:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:26:35.810 11:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:26:36.073 11:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:26:36.073 11:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:26:36.332 11:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:26:36.332 11:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:26:36.332 11:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 
-- # local es=0 00:26:36.332 11:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:26:36.332 11:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:36.332 11:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:36.332 11:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:36.332 11:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:36.332 11:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:36.332 11:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:36.332 11:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:26:36.332 11:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:26:36.332 11:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:26:36.332 [2024-07-25 11:08:43.377532] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:26:36.332 [2024-07-25 11:08:43.379837] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:26:36.332 [2024-07-25 
11:08:43.379896] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:26:36.332 [2024-07-25 11:08:43.379943] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:26:36.332 [2024-07-25 11:08:43.379998] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:26:36.332 [2024-07-25 11:08:43.380053] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:26:36.332 [2024-07-25 11:08:43.380081] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:26:36.332 [2024-07-25 11:08:43.380111] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:26:36.332 [2024-07-25 11:08:43.380133] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:36.332 [2024-07-25 11:08:43.380158] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:26:36.332 request: 00:26:36.332 { 00:26:36.332 "name": "raid_bdev1", 00:26:36.332 "raid_level": "raid1", 00:26:36.332 "base_bdevs": [ 00:26:36.332 "malloc1", 00:26:36.332 "malloc2", 00:26:36.332 "malloc3", 00:26:36.332 "malloc4" 00:26:36.332 ], 00:26:36.332 "superblock": false, 00:26:36.332 "method": "bdev_raid_create", 00:26:36.332 "req_id": 1 00:26:36.332 } 00:26:36.332 Got JSON-RPC error response 00:26:36.332 response: 00:26:36.332 { 00:26:36.332 "code": -17, 00:26:36.332 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:26:36.332 } 00:26:36.332 11:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:26:36.332 11:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:36.332 11:08:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:36.332 11:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:36.332 11:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:26:36.333 11:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:36.591 11:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:26:36.591 11:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:26:36.591 11:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:36.850 [2024-07-25 11:08:43.762492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:36.850 [2024-07-25 11:08:43.762556] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:36.850 [2024-07-25 11:08:43.762580] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042680 00:26:36.850 [2024-07-25 11:08:43.762598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:36.850 [2024-07-25 11:08:43.765444] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:36.850 [2024-07-25 11:08:43.765483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:36.850 [2024-07-25 11:08:43.765577] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:36.850 [2024-07-25 11:08:43.765652] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:36.850 pt1 00:26:36.850 11:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 
configuring raid1 0 4 00:26:36.850 11:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:36.850 11:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:36.850 11:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:36.850 11:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:36.850 11:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:36.850 11:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:36.850 11:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:36.850 11:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:36.850 11:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:36.850 11:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:36.850 11:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:37.109 11:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:37.109 "name": "raid_bdev1", 00:26:37.109 "uuid": "5611a13b-0118-4f51-8afc-bc2d4e1622a2", 00:26:37.109 "strip_size_kb": 0, 00:26:37.109 "state": "configuring", 00:26:37.109 "raid_level": "raid1", 00:26:37.109 "superblock": true, 00:26:37.109 "num_base_bdevs": 4, 00:26:37.109 "num_base_bdevs_discovered": 1, 00:26:37.109 "num_base_bdevs_operational": 4, 00:26:37.109 "base_bdevs_list": [ 00:26:37.109 { 00:26:37.109 "name": "pt1", 00:26:37.109 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:37.109 "is_configured": true, 00:26:37.109 "data_offset": 2048, 
00:26:37.109 "data_size": 63488 00:26:37.109 }, 00:26:37.109 { 00:26:37.109 "name": null, 00:26:37.109 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:37.109 "is_configured": false, 00:26:37.109 "data_offset": 2048, 00:26:37.109 "data_size": 63488 00:26:37.109 }, 00:26:37.109 { 00:26:37.109 "name": null, 00:26:37.109 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:37.109 "is_configured": false, 00:26:37.109 "data_offset": 2048, 00:26:37.109 "data_size": 63488 00:26:37.109 }, 00:26:37.109 { 00:26:37.109 "name": null, 00:26:37.109 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:37.109 "is_configured": false, 00:26:37.109 "data_offset": 2048, 00:26:37.109 "data_size": 63488 00:26:37.109 } 00:26:37.109 ] 00:26:37.109 }' 00:26:37.109 11:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:37.109 11:08:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.046 11:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@485 -- # '[' 4 -gt 2 ']' 00:26:38.046 11:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:38.046 [2024-07-25 11:08:44.981797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:38.046 [2024-07-25 11:08:44.981863] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:38.046 [2024-07-25 11:08:44.981888] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80 00:26:38.046 [2024-07-25 11:08:44.981906] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:38.046 [2024-07-25 11:08:44.982477] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:38.046 [2024-07-25 11:08:44.982506] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt2 00:26:38.046 [2024-07-25 11:08:44.982597] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:38.046 [2024-07-25 11:08:44.982630] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:38.046 pt2 00:26:38.046 11:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@488 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:38.305 [2024-07-25 11:08:45.202421] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:26:38.305 11:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@489 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:26:38.305 11:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:38.305 11:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:38.305 11:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:38.305 11:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:38.305 11:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:38.305 11:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:38.305 11:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:38.305 11:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:38.305 11:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:38.305 11:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:38.305 11:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:26:38.564 11:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:38.564 "name": "raid_bdev1", 00:26:38.564 "uuid": "5611a13b-0118-4f51-8afc-bc2d4e1622a2", 00:26:38.564 "strip_size_kb": 0, 00:26:38.564 "state": "configuring", 00:26:38.564 "raid_level": "raid1", 00:26:38.564 "superblock": true, 00:26:38.564 "num_base_bdevs": 4, 00:26:38.564 "num_base_bdevs_discovered": 1, 00:26:38.564 "num_base_bdevs_operational": 4, 00:26:38.564 "base_bdevs_list": [ 00:26:38.564 { 00:26:38.564 "name": "pt1", 00:26:38.564 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:38.564 "is_configured": true, 00:26:38.564 "data_offset": 2048, 00:26:38.564 "data_size": 63488 00:26:38.564 }, 00:26:38.564 { 00:26:38.564 "name": null, 00:26:38.564 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:38.564 "is_configured": false, 00:26:38.564 "data_offset": 2048, 00:26:38.564 "data_size": 63488 00:26:38.564 }, 00:26:38.564 { 00:26:38.564 "name": null, 00:26:38.565 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:38.565 "is_configured": false, 00:26:38.565 "data_offset": 2048, 00:26:38.565 "data_size": 63488 00:26:38.565 }, 00:26:38.565 { 00:26:38.565 "name": null, 00:26:38.565 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:38.565 "is_configured": false, 00:26:38.565 "data_offset": 2048, 00:26:38.565 "data_size": 63488 00:26:38.565 } 00:26:38.565 ] 00:26:38.565 }' 00:26:38.565 11:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:38.565 11:08:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.132 11:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:26:39.132 11:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:26:39.132 11:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:39.132 [2024-07-25 11:08:46.201081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:39.132 [2024-07-25 11:08:46.201159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:39.132 [2024-07-25 11:08:46.201193] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042f80 00:26:39.132 [2024-07-25 11:08:46.201209] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:39.132 [2024-07-25 11:08:46.201770] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:39.132 [2024-07-25 11:08:46.201794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:39.132 [2024-07-25 11:08:46.201889] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:39.132 [2024-07-25 11:08:46.201915] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:39.132 pt2 00:26:39.132 11:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:26:39.132 11:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:26:39.132 11:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:39.392 [2024-07-25 11:08:46.425715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:39.392 [2024-07-25 11:08:46.425772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:39.392 [2024-07-25 11:08:46.425799] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043280 00:26:39.392 [2024-07-25 11:08:46.425815] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:26:39.392 [2024-07-25 11:08:46.426411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:39.392 [2024-07-25 11:08:46.426440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:39.392 [2024-07-25 11:08:46.426530] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:26:39.392 [2024-07-25 11:08:46.426556] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:39.392 pt3 00:26:39.392 11:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:26:39.392 11:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:26:39.392 11:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:39.652 [2024-07-25 11:08:46.654353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:39.652 [2024-07-25 11:08:46.654410] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:39.652 [2024-07-25 11:08:46.654437] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043580 00:26:39.652 [2024-07-25 11:08:46.654453] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:39.652 [2024-07-25 11:08:46.654994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:39.652 [2024-07-25 11:08:46.655019] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:39.652 [2024-07-25 11:08:46.655115] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:26:39.652 [2024-07-25 11:08:46.655154] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:39.652 [2024-07-25 11:08:46.655367] 
bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:39.652 [2024-07-25 11:08:46.655381] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:39.652 [2024-07-25 11:08:46.655685] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:26:39.652 [2024-07-25 11:08:46.655930] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:39.652 [2024-07-25 11:08:46.655947] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:26:39.652 [2024-07-25 11:08:46.656155] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:39.652 pt4 00:26:39.652 11:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:26:39.652 11:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:26:39.652 11:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:39.652 11:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:39.652 11:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:39.652 11:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:39.652 11:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:39.652 11:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:39.652 11:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:39.652 11:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:39.652 11:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:39.652 11:08:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:39.652 11:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:39.652 11:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:39.911 11:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:39.911 "name": "raid_bdev1", 00:26:39.911 "uuid": "5611a13b-0118-4f51-8afc-bc2d4e1622a2", 00:26:39.911 "strip_size_kb": 0, 00:26:39.911 "state": "online", 00:26:39.911 "raid_level": "raid1", 00:26:39.911 "superblock": true, 00:26:39.911 "num_base_bdevs": 4, 00:26:39.911 "num_base_bdevs_discovered": 4, 00:26:39.911 "num_base_bdevs_operational": 4, 00:26:39.911 "base_bdevs_list": [ 00:26:39.911 { 00:26:39.911 "name": "pt1", 00:26:39.911 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:39.911 "is_configured": true, 00:26:39.911 "data_offset": 2048, 00:26:39.911 "data_size": 63488 00:26:39.911 }, 00:26:39.911 { 00:26:39.911 "name": "pt2", 00:26:39.911 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:39.911 "is_configured": true, 00:26:39.911 "data_offset": 2048, 00:26:39.911 "data_size": 63488 00:26:39.911 }, 00:26:39.911 { 00:26:39.911 "name": "pt3", 00:26:39.911 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:39.911 "is_configured": true, 00:26:39.911 "data_offset": 2048, 00:26:39.911 "data_size": 63488 00:26:39.911 }, 00:26:39.911 { 00:26:39.911 "name": "pt4", 00:26:39.911 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:39.911 "is_configured": true, 00:26:39.911 "data_offset": 2048, 00:26:39.911 "data_size": 63488 00:26:39.911 } 00:26:39.911 ] 00:26:39.911 }' 00:26:39.911 11:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:39.911 11:08:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:26:40.848 11:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:26:40.848 11:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:26:40.848 11:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:40.848 11:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:40.848 11:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:40.848 11:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:26:40.848 11:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:40.848 11:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:40.848 [2024-07-25 11:08:47.902101] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:40.848 11:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:40.848 "name": "raid_bdev1", 00:26:40.848 "aliases": [ 00:26:40.848 "5611a13b-0118-4f51-8afc-bc2d4e1622a2" 00:26:40.848 ], 00:26:40.848 "product_name": "Raid Volume", 00:26:40.848 "block_size": 512, 00:26:40.848 "num_blocks": 63488, 00:26:40.848 "uuid": "5611a13b-0118-4f51-8afc-bc2d4e1622a2", 00:26:40.848 "assigned_rate_limits": { 00:26:40.848 "rw_ios_per_sec": 0, 00:26:40.848 "rw_mbytes_per_sec": 0, 00:26:40.848 "r_mbytes_per_sec": 0, 00:26:40.848 "w_mbytes_per_sec": 0 00:26:40.848 }, 00:26:40.848 "claimed": false, 00:26:40.848 "zoned": false, 00:26:40.848 "supported_io_types": { 00:26:40.848 "read": true, 00:26:40.848 "write": true, 00:26:40.848 "unmap": false, 00:26:40.848 "flush": false, 00:26:40.848 "reset": true, 00:26:40.848 "nvme_admin": false, 00:26:40.848 "nvme_io": false, 00:26:40.848 "nvme_io_md": false, 
00:26:40.848 "write_zeroes": true, 00:26:40.848 "zcopy": false, 00:26:40.848 "get_zone_info": false, 00:26:40.848 "zone_management": false, 00:26:40.848 "zone_append": false, 00:26:40.848 "compare": false, 00:26:40.848 "compare_and_write": false, 00:26:40.848 "abort": false, 00:26:40.848 "seek_hole": false, 00:26:40.848 "seek_data": false, 00:26:40.848 "copy": false, 00:26:40.848 "nvme_iov_md": false 00:26:40.848 }, 00:26:40.848 "memory_domains": [ 00:26:40.848 { 00:26:40.848 "dma_device_id": "system", 00:26:40.848 "dma_device_type": 1 00:26:40.848 }, 00:26:40.848 { 00:26:40.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:40.848 "dma_device_type": 2 00:26:40.848 }, 00:26:40.848 { 00:26:40.848 "dma_device_id": "system", 00:26:40.848 "dma_device_type": 1 00:26:40.848 }, 00:26:40.848 { 00:26:40.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:40.848 "dma_device_type": 2 00:26:40.848 }, 00:26:40.848 { 00:26:40.848 "dma_device_id": "system", 00:26:40.848 "dma_device_type": 1 00:26:40.848 }, 00:26:40.848 { 00:26:40.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:40.848 "dma_device_type": 2 00:26:40.848 }, 00:26:40.848 { 00:26:40.848 "dma_device_id": "system", 00:26:40.848 "dma_device_type": 1 00:26:40.848 }, 00:26:40.848 { 00:26:40.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:40.848 "dma_device_type": 2 00:26:40.848 } 00:26:40.848 ], 00:26:40.848 "driver_specific": { 00:26:40.848 "raid": { 00:26:40.848 "uuid": "5611a13b-0118-4f51-8afc-bc2d4e1622a2", 00:26:40.848 "strip_size_kb": 0, 00:26:40.848 "state": "online", 00:26:40.848 "raid_level": "raid1", 00:26:40.848 "superblock": true, 00:26:40.848 "num_base_bdevs": 4, 00:26:40.848 "num_base_bdevs_discovered": 4, 00:26:40.848 "num_base_bdevs_operational": 4, 00:26:40.848 "base_bdevs_list": [ 00:26:40.848 { 00:26:40.848 "name": "pt1", 00:26:40.848 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:40.848 "is_configured": true, 00:26:40.848 "data_offset": 2048, 00:26:40.848 "data_size": 63488 00:26:40.848 
}, 00:26:40.848 { 00:26:40.848 "name": "pt2", 00:26:40.848 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:40.848 "is_configured": true, 00:26:40.848 "data_offset": 2048, 00:26:40.848 "data_size": 63488 00:26:40.848 }, 00:26:40.848 { 00:26:40.848 "name": "pt3", 00:26:40.848 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:40.848 "is_configured": true, 00:26:40.848 "data_offset": 2048, 00:26:40.848 "data_size": 63488 00:26:40.848 }, 00:26:40.848 { 00:26:40.848 "name": "pt4", 00:26:40.848 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:40.848 "is_configured": true, 00:26:40.848 "data_offset": 2048, 00:26:40.848 "data_size": 63488 00:26:40.848 } 00:26:40.848 ] 00:26:40.848 } 00:26:40.848 } 00:26:40.848 }' 00:26:40.848 11:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:41.108 11:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:26:41.108 pt2 00:26:41.108 pt3 00:26:41.108 pt4' 00:26:41.108 11:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:41.108 11:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:26:41.108 11:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:41.367 11:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:41.367 "name": "pt1", 00:26:41.367 "aliases": [ 00:26:41.367 "00000000-0000-0000-0000-000000000001" 00:26:41.367 ], 00:26:41.367 "product_name": "passthru", 00:26:41.367 "block_size": 512, 00:26:41.367 "num_blocks": 65536, 00:26:41.367 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:41.367 "assigned_rate_limits": { 00:26:41.367 "rw_ios_per_sec": 0, 00:26:41.367 "rw_mbytes_per_sec": 0, 00:26:41.367 "r_mbytes_per_sec": 0, 
00:26:41.367 "w_mbytes_per_sec": 0 00:26:41.367 }, 00:26:41.367 "claimed": true, 00:26:41.367 "claim_type": "exclusive_write", 00:26:41.367 "zoned": false, 00:26:41.367 "supported_io_types": { 00:26:41.367 "read": true, 00:26:41.367 "write": true, 00:26:41.367 "unmap": true, 00:26:41.367 "flush": true, 00:26:41.367 "reset": true, 00:26:41.367 "nvme_admin": false, 00:26:41.367 "nvme_io": false, 00:26:41.367 "nvme_io_md": false, 00:26:41.367 "write_zeroes": true, 00:26:41.367 "zcopy": true, 00:26:41.367 "get_zone_info": false, 00:26:41.367 "zone_management": false, 00:26:41.367 "zone_append": false, 00:26:41.367 "compare": false, 00:26:41.367 "compare_and_write": false, 00:26:41.367 "abort": true, 00:26:41.367 "seek_hole": false, 00:26:41.367 "seek_data": false, 00:26:41.367 "copy": true, 00:26:41.367 "nvme_iov_md": false 00:26:41.367 }, 00:26:41.367 "memory_domains": [ 00:26:41.367 { 00:26:41.367 "dma_device_id": "system", 00:26:41.367 "dma_device_type": 1 00:26:41.367 }, 00:26:41.367 { 00:26:41.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:41.367 "dma_device_type": 2 00:26:41.367 } 00:26:41.367 ], 00:26:41.367 "driver_specific": { 00:26:41.367 "passthru": { 00:26:41.367 "name": "pt1", 00:26:41.367 "base_bdev_name": "malloc1" 00:26:41.367 } 00:26:41.367 } 00:26:41.367 }' 00:26:41.367 11:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:41.626 11:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:41.626 11:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:41.626 11:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:41.626 11:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:41.626 11:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:41.626 11:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:41.626 
11:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:41.626 11:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:41.626 11:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:41.885 11:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:41.885 11:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:41.885 11:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:41.885 11:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:41.885 11:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:26:42.142 11:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:42.142 "name": "pt2", 00:26:42.142 "aliases": [ 00:26:42.143 "00000000-0000-0000-0000-000000000002" 00:26:42.143 ], 00:26:42.143 "product_name": "passthru", 00:26:42.143 "block_size": 512, 00:26:42.143 "num_blocks": 65536, 00:26:42.143 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:42.143 "assigned_rate_limits": { 00:26:42.143 "rw_ios_per_sec": 0, 00:26:42.143 "rw_mbytes_per_sec": 0, 00:26:42.143 "r_mbytes_per_sec": 0, 00:26:42.143 "w_mbytes_per_sec": 0 00:26:42.143 }, 00:26:42.143 "claimed": true, 00:26:42.143 "claim_type": "exclusive_write", 00:26:42.143 "zoned": false, 00:26:42.143 "supported_io_types": { 00:26:42.143 "read": true, 00:26:42.143 "write": true, 00:26:42.143 "unmap": true, 00:26:42.143 "flush": true, 00:26:42.143 "reset": true, 00:26:42.143 "nvme_admin": false, 00:26:42.143 "nvme_io": false, 00:26:42.143 "nvme_io_md": false, 00:26:42.143 "write_zeroes": true, 00:26:42.143 "zcopy": true, 00:26:42.143 "get_zone_info": false, 00:26:42.143 "zone_management": false, 
00:26:42.143 "zone_append": false, 00:26:42.143 "compare": false, 00:26:42.143 "compare_and_write": false, 00:26:42.143 "abort": true, 00:26:42.143 "seek_hole": false, 00:26:42.143 "seek_data": false, 00:26:42.143 "copy": true, 00:26:42.143 "nvme_iov_md": false 00:26:42.143 }, 00:26:42.143 "memory_domains": [ 00:26:42.143 { 00:26:42.143 "dma_device_id": "system", 00:26:42.143 "dma_device_type": 1 00:26:42.143 }, 00:26:42.143 { 00:26:42.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:42.143 "dma_device_type": 2 00:26:42.143 } 00:26:42.143 ], 00:26:42.143 "driver_specific": { 00:26:42.143 "passthru": { 00:26:42.143 "name": "pt2", 00:26:42.143 "base_bdev_name": "malloc2" 00:26:42.143 } 00:26:42.143 } 00:26:42.143 }' 00:26:42.143 11:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:42.143 11:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:42.143 11:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:42.143 11:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:42.143 11:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:42.143 11:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:42.143 11:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:42.402 11:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:42.402 11:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:42.402 11:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:42.402 11:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:42.402 11:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:42.402 11:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for 
name in $base_bdev_names 00:26:42.402 11:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:26:42.402 11:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:42.661 11:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:42.661 "name": "pt3", 00:26:42.661 "aliases": [ 00:26:42.661 "00000000-0000-0000-0000-000000000003" 00:26:42.661 ], 00:26:42.661 "product_name": "passthru", 00:26:42.661 "block_size": 512, 00:26:42.661 "num_blocks": 65536, 00:26:42.661 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:42.661 "assigned_rate_limits": { 00:26:42.661 "rw_ios_per_sec": 0, 00:26:42.661 "rw_mbytes_per_sec": 0, 00:26:42.661 "r_mbytes_per_sec": 0, 00:26:42.661 "w_mbytes_per_sec": 0 00:26:42.661 }, 00:26:42.661 "claimed": true, 00:26:42.661 "claim_type": "exclusive_write", 00:26:42.661 "zoned": false, 00:26:42.661 "supported_io_types": { 00:26:42.661 "read": true, 00:26:42.661 "write": true, 00:26:42.661 "unmap": true, 00:26:42.661 "flush": true, 00:26:42.661 "reset": true, 00:26:42.662 "nvme_admin": false, 00:26:42.662 "nvme_io": false, 00:26:42.662 "nvme_io_md": false, 00:26:42.662 "write_zeroes": true, 00:26:42.662 "zcopy": true, 00:26:42.662 "get_zone_info": false, 00:26:42.662 "zone_management": false, 00:26:42.662 "zone_append": false, 00:26:42.662 "compare": false, 00:26:42.662 "compare_and_write": false, 00:26:42.662 "abort": true, 00:26:42.662 "seek_hole": false, 00:26:42.662 "seek_data": false, 00:26:42.662 "copy": true, 00:26:42.662 "nvme_iov_md": false 00:26:42.662 }, 00:26:42.662 "memory_domains": [ 00:26:42.662 { 00:26:42.662 "dma_device_id": "system", 00:26:42.662 "dma_device_type": 1 00:26:42.662 }, 00:26:42.662 { 00:26:42.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:42.662 "dma_device_type": 2 00:26:42.662 } 00:26:42.662 ], 00:26:42.662 
"driver_specific": { 00:26:42.662 "passthru": { 00:26:42.662 "name": "pt3", 00:26:42.662 "base_bdev_name": "malloc3" 00:26:42.662 } 00:26:42.662 } 00:26:42.662 }' 00:26:42.662 11:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:42.662 11:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:42.662 11:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:42.662 11:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:42.662 11:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:42.662 11:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:42.662 11:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:42.662 11:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:42.662 11:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:42.662 11:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:42.921 11:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:42.921 11:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:42.921 11:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:42.921 11:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:26:42.921 11:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:43.195 11:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:43.195 "name": "pt4", 00:26:43.195 "aliases": [ 00:26:43.195 "00000000-0000-0000-0000-000000000004" 00:26:43.195 ], 00:26:43.195 "product_name": 
"passthru", 00:26:43.195 "block_size": 512, 00:26:43.195 "num_blocks": 65536, 00:26:43.195 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:43.195 "assigned_rate_limits": { 00:26:43.195 "rw_ios_per_sec": 0, 00:26:43.195 "rw_mbytes_per_sec": 0, 00:26:43.195 "r_mbytes_per_sec": 0, 00:26:43.195 "w_mbytes_per_sec": 0 00:26:43.195 }, 00:26:43.195 "claimed": true, 00:26:43.195 "claim_type": "exclusive_write", 00:26:43.195 "zoned": false, 00:26:43.195 "supported_io_types": { 00:26:43.195 "read": true, 00:26:43.195 "write": true, 00:26:43.195 "unmap": true, 00:26:43.195 "flush": true, 00:26:43.195 "reset": true, 00:26:43.195 "nvme_admin": false, 00:26:43.195 "nvme_io": false, 00:26:43.195 "nvme_io_md": false, 00:26:43.195 "write_zeroes": true, 00:26:43.195 "zcopy": true, 00:26:43.195 "get_zone_info": false, 00:26:43.195 "zone_management": false, 00:26:43.195 "zone_append": false, 00:26:43.195 "compare": false, 00:26:43.195 "compare_and_write": false, 00:26:43.195 "abort": true, 00:26:43.195 "seek_hole": false, 00:26:43.195 "seek_data": false, 00:26:43.195 "copy": true, 00:26:43.195 "nvme_iov_md": false 00:26:43.195 }, 00:26:43.195 "memory_domains": [ 00:26:43.195 { 00:26:43.195 "dma_device_id": "system", 00:26:43.195 "dma_device_type": 1 00:26:43.195 }, 00:26:43.195 { 00:26:43.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:43.195 "dma_device_type": 2 00:26:43.195 } 00:26:43.195 ], 00:26:43.195 "driver_specific": { 00:26:43.195 "passthru": { 00:26:43.195 "name": "pt4", 00:26:43.196 "base_bdev_name": "malloc4" 00:26:43.196 } 00:26:43.196 } 00:26:43.196 }' 00:26:43.196 11:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:43.196 11:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:43.196 11:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:43.196 11:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:43.196 11:08:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:43.196 11:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:43.196 11:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:43.490 11:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:43.490 11:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:43.490 11:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:43.490 11:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:43.490 11:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:43.490 11:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:43.490 11:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:26:43.748 [2024-07-25 11:08:50.649651] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:43.749 11:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@502 -- # '[' 5611a13b-0118-4f51-8afc-bc2d4e1622a2 '!=' 5611a13b-0118-4f51-8afc-bc2d4e1622a2 ']' 00:26:43.749 11:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:26:43.749 11:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:43.749 11:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:26:43.749 11:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@508 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:44.008 [2024-07-25 11:08:50.869775] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:26:44.008 11:08:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:44.008 11:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:44.008 11:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:44.008 11:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:44.008 11:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:44.008 11:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:44.008 11:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:44.008 11:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:44.008 11:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:44.008 11:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:44.008 11:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:44.008 11:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:44.576 11:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:44.576 "name": "raid_bdev1", 00:26:44.576 "uuid": "5611a13b-0118-4f51-8afc-bc2d4e1622a2", 00:26:44.576 "strip_size_kb": 0, 00:26:44.576 "state": "online", 00:26:44.576 "raid_level": "raid1", 00:26:44.576 "superblock": true, 00:26:44.576 "num_base_bdevs": 4, 00:26:44.576 "num_base_bdevs_discovered": 3, 00:26:44.576 "num_base_bdevs_operational": 3, 00:26:44.576 "base_bdevs_list": [ 00:26:44.576 { 00:26:44.576 "name": null, 00:26:44.576 "uuid": "00000000-0000-0000-0000-000000000000", 
00:26:44.576 "is_configured": false, 00:26:44.576 "data_offset": 2048, 00:26:44.576 "data_size": 63488 00:26:44.576 }, 00:26:44.576 { 00:26:44.576 "name": "pt2", 00:26:44.576 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:44.576 "is_configured": true, 00:26:44.576 "data_offset": 2048, 00:26:44.576 "data_size": 63488 00:26:44.576 }, 00:26:44.576 { 00:26:44.576 "name": "pt3", 00:26:44.576 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:44.576 "is_configured": true, 00:26:44.576 "data_offset": 2048, 00:26:44.576 "data_size": 63488 00:26:44.576 }, 00:26:44.576 { 00:26:44.576 "name": "pt4", 00:26:44.576 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:44.576 "is_configured": true, 00:26:44.576 "data_offset": 2048, 00:26:44.576 "data_size": 63488 00:26:44.576 } 00:26:44.576 ] 00:26:44.576 }' 00:26:44.576 11:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:44.576 11:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:45.142 11:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:45.142 [2024-07-25 11:08:52.181275] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:45.142 [2024-07-25 11:08:52.181313] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:45.142 [2024-07-25 11:08:52.181398] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:45.142 [2024-07-25 11:08:52.181488] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:45.142 [2024-07-25 11:08:52.181505] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:26:45.142 11:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:45.142 11:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:26:45.410 11:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:26:45.410 11:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:26:45.410 11:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:26:45.410 11:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:26:45.410 11:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:45.675 11:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:45.675 11:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:26:45.675 11:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:26:45.934 11:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:45.934 11:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:26:45.934 11:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:26:46.193 11:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:26:46.193 11:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:26:46.193 11:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:26:46.193 11:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs 
- 1 )) 00:26:46.193 11:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:46.193 [2024-07-25 11:08:53.308229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:46.193 [2024-07-25 11:08:53.308282] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:46.193 [2024-07-25 11:08:53.308311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043880 00:26:46.193 [2024-07-25 11:08:53.308328] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:46.193 [2024-07-25 11:08:53.311063] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:46.193 [2024-07-25 11:08:53.311097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:46.193 [2024-07-25 11:08:53.311199] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:46.193 [2024-07-25 11:08:53.311263] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:46.452 pt2 00:26:46.452 11:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:26:46.452 11:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:46.452 11:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:46.452 11:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:46.452 11:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:46.452 11:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:46.452 11:08:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:46.452 11:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:46.452 11:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:46.452 11:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:46.452 11:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:46.452 11:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:46.452 11:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:46.452 "name": "raid_bdev1", 00:26:46.452 "uuid": "5611a13b-0118-4f51-8afc-bc2d4e1622a2", 00:26:46.452 "strip_size_kb": 0, 00:26:46.452 "state": "configuring", 00:26:46.452 "raid_level": "raid1", 00:26:46.452 "superblock": true, 00:26:46.452 "num_base_bdevs": 4, 00:26:46.452 "num_base_bdevs_discovered": 1, 00:26:46.452 "num_base_bdevs_operational": 3, 00:26:46.452 "base_bdevs_list": [ 00:26:46.452 { 00:26:46.452 "name": null, 00:26:46.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:46.452 "is_configured": false, 00:26:46.452 "data_offset": 2048, 00:26:46.452 "data_size": 63488 00:26:46.452 }, 00:26:46.452 { 00:26:46.452 "name": "pt2", 00:26:46.452 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:46.452 "is_configured": true, 00:26:46.452 "data_offset": 2048, 00:26:46.452 "data_size": 63488 00:26:46.452 }, 00:26:46.452 { 00:26:46.452 "name": null, 00:26:46.452 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:46.452 "is_configured": false, 00:26:46.452 "data_offset": 2048, 00:26:46.452 "data_size": 63488 00:26:46.452 }, 00:26:46.452 { 00:26:46.452 "name": null, 00:26:46.452 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:46.452 "is_configured": false, 00:26:46.452 
"data_offset": 2048, 00:26:46.452 "data_size": 63488 00:26:46.452 } 00:26:46.452 ] 00:26:46.452 }' 00:26:46.452 11:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:46.452 11:08:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:47.022 11:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:26:47.022 11:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:26:47.022 11:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:47.281 [2024-07-25 11:08:54.347179] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:47.281 [2024-07-25 11:08:54.347243] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:47.281 [2024-07-25 11:08:54.347271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043e80 00:26:47.281 [2024-07-25 11:08:54.347286] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:47.281 [2024-07-25 11:08:54.347856] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:47.281 [2024-07-25 11:08:54.347888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:47.281 [2024-07-25 11:08:54.347979] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:26:47.281 [2024-07-25 11:08:54.348005] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:47.281 pt3 00:26:47.281 11:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@530 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:26:47.281 11:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 
00:26:47.281 11:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:47.281 11:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:47.281 11:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:47.281 11:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:47.281 11:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:47.281 11:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:47.281 11:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:47.281 11:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:47.281 11:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:47.281 11:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:47.540 11:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:47.540 "name": "raid_bdev1", 00:26:47.540 "uuid": "5611a13b-0118-4f51-8afc-bc2d4e1622a2", 00:26:47.540 "strip_size_kb": 0, 00:26:47.540 "state": "configuring", 00:26:47.540 "raid_level": "raid1", 00:26:47.540 "superblock": true, 00:26:47.540 "num_base_bdevs": 4, 00:26:47.540 "num_base_bdevs_discovered": 2, 00:26:47.540 "num_base_bdevs_operational": 3, 00:26:47.540 "base_bdevs_list": [ 00:26:47.540 { 00:26:47.540 "name": null, 00:26:47.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:47.540 "is_configured": false, 00:26:47.540 "data_offset": 2048, 00:26:47.540 "data_size": 63488 00:26:47.540 }, 00:26:47.540 { 00:26:47.540 "name": "pt2", 00:26:47.540 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:26:47.540 "is_configured": true, 00:26:47.540 "data_offset": 2048, 00:26:47.540 "data_size": 63488 00:26:47.540 }, 00:26:47.540 { 00:26:47.540 "name": "pt3", 00:26:47.540 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:47.540 "is_configured": true, 00:26:47.540 "data_offset": 2048, 00:26:47.540 "data_size": 63488 00:26:47.540 }, 00:26:47.540 { 00:26:47.540 "name": null, 00:26:47.540 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:47.540 "is_configured": false, 00:26:47.540 "data_offset": 2048, 00:26:47.540 "data_size": 63488 00:26:47.540 } 00:26:47.540 ] 00:26:47.540 }' 00:26:47.540 11:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:47.540 11:08:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:48.107 11:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i++ )) 00:26:48.107 11:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:26:48.107 11:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:26:48.107 11:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:48.365 [2024-07-25 11:08:55.378100] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:48.365 [2024-07-25 11:08:55.378172] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:48.365 [2024-07-25 11:08:55.378201] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000044180 00:26:48.365 [2024-07-25 11:08:55.378217] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:48.365 [2024-07-25 11:08:55.378786] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:48.365 [2024-07-25 
11:08:55.378812] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:48.365 [2024-07-25 11:08:55.378901] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:26:48.365 [2024-07-25 11:08:55.378932] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:48.365 [2024-07-25 11:08:55.379106] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:26:48.365 [2024-07-25 11:08:55.379120] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:48.366 [2024-07-25 11:08:55.379461] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:26:48.366 [2024-07-25 11:08:55.379684] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:26:48.366 [2024-07-25 11:08:55.379740] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:26:48.366 [2024-07-25 11:08:55.379908] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:48.366 pt4 00:26:48.366 11:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:48.366 11:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:48.366 11:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:48.366 11:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:48.366 11:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:48.366 11:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:48.366 11:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:48.366 11:08:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:48.366 11:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:48.366 11:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:48.366 11:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:48.366 11:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:48.625 11:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:48.625 "name": "raid_bdev1", 00:26:48.625 "uuid": "5611a13b-0118-4f51-8afc-bc2d4e1622a2", 00:26:48.625 "strip_size_kb": 0, 00:26:48.625 "state": "online", 00:26:48.625 "raid_level": "raid1", 00:26:48.625 "superblock": true, 00:26:48.625 "num_base_bdevs": 4, 00:26:48.625 "num_base_bdevs_discovered": 3, 00:26:48.625 "num_base_bdevs_operational": 3, 00:26:48.625 "base_bdevs_list": [ 00:26:48.625 { 00:26:48.625 "name": null, 00:26:48.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:48.625 "is_configured": false, 00:26:48.625 "data_offset": 2048, 00:26:48.625 "data_size": 63488 00:26:48.625 }, 00:26:48.625 { 00:26:48.625 "name": "pt2", 00:26:48.625 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:48.625 "is_configured": true, 00:26:48.625 "data_offset": 2048, 00:26:48.625 "data_size": 63488 00:26:48.625 }, 00:26:48.625 { 00:26:48.625 "name": "pt3", 00:26:48.625 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:48.625 "is_configured": true, 00:26:48.625 "data_offset": 2048, 00:26:48.625 "data_size": 63488 00:26:48.625 }, 00:26:48.625 { 00:26:48.625 "name": "pt4", 00:26:48.625 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:48.625 "is_configured": true, 00:26:48.625 "data_offset": 2048, 00:26:48.625 "data_size": 63488 00:26:48.625 } 00:26:48.625 ] 00:26:48.625 }' 
00:26:48.625 11:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:48.625 11:08:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:49.193 11:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:49.451 [2024-07-25 11:08:56.400851] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:49.451 [2024-07-25 11:08:56.400882] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:49.452 [2024-07-25 11:08:56.400965] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:49.452 [2024-07-25 11:08:56.401053] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:49.452 [2024-07-25 11:08:56.401072] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:26:49.452 11:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:49.452 11:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:26:49.710 11:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:26:49.710 11:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:26:49.710 11:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@547 -- # '[' 4 -gt 2 ']' 00:26:49.710 11:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # i=3 00:26:49.710 11:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@550 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:26:49.968 11:08:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:49.968 [2024-07-25 11:08:57.086654] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:49.968 [2024-07-25 11:08:57.086717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:49.968 [2024-07-25 11:08:57.086739] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000044480 00:26:49.968 [2024-07-25 11:08:57.086758] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:50.227 [2024-07-25 11:08:57.089502] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:50.227 [2024-07-25 11:08:57.089538] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:50.227 [2024-07-25 11:08:57.089622] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:50.227 [2024-07-25 11:08:57.089672] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:50.227 [2024-07-25 11:08:57.089856] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:26:50.227 [2024-07-25 11:08:57.089880] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:50.227 [2024-07-25 11:08:57.089900] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:26:50.227 [2024-07-25 11:08:57.089968] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:50.227 [2024-07-25 11:08:57.090083] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:50.227 pt1 00:26:50.227 11:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 4 -gt 2 ']' 00:26:50.227 11:08:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@560 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:26:50.227 11:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:50.227 11:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:50.227 11:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:50.227 11:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:50.227 11:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:50.227 11:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:50.228 11:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:50.228 11:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:50.228 11:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:50.228 11:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:50.228 11:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:50.228 11:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:50.228 "name": "raid_bdev1", 00:26:50.228 "uuid": "5611a13b-0118-4f51-8afc-bc2d4e1622a2", 00:26:50.228 "strip_size_kb": 0, 00:26:50.228 "state": "configuring", 00:26:50.228 "raid_level": "raid1", 00:26:50.228 "superblock": true, 00:26:50.228 "num_base_bdevs": 4, 00:26:50.228 "num_base_bdevs_discovered": 2, 00:26:50.228 "num_base_bdevs_operational": 3, 00:26:50.228 "base_bdevs_list": [ 00:26:50.228 { 00:26:50.228 "name": null, 00:26:50.228 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:26:50.228 "is_configured": false, 00:26:50.228 "data_offset": 2048, 00:26:50.228 "data_size": 63488 00:26:50.228 }, 00:26:50.228 { 00:26:50.228 "name": "pt2", 00:26:50.228 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:50.228 "is_configured": true, 00:26:50.228 "data_offset": 2048, 00:26:50.228 "data_size": 63488 00:26:50.228 }, 00:26:50.228 { 00:26:50.228 "name": "pt3", 00:26:50.228 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:50.228 "is_configured": true, 00:26:50.228 "data_offset": 2048, 00:26:50.228 "data_size": 63488 00:26:50.228 }, 00:26:50.228 { 00:26:50.228 "name": null, 00:26:50.228 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:50.228 "is_configured": false, 00:26:50.228 "data_offset": 2048, 00:26:50.228 "data_size": 63488 00:26:50.228 } 00:26:50.228 ] 00:26:50.228 }' 00:26:50.228 11:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:50.228 11:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:51.163 11:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:26:51.163 11:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:26:51.422 11:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@561 -- # [[ false == \f\a\l\s\e ]] 00:26:51.422 11:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:51.681 [2024-07-25 11:08:58.642864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:51.681 [2024-07-25 11:08:58.642926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:51.681 
[2024-07-25 11:08:58.642954] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000044a80 00:26:51.681 [2024-07-25 11:08:58.642969] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:51.681 [2024-07-25 11:08:58.643562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:51.681 [2024-07-25 11:08:58.643587] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:51.681 [2024-07-25 11:08:58.643677] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:26:51.681 [2024-07-25 11:08:58.643703] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:51.681 [2024-07-25 11:08:58.643887] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:26:51.681 [2024-07-25 11:08:58.643902] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:51.681 [2024-07-25 11:08:58.644202] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:26:51.681 [2024-07-25 11:08:58.644415] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:26:51.681 [2024-07-25 11:08:58.644432] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:26:51.681 [2024-07-25 11:08:58.644618] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:51.681 pt4 00:26:51.681 11:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:51.681 11:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:51.681 11:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:51.681 11:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:51.681 
11:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:51.681 11:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:51.681 11:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:51.681 11:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:51.681 11:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:51.681 11:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:51.681 11:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:51.681 11:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:51.941 11:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:51.941 "name": "raid_bdev1", 00:26:51.941 "uuid": "5611a13b-0118-4f51-8afc-bc2d4e1622a2", 00:26:51.941 "strip_size_kb": 0, 00:26:51.941 "state": "online", 00:26:51.941 "raid_level": "raid1", 00:26:51.941 "superblock": true, 00:26:51.941 "num_base_bdevs": 4, 00:26:51.941 "num_base_bdevs_discovered": 3, 00:26:51.941 "num_base_bdevs_operational": 3, 00:26:51.941 "base_bdevs_list": [ 00:26:51.941 { 00:26:51.941 "name": null, 00:26:51.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:51.941 "is_configured": false, 00:26:51.941 "data_offset": 2048, 00:26:51.941 "data_size": 63488 00:26:51.941 }, 00:26:51.941 { 00:26:51.941 "name": "pt2", 00:26:51.941 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:51.941 "is_configured": true, 00:26:51.941 "data_offset": 2048, 00:26:51.941 "data_size": 63488 00:26:51.941 }, 00:26:51.941 { 00:26:51.941 "name": "pt3", 00:26:51.941 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:51.941 
"is_configured": true, 00:26:51.941 "data_offset": 2048, 00:26:51.941 "data_size": 63488 00:26:51.941 }, 00:26:51.941 { 00:26:51.941 "name": "pt4", 00:26:51.941 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:51.941 "is_configured": true, 00:26:51.941 "data_offset": 2048, 00:26:51.941 "data_size": 63488 00:26:51.941 } 00:26:51.941 ] 00:26:51.941 }' 00:26:51.941 11:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:51.941 11:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:52.508 11:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:26:52.508 11:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:26:52.767 11:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:26:52.767 11:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:52.767 11:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:26:52.767 [2024-07-25 11:08:59.882615] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:53.026 11:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@573 -- # '[' 5611a13b-0118-4f51-8afc-bc2d4e1622a2 '!=' 5611a13b-0118-4f51-8afc-bc2d4e1622a2 ']' 00:26:53.026 11:08:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@578 -- # killprocess 3686342 00:26:53.026 11:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 3686342 ']' 00:26:53.026 11:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 3686342 00:26:53.026 11:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # 
uname 00:26:53.026 11:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:53.027 11:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3686342 00:26:53.027 11:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:53.027 11:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:53.027 11:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3686342' 00:26:53.027 killing process with pid 3686342 00:26:53.027 11:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 3686342 00:26:53.027 [2024-07-25 11:08:59.958598] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:53.027 [2024-07-25 11:08:59.958698] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:53.027 11:08:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 3686342 00:26:53.027 [2024-07-25 11:08:59.958786] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:53.027 [2024-07-25 11:08:59.958810] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:26:53.595 [2024-07-25 11:09:00.435316] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:55.503 11:09:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@580 -- # return 0 00:26:55.503 00:26:55.503 real 0m27.581s 00:26:55.503 user 0m48.135s 00:26:55.503 sys 0m4.665s 00:26:55.503 11:09:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:55.503 11:09:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:55.503 ************************************ 00:26:55.503 END TEST raid_superblock_test 00:26:55.503 
************************************ 00:26:55.503 11:09:02 bdev_raid -- bdev/bdev_raid.sh@950 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:26:55.503 11:09:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:26:55.503 11:09:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:55.503 11:09:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:55.503 ************************************ 00:26:55.503 START TEST raid_read_error_test 00:26:55.503 ************************************ 00:26:55.503 11:09:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:26:55.503 11:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:26:55.503 11:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:26:55.503 11:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=read 00:26:55.503 11:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:26:55.503 11:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:26:55.503 11:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:26:55.503 11:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:26:55.503 11:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:26:55.503 11:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:26:55.503 11:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:26:55.503 11:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:26:55.503 11:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:26:55.503 11:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 
00:26:55.503 11:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:26:55.503 11:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:26:55.503 11:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:26:55.503 11:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:26:55.503 11:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:55.503 11:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:26:55.503 11:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:26:55.503 11:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:26:55.503 11:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:26:55.503 11:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:26:55.503 11:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:26:55.503 11:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:26:55.503 11:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:26:55.503 11:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:26:55.503 11:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.IHNq0g0oXG 00:26:55.503 11:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=3691502 00:26:55.503 11:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 3691502 /var/tmp/spdk-raid.sock 00:26:55.504 11:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r 
/var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:26:55.504 11:09:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 3691502 ']' 00:26:55.504 11:09:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:55.504 11:09:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:55.504 11:09:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:55.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:55.504 11:09:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:55.504 11:09:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:55.504 [2024-07-25 11:09:02.364861] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:26:55.504 [2024-07-25 11:09:02.364989] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3691502 ] 00:26:55.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.504 EAL: Requested device 0000:3d:01.0 cannot be used 00:26:55.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.504 EAL: Requested device 0000:3d:01.1 cannot be used 00:26:55.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.504 EAL: Requested device 0000:3d:01.2 cannot be used 00:26:55.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.504 EAL: Requested device 0000:3d:01.3 cannot be used 00:26:55.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.504 EAL: Requested device 0000:3d:01.4 cannot be used 00:26:55.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.504 EAL: Requested device 0000:3d:01.5 cannot be used 00:26:55.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.504 EAL: Requested device 0000:3d:01.6 cannot be used 00:26:55.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.504 EAL: Requested device 0000:3d:01.7 cannot be used 00:26:55.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.504 EAL: Requested device 0000:3d:02.0 cannot be used 00:26:55.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.504 EAL: Requested device 0000:3d:02.1 cannot be used 00:26:55.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.504 EAL: Requested device 0000:3d:02.2 cannot be used 00:26:55.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.504 EAL: Requested device 0000:3d:02.3 cannot be used 
00:26:55.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.504 EAL: Requested device 0000:3d:02.4 cannot be used 00:26:55.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.504 EAL: Requested device 0000:3d:02.5 cannot be used 00:26:55.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.504 EAL: Requested device 0000:3d:02.6 cannot be used 00:26:55.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.504 EAL: Requested device 0000:3d:02.7 cannot be used 00:26:55.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.504 EAL: Requested device 0000:3f:01.0 cannot be used 00:26:55.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.504 EAL: Requested device 0000:3f:01.1 cannot be used 00:26:55.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.504 EAL: Requested device 0000:3f:01.2 cannot be used 00:26:55.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.504 EAL: Requested device 0000:3f:01.3 cannot be used 00:26:55.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.504 EAL: Requested device 0000:3f:01.4 cannot be used 00:26:55.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.504 EAL: Requested device 0000:3f:01.5 cannot be used 00:26:55.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.504 EAL: Requested device 0000:3f:01.6 cannot be used 00:26:55.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.504 EAL: Requested device 0000:3f:01.7 cannot be used 00:26:55.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.504 EAL: Requested device 0000:3f:02.0 cannot be used 00:26:55.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.504 EAL: Requested device 0000:3f:02.1 cannot be used 00:26:55.504 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.504 EAL: Requested device 0000:3f:02.2 cannot be used 00:26:55.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.504 EAL: Requested device 0000:3f:02.3 cannot be used 00:26:55.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.504 EAL: Requested device 0000:3f:02.4 cannot be used 00:26:55.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.504 EAL: Requested device 0000:3f:02.5 cannot be used 00:26:55.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.504 EAL: Requested device 0000:3f:02.6 cannot be used 00:26:55.504 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:26:55.504 EAL: Requested device 0000:3f:02.7 cannot be used 00:26:55.504 [2024-07-25 11:09:02.590530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:55.763 [2024-07-25 11:09:02.871464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.332 [2024-07-25 11:09:03.209817] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:56.332 [2024-07-25 11:09:03.209853] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:56.332 11:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:56.332 11:09:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:26:56.333 11:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:26:56.333 11:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:56.591 BaseBdev1_malloc 00:26:56.591 11:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:26:56.851 true 00:26:56.851 11:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:26:57.110 [2024-07-25 11:09:04.105445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:26:57.110 [2024-07-25 11:09:04.105509] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:57.110 [2024-07-25 11:09:04.105537] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:26:57.110 [2024-07-25 11:09:04.105560] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:57.110 [2024-07-25 11:09:04.108379] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:57.110 [2024-07-25 11:09:04.108419] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:57.110 BaseBdev1 00:26:57.110 11:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:26:57.110 11:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:57.370 BaseBdev2_malloc 00:26:57.370 11:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:26:57.629 true 00:26:57.629 11:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:26:57.888 [2024-07-25 11:09:04.826650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
EE_BaseBdev2_malloc 00:26:57.888 [2024-07-25 11:09:04.826712] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:57.888 [2024-07-25 11:09:04.826738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:26:57.888 [2024-07-25 11:09:04.826759] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:57.888 [2024-07-25 11:09:04.829566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:57.888 [2024-07-25 11:09:04.829604] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:57.888 BaseBdev2 00:26:57.888 11:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:26:57.888 11:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:58.176 BaseBdev3_malloc 00:26:58.176 11:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:26:58.435 true 00:26:58.435 11:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:26:58.435 [2024-07-25 11:09:05.553430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:26:58.435 [2024-07-25 11:09:05.553490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:58.435 [2024-07-25 11:09:05.553518] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:26:58.435 [2024-07-25 11:09:05.553536] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:58.695 [2024-07-25 
11:09:05.556318] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:58.695 [2024-07-25 11:09:05.556354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:58.695 BaseBdev3 00:26:58.695 11:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:26:58.695 11:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:26:58.954 BaseBdev4_malloc 00:26:58.954 11:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:26:58.954 true 00:26:59.213 11:09:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:26:59.213 [2024-07-25 11:09:06.292971] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:26:59.213 [2024-07-25 11:09:06.293037] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:59.213 [2024-07-25 11:09:06.293066] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042680 00:26:59.213 [2024-07-25 11:09:06.293085] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:59.213 [2024-07-25 11:09:06.295883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:59.213 [2024-07-25 11:09:06.295922] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:59.213 BaseBdev4 00:26:59.213 11:09:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:26:59.473 [2024-07-25 11:09:06.517612] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:59.473 [2024-07-25 11:09:06.520000] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:59.473 [2024-07-25 11:09:06.520101] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:59.473 [2024-07-25 11:09:06.520191] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:59.473 [2024-07-25 11:09:06.520466] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:26:59.473 [2024-07-25 11:09:06.520486] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:59.473 [2024-07-25 11:09:06.520838] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:26:59.473 [2024-07-25 11:09:06.521122] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:26:59.473 [2024-07-25 11:09:06.521149] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:26:59.473 [2024-07-25 11:09:06.521380] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:59.473 11:09:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:59.473 11:09:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:59.473 11:09:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:59.473 11:09:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:59.473 11:09:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:59.473 11:09:06 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:59.473 11:09:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:59.473 11:09:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:59.473 11:09:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:59.473 11:09:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:59.473 11:09:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:59.473 11:09:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:59.732 11:09:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:59.732 "name": "raid_bdev1", 00:26:59.732 "uuid": "f7e0bb16-81ca-4768-9a56-9a78b1675dae", 00:26:59.732 "strip_size_kb": 0, 00:26:59.732 "state": "online", 00:26:59.732 "raid_level": "raid1", 00:26:59.732 "superblock": true, 00:26:59.732 "num_base_bdevs": 4, 00:26:59.732 "num_base_bdevs_discovered": 4, 00:26:59.732 "num_base_bdevs_operational": 4, 00:26:59.732 "base_bdevs_list": [ 00:26:59.732 { 00:26:59.732 "name": "BaseBdev1", 00:26:59.732 "uuid": "0245dcb9-a603-5ca7-998a-c945b19585f1", 00:26:59.732 "is_configured": true, 00:26:59.732 "data_offset": 2048, 00:26:59.732 "data_size": 63488 00:26:59.732 }, 00:26:59.732 { 00:26:59.732 "name": "BaseBdev2", 00:26:59.732 "uuid": "ce6aae4f-b4bc-5500-970a-ba5c7e9fbdef", 00:26:59.732 "is_configured": true, 00:26:59.732 "data_offset": 2048, 00:26:59.733 "data_size": 63488 00:26:59.733 }, 00:26:59.733 { 00:26:59.733 "name": "BaseBdev3", 00:26:59.733 "uuid": "f29d98c8-251f-5c03-98fb-1d22bec9fe5d", 00:26:59.733 "is_configured": true, 00:26:59.733 "data_offset": 2048, 00:26:59.733 "data_size": 63488 
00:26:59.733 }, 00:26:59.733 { 00:26:59.733 "name": "BaseBdev4", 00:26:59.733 "uuid": "a581c329-d2f0-577c-abdb-5f970c1d188e", 00:26:59.733 "is_configured": true, 00:26:59.733 "data_offset": 2048, 00:26:59.733 "data_size": 63488 00:26:59.733 } 00:26:59.733 ] 00:26:59.733 }' 00:26:59.733 11:09:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:59.733 11:09:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:00.301 11:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:27:00.301 11:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:27:00.561 [2024-07-25 11:09:07.426096] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:27:01.499 11:09:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:27:01.499 11:09:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:27:01.499 11:09:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:27:01.499 11:09:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # [[ read = \w\r\i\t\e ]] 00:27:01.499 11:09:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # expected_num_base_bdevs=4 00:27:01.499 11:09:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:27:01.499 11:09:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:01.499 11:09:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:01.499 11:09:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- 
# local raid_level=raid1 00:27:01.499 11:09:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:01.499 11:09:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:01.499 11:09:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:01.499 11:09:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:01.499 11:09:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:01.499 11:09:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:01.499 11:09:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:01.499 11:09:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:01.758 11:09:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:01.758 "name": "raid_bdev1", 00:27:01.758 "uuid": "f7e0bb16-81ca-4768-9a56-9a78b1675dae", 00:27:01.758 "strip_size_kb": 0, 00:27:01.758 "state": "online", 00:27:01.758 "raid_level": "raid1", 00:27:01.758 "superblock": true, 00:27:01.758 "num_base_bdevs": 4, 00:27:01.758 "num_base_bdevs_discovered": 4, 00:27:01.758 "num_base_bdevs_operational": 4, 00:27:01.758 "base_bdevs_list": [ 00:27:01.758 { 00:27:01.758 "name": "BaseBdev1", 00:27:01.758 "uuid": "0245dcb9-a603-5ca7-998a-c945b19585f1", 00:27:01.758 "is_configured": true, 00:27:01.758 "data_offset": 2048, 00:27:01.758 "data_size": 63488 00:27:01.758 }, 00:27:01.758 { 00:27:01.758 "name": "BaseBdev2", 00:27:01.758 "uuid": "ce6aae4f-b4bc-5500-970a-ba5c7e9fbdef", 00:27:01.758 "is_configured": true, 00:27:01.758 "data_offset": 2048, 00:27:01.758 "data_size": 63488 00:27:01.758 }, 00:27:01.758 { 00:27:01.758 "name": "BaseBdev3", 00:27:01.758 
"uuid": "f29d98c8-251f-5c03-98fb-1d22bec9fe5d", 00:27:01.758 "is_configured": true, 00:27:01.758 "data_offset": 2048, 00:27:01.758 "data_size": 63488 00:27:01.758 }, 00:27:01.758 { 00:27:01.758 "name": "BaseBdev4", 00:27:01.758 "uuid": "a581c329-d2f0-577c-abdb-5f970c1d188e", 00:27:01.758 "is_configured": true, 00:27:01.758 "data_offset": 2048, 00:27:01.758 "data_size": 63488 00:27:01.758 } 00:27:01.758 ] 00:27:01.758 }' 00:27:01.758 11:09:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:01.758 11:09:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:02.326 11:09:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:02.584 [2024-07-25 11:09:09.568695] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:02.584 [2024-07-25 11:09:09.568737] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:02.584 [2024-07-25 11:09:09.572211] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:02.584 [2024-07-25 11:09:09.572272] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:02.584 [2024-07-25 11:09:09.572418] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:02.584 [2024-07-25 11:09:09.572444] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:27:02.584 0 00:27:02.584 11:09:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 3691502 00:27:02.584 11:09:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 3691502 ']' 00:27:02.584 11:09:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 3691502 00:27:02.584 11:09:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # uname 00:27:02.584 11:09:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:02.584 11:09:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3691502 00:27:02.584 11:09:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:02.584 11:09:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:02.584 11:09:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3691502' 00:27:02.584 killing process with pid 3691502 00:27:02.584 11:09:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 3691502 00:27:02.584 [2024-07-25 11:09:09.646563] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:02.584 11:09:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 3691502 00:27:03.152 [2024-07-25 11:09:10.021221] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:05.059 11:09:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.IHNq0g0oXG 00:27:05.059 11:09:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:27:05.059 11:09:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:27:05.059 11:09:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 00:27:05.059 11:09:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:27:05.059 11:09:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:05.059 11:09:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:27:05.059 11:09:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:27:05.059 00:27:05.059 real 0m9.584s 00:27:05.059 user 0m13.706s 00:27:05.059 
sys 0m1.482s 00:27:05.059 11:09:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:05.059 11:09:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:05.059 ************************************ 00:27:05.059 END TEST raid_read_error_test 00:27:05.059 ************************************ 00:27:05.059 11:09:11 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:27:05.059 11:09:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:27:05.059 11:09:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:05.059 11:09:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:05.059 ************************************ 00:27:05.060 START TEST raid_write_error_test 00:27:05.060 ************************************ 00:27:05.060 11:09:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:27:05.060 11:09:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # local raid_level=raid1 00:27:05.060 11:09:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # local num_base_bdevs=4 00:27:05.060 11:09:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@806 -- # local error_io_type=write 00:27:05.060 11:09:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i = 1 )) 00:27:05.060 11:09:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:27:05.060 11:09:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev1 00:27:05.060 11:09:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:27:05.060 11:09:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:27:05.060 11:09:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev2 00:27:05.060 11:09:11 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@807 -- # (( i++ )) 00:27:05.060 11:09:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:27:05.060 11:09:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev3 00:27:05.060 11:09:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:27:05.060 11:09:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:27:05.060 11:09:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # echo BaseBdev4 00:27:05.060 11:09:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i++ )) 00:27:05.060 11:09:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # (( i <= num_base_bdevs )) 00:27:05.060 11:09:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:05.060 11:09:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # local base_bdevs 00:27:05.060 11:09:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # local raid_bdev_name=raid_bdev1 00:27:05.060 11:09:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # local strip_size 00:27:05.060 11:09:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # local create_arg 00:27:05.060 11:09:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # local bdevperf_log 00:27:05.060 11:09:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # local fail_per_s 00:27:05.060 11:09:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # '[' raid1 '!=' raid1 ']' 00:27:05.060 11:09:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@818 -- # strip_size=0 00:27:05.060 11:09:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # mktemp -p /raidtest 00:27:05.060 11:09:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # bdevperf_log=/raidtest/tmp.M3JakeWSDE 00:27:05.060 11:09:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # raid_pid=3693607 00:27:05.060 11:09:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # waitforlisten 3693607 /var/tmp/spdk-raid.sock 00:27:05.060 11:09:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:27:05.060 11:09:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 3693607 ']' 00:27:05.060 11:09:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:05.060 11:09:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:05.060 11:09:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:05.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:05.060 11:09:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:05.060 11:09:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:05.060 [2024-07-25 11:09:12.031805] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:27:05.060 [2024-07-25 11:09:12.031929] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3693607 ] 00:27:05.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.060 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:05.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.060 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:05.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.060 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:05.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.060 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:05.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.060 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:05.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.060 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:05.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.060 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:05.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.060 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:05.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.060 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:05.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.060 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:05.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.060 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:05.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.060 EAL: Requested device 0000:3d:02.3 cannot be used 
00:27:05.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.060 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:05.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.060 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:05.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.060 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:05.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.060 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:05.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.060 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:05.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.060 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:05.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.060 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:05.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.060 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:05.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.060 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:05.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.060 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:05.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.060 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:05.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.060 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:05.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.060 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:05.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.060 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:05.060 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.060 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:05.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.060 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:05.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.060 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:05.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.060 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:05.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.060 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:05.060 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:05.060 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:05.320 [2024-07-25 11:09:12.258471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.580 [2024-07-25 11:09:12.547636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.839 [2024-07-25 11:09:12.894084] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:05.839 [2024-07-25 11:09:12.894121] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:06.098 11:09:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:06.098 11:09:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:27:06.098 11:09:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:27:06.098 11:09:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:06.357 BaseBdev1_malloc 00:27:06.357 11:09:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:27:06.617 true 00:27:06.617 11:09:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:27:06.876 [2024-07-25 11:09:13.791607] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:27:06.876 [2024-07-25 11:09:13.791669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:06.876 [2024-07-25 11:09:13.791693] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f980 00:27:06.876 [2024-07-25 11:09:13.791714] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:06.876 [2024-07-25 11:09:13.794424] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:06.876 [2024-07-25 11:09:13.794463] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:06.876 BaseBdev1 00:27:06.876 11:09:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:27:06.876 11:09:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:07.135 BaseBdev2_malloc 00:27:07.136 11:09:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:27:07.395 true 00:27:07.395 11:09:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:27:07.654 [2024-07-25 11:09:14.531314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on EE_BaseBdev2_malloc 00:27:07.654 [2024-07-25 11:09:14.531379] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:07.654 [2024-07-25 11:09:14.531404] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040880 00:27:07.654 [2024-07-25 11:09:14.531426] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:07.654 [2024-07-25 11:09:14.534193] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:07.654 [2024-07-25 11:09:14.534232] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:07.654 BaseBdev2 00:27:07.654 11:09:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:27:07.654 11:09:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:07.913 BaseBdev3_malloc 00:27:07.913 11:09:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:27:08.171 true 00:27:08.171 11:09:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:27:08.171 [2024-07-25 11:09:15.269350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:27:08.171 [2024-07-25 11:09:15.269413] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:08.172 [2024-07-25 11:09:15.269440] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:27:08.172 [2024-07-25 11:09:15.269458] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:08.172 
[2024-07-25 11:09:15.272244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:08.172 [2024-07-25 11:09:15.272282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:08.172 BaseBdev3 00:27:08.172 11:09:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@828 -- # for bdev in "${base_bdevs[@]}" 00:27:08.172 11:09:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:27:08.740 BaseBdev4_malloc 00:27:08.740 11:09:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:27:08.740 true 00:27:08.740 11:09:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:27:08.999 [2024-07-25 11:09:15.994956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:27:08.999 [2024-07-25 11:09:15.995015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:08.999 [2024-07-25 11:09:15.995039] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042680 00:27:08.999 [2024-07-25 11:09:15.995057] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:08.999 [2024-07-25 11:09:15.997776] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:08.999 [2024-07-25 11:09:15.997815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:27:08.999 BaseBdev4 00:27:08.999 11:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:27:09.258 [2024-07-25 11:09:16.223662] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:09.258 [2024-07-25 11:09:16.226034] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:09.258 [2024-07-25 11:09:16.226148] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:09.258 [2024-07-25 11:09:16.226233] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:09.258 [2024-07-25 11:09:16.226511] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:27:09.258 [2024-07-25 11:09:16.226532] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:09.258 [2024-07-25 11:09:16.226886] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:27:09.258 [2024-07-25 11:09:16.227192] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:27:09.258 [2024-07-25 11:09:16.227213] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:27:09.258 [2024-07-25 11:09:16.227450] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:09.258 11:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@836 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:27:09.258 11:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:09.258 11:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:09.258 11:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:09.258 11:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:09.258 11:09:16 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:09.258 11:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:09.258 11:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:09.258 11:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:09.258 11:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:09.259 11:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:09.259 11:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:09.518 11:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:09.518 "name": "raid_bdev1", 00:27:09.518 "uuid": "6dddda73-5390-45b0-b20f-a5b338d10941", 00:27:09.518 "strip_size_kb": 0, 00:27:09.518 "state": "online", 00:27:09.518 "raid_level": "raid1", 00:27:09.518 "superblock": true, 00:27:09.518 "num_base_bdevs": 4, 00:27:09.518 "num_base_bdevs_discovered": 4, 00:27:09.518 "num_base_bdevs_operational": 4, 00:27:09.518 "base_bdevs_list": [ 00:27:09.518 { 00:27:09.518 "name": "BaseBdev1", 00:27:09.518 "uuid": "cbd6a913-02d8-54ac-a882-ccbc1d69efc1", 00:27:09.518 "is_configured": true, 00:27:09.518 "data_offset": 2048, 00:27:09.518 "data_size": 63488 00:27:09.518 }, 00:27:09.518 { 00:27:09.518 "name": "BaseBdev2", 00:27:09.518 "uuid": "13f350cf-b7a3-5002-a8c3-392da4242ed1", 00:27:09.518 "is_configured": true, 00:27:09.518 "data_offset": 2048, 00:27:09.518 "data_size": 63488 00:27:09.518 }, 00:27:09.518 { 00:27:09.518 "name": "BaseBdev3", 00:27:09.518 "uuid": "b74f14f2-d21b-505d-a292-a5c2f2d2b9c9", 00:27:09.518 "is_configured": true, 00:27:09.518 "data_offset": 2048, 00:27:09.518 "data_size": 
63488 00:27:09.518 }, 00:27:09.518 { 00:27:09.518 "name": "BaseBdev4", 00:27:09.518 "uuid": "5a2866a9-3e0e-5bd9-be2b-0b39b6b73e68", 00:27:09.518 "is_configured": true, 00:27:09.518 "data_offset": 2048, 00:27:09.518 "data_size": 63488 00:27:09.518 } 00:27:09.518 ] 00:27:09.518 }' 00:27:09.518 11:09:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:09.518 11:09:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.086 11:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@840 -- # sleep 1 00:27:10.086 11:09:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:27:10.086 [2024-07-25 11:09:17.143969] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:27:11.021 11:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:27:11.280 [2024-07-25 11:09:18.256310] bdev_raid.c:2263:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:27:11.280 [2024-07-25 11:09:18.256381] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:11.280 [2024-07-25 11:09:18.256634] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000108b0 00:27:11.280 11:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # local expected_num_base_bdevs 00:27:11.280 11:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ raid1 = \r\a\i\d\1 ]] 00:27:11.280 11:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # [[ write = \w\r\i\t\e ]] 00:27:11.280 11:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # expected_num_base_bdevs=3 00:27:11.281 
11:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@851 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:11.281 11:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:11.281 11:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:11.281 11:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:11.281 11:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:11.281 11:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:11.281 11:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:11.281 11:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:11.281 11:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:11.281 11:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:11.281 11:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:11.281 11:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:11.540 11:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:11.540 "name": "raid_bdev1", 00:27:11.540 "uuid": "6dddda73-5390-45b0-b20f-a5b338d10941", 00:27:11.540 "strip_size_kb": 0, 00:27:11.540 "state": "online", 00:27:11.540 "raid_level": "raid1", 00:27:11.540 "superblock": true, 00:27:11.540 "num_base_bdevs": 4, 00:27:11.540 "num_base_bdevs_discovered": 3, 00:27:11.540 "num_base_bdevs_operational": 3, 00:27:11.540 "base_bdevs_list": [ 00:27:11.540 { 00:27:11.540 "name": null, 00:27:11.540 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:27:11.540 "is_configured": false, 00:27:11.540 "data_offset": 2048, 00:27:11.540 "data_size": 63488 00:27:11.540 }, 00:27:11.540 { 00:27:11.540 "name": "BaseBdev2", 00:27:11.540 "uuid": "13f350cf-b7a3-5002-a8c3-392da4242ed1", 00:27:11.540 "is_configured": true, 00:27:11.540 "data_offset": 2048, 00:27:11.540 "data_size": 63488 00:27:11.540 }, 00:27:11.540 { 00:27:11.540 "name": "BaseBdev3", 00:27:11.540 "uuid": "b74f14f2-d21b-505d-a292-a5c2f2d2b9c9", 00:27:11.540 "is_configured": true, 00:27:11.540 "data_offset": 2048, 00:27:11.540 "data_size": 63488 00:27:11.540 }, 00:27:11.540 { 00:27:11.540 "name": "BaseBdev4", 00:27:11.540 "uuid": "5a2866a9-3e0e-5bd9-be2b-0b39b6b73e68", 00:27:11.540 "is_configured": true, 00:27:11.540 "data_offset": 2048, 00:27:11.540 "data_size": 63488 00:27:11.540 } 00:27:11.540 ] 00:27:11.540 }' 00:27:11.540 11:09:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:11.540 11:09:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:12.108 11:09:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@853 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:12.367 [2024-07-25 11:09:19.302870] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:12.367 [2024-07-25 11:09:19.302911] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:12.367 [2024-07-25 11:09:19.306403] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:12.367 [2024-07-25 11:09:19.306460] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:12.367 [2024-07-25 11:09:19.306594] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:12.367 [2024-07-25 11:09:19.306611] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:27:12.367 0 00:27:12.367 11:09:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@855 -- # killprocess 3693607 00:27:12.367 11:09:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 3693607 ']' 00:27:12.367 11:09:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 3693607 00:27:12.367 11:09:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:27:12.367 11:09:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:12.367 11:09:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3693607 00:27:12.367 11:09:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:12.367 11:09:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:12.367 11:09:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3693607' 00:27:12.367 killing process with pid 3693607 00:27:12.367 11:09:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 3693607 00:27:12.367 [2024-07-25 11:09:19.380442] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:12.367 11:09:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 3693607 00:27:12.625 [2024-07-25 11:09:19.732990] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:14.570 11:09:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep -v Job /raidtest/tmp.M3JakeWSDE 00:27:14.570 11:09:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # grep raid_bdev1 00:27:14.570 11:09:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # awk '{print $6}' 00:27:14.570 11:09:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@859 -- # fail_per_s=0.00 
00:27:14.570 11:09:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@860 -- # has_redundancy raid1 00:27:14.570 11:09:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:14.570 11:09:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:27:14.570 11:09:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@861 -- # [[ 0.00 = \0\.\0\0 ]] 00:27:14.570 00:27:14.570 real 0m9.582s 00:27:14.570 user 0m13.744s 00:27:14.570 sys 0m1.463s 00:27:14.570 11:09:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:14.570 11:09:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.570 ************************************ 00:27:14.570 END TEST raid_write_error_test 00:27:14.570 ************************************ 00:27:14.570 11:09:21 bdev_raid -- bdev/bdev_raid.sh@955 -- # '[' true = true ']' 00:27:14.570 11:09:21 bdev_raid -- bdev/bdev_raid.sh@956 -- # for n in 2 4 00:27:14.570 11:09:21 bdev_raid -- bdev/bdev_raid.sh@957 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:27:14.570 11:09:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:27:14.570 11:09:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:14.570 11:09:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:14.570 ************************************ 00:27:14.570 START TEST raid_rebuild_test 00:27:14.570 ************************************ 00:27:14.570 11:09:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:27:14.570 11:09:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:27:14.570 11:09:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:27:14.570 11:09:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:27:14.570 11:09:21 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:27:14.570 11:09:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # local verify=true 00:27:14.570 11:09:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:27:14.570 11:09:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:14.570 11:09:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:27:14.570 11:09:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:27:14.570 11:09:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:14.570 11:09:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:27:14.570 11:09:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:27:14.570 11:09:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:14.570 11:09:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:14.570 11:09:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:27:14.570 11:09:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:27:14.570 11:09:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # local strip_size 00:27:14.570 11:09:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # local create_arg 00:27:14.570 11:09:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:27:14.570 11:09:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@594 -- # local data_offset 00:27:14.570 11:09:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:27:14.570 11:09:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:27:14.570 11:09:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:27:14.570 
11:09:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # raid_pid=3695285 00:27:14.570 11:09:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # waitforlisten 3695285 /var/tmp/spdk-raid.sock 00:27:14.570 11:09:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:14.570 11:09:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 3695285 ']' 00:27:14.570 11:09:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:14.570 11:09:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:14.570 11:09:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:14.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:14.570 11:09:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:14.570 11:09:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:14.830 [2024-07-25 11:09:21.691634] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:27:14.830 [2024-07-25 11:09:21.691757] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3695285 ] 00:27:14.830 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:14.830 Zero copy mechanism will not be used. 
00:27:14.830 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:14.830 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:14.831 [2024-07-25 11:09:21.916661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.091 [2024-07-25 11:09:22.178742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.660 [2024-07-25 11:09:22.515593] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:15.660 [2024-07-25 11:09:22.515629] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:15.660 11:09:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:15.660 11:09:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:27:15.660 11:09:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:27:15.660 11:09:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:15.918 BaseBdev1_malloc 00:27:15.918 11:09:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:16.179 [2024-07-25 11:09:23.182480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:16.179 [2024-07-25 11:09:23.182545] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base
bdev opened 00:27:16.179 [2024-07-25 11:09:23.182575] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:27:16.179 [2024-07-25 11:09:23.182597] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:16.179 [2024-07-25 11:09:23.185361] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:16.179 [2024-07-25 11:09:23.185402] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:16.179 BaseBdev1 00:27:16.179 11:09:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:27:16.179 11:09:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:16.438 BaseBdev2_malloc 00:27:16.438 11:09:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:16.698 [2024-07-25 11:09:23.688724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:16.698 [2024-07-25 11:09:23.688787] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:16.698 [2024-07-25 11:09:23.688813] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:27:16.698 [2024-07-25 11:09:23.688839] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:16.698 [2024-07-25 11:09:23.691590] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:16.698 [2024-07-25 11:09:23.691628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:16.698 BaseBdev2 00:27:16.698 11:09:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@622 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:27:16.957 spare_malloc 00:27:16.957 11:09:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:17.216 spare_delay 00:27:17.216 11:09:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:17.475 [2024-07-25 11:09:24.414433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:17.475 [2024-07-25 11:09:24.414493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:17.475 [2024-07-25 11:09:24.414521] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:27:17.475 [2024-07-25 11:09:24.414539] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:17.475 [2024-07-25 11:09:24.417312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:17.475 [2024-07-25 11:09:24.417364] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:17.475 spare 00:27:17.475 11:09:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:27:17.735 [2024-07-25 11:09:24.627011] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:17.735 [2024-07-25 11:09:24.629333] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:17.735 [2024-07-25 11:09:24.629435] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007780 00:27:17.735 [2024-07-25 11:09:24.629454] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:27:17.735 [2024-07-25 11:09:24.629819] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:27:17.735 [2024-07-25 11:09:24.630063] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:27:17.735 [2024-07-25 11:09:24.630081] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:27:17.735 [2024-07-25 11:09:24.630326] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:17.735 11:09:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:17.735 11:09:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:17.735 11:09:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:17.735 11:09:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:17.735 11:09:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:17.735 11:09:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:17.735 11:09:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:17.735 11:09:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:17.735 11:09:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:17.735 11:09:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:17.735 11:09:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:17.735 11:09:24 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:17.994 11:09:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:17.994 "name": "raid_bdev1", 00:27:17.994 "uuid": "5f2813a2-1417-483d-add5-be1a29818e49", 00:27:17.994 "strip_size_kb": 0, 00:27:17.994 "state": "online", 00:27:17.994 "raid_level": "raid1", 00:27:17.994 "superblock": false, 00:27:17.994 "num_base_bdevs": 2, 00:27:17.994 "num_base_bdevs_discovered": 2, 00:27:17.994 "num_base_bdevs_operational": 2, 00:27:17.994 "base_bdevs_list": [ 00:27:17.994 { 00:27:17.994 "name": "BaseBdev1", 00:27:17.994 "uuid": "5443fcd9-4f19-55e5-a2f6-4f3ae35b6c79", 00:27:17.994 "is_configured": true, 00:27:17.994 "data_offset": 0, 00:27:17.994 "data_size": 65536 00:27:17.994 }, 00:27:17.994 { 00:27:17.994 "name": "BaseBdev2", 00:27:17.994 "uuid": "6c06a8b1-d4ad-5040-a7df-7f13a5d2474f", 00:27:17.994 "is_configured": true, 00:27:17.994 "data_offset": 0, 00:27:17.994 "data_size": 65536 00:27:17.994 } 00:27:17.994 ] 00:27:17.994 }' 00:27:17.994 11:09:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:17.994 11:09:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:18.562 11:09:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:18.562 11:09:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:27:18.562 [2024-07-25 11:09:25.642066] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:18.562 11:09:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536 00:27:18.562 11:09:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:18.562 11:09:25 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:18.821 11:09:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:27:18.821 11:09:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:27:18.821 11:09:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:27:18.821 11:09:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:27:18.821 11:09:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:27:18.821 11:09:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:18.821 11:09:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:27:18.821 11:09:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:18.821 11:09:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:18.821 11:09:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:18.821 11:09:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:27:18.821 11:09:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:18.821 11:09:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:18.821 11:09:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:27:19.080 [2024-07-25 11:09:26.094998] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:27:19.080 /dev/nbd0 00:27:19.080 11:09:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:19.080 11:09:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:19.080 11:09:26 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:27:19.080 11:09:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:27:19.080 11:09:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:19.080 11:09:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:19.080 11:09:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:27:19.080 11:09:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:27:19.080 11:09:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:19.080 11:09:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:19.080 11:09:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:19.080 1+0 records in 00:27:19.080 1+0 records out 00:27:19.080 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247121 s, 16.6 MB/s 00:27:19.080 11:09:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:19.080 11:09:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:27:19.080 11:09:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:19.080 11:09:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:19.080 11:09:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:27:19.080 11:09:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:19.080 11:09:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:19.080 11:09:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@644 -- # 
'[' raid1 = raid5f ']' 00:27:19.080 11:09:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:27:19.080 11:09:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:27:24.381 65536+0 records in 00:27:24.381 65536+0 records out 00:27:24.381 33554432 bytes (34 MB, 32 MiB) copied, 4.70866 s, 7.1 MB/s 00:27:24.381 11:09:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:27:24.381 11:09:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:24.381 11:09:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:24.381 11:09:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:24.381 11:09:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:27:24.381 11:09:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:24.381 11:09:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:24.382 11:09:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:24.382 [2024-07-25 11:09:31.121371] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:24.382 11:09:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:24.382 11:09:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:24.382 11:09:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:24.382 11:09:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:24.382 11:09:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:24.382 11:09:31 bdev_raid.raid_rebuild_test 
-- bdev/nbd_common.sh@41 -- # break 00:27:24.382 11:09:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:27:24.382 11:09:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:27:24.382 [2024-07-25 11:09:31.338081] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:24.382 11:09:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:24.382 11:09:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:24.382 11:09:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:24.382 11:09:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:24.382 11:09:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:24.382 11:09:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:24.382 11:09:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:24.382 11:09:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:24.382 11:09:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:24.382 11:09:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:24.382 11:09:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:24.382 11:09:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:24.641 11:09:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:24.641 "name": "raid_bdev1", 00:27:24.641 "uuid": 
"5f2813a2-1417-483d-add5-be1a29818e49", 00:27:24.641 "strip_size_kb": 0, 00:27:24.641 "state": "online", 00:27:24.641 "raid_level": "raid1", 00:27:24.641 "superblock": false, 00:27:24.641 "num_base_bdevs": 2, 00:27:24.641 "num_base_bdevs_discovered": 1, 00:27:24.641 "num_base_bdevs_operational": 1, 00:27:24.641 "base_bdevs_list": [ 00:27:24.641 { 00:27:24.641 "name": null, 00:27:24.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:24.641 "is_configured": false, 00:27:24.641 "data_offset": 0, 00:27:24.641 "data_size": 65536 00:27:24.641 }, 00:27:24.641 { 00:27:24.641 "name": "BaseBdev2", 00:27:24.641 "uuid": "6c06a8b1-d4ad-5040-a7df-7f13a5d2474f", 00:27:24.641 "is_configured": true, 00:27:24.641 "data_offset": 0, 00:27:24.641 "data_size": 65536 00:27:24.641 } 00:27:24.641 ] 00:27:24.641 }' 00:27:24.641 11:09:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:24.641 11:09:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:25.209 11:09:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:25.209 [2024-07-25 11:09:32.324754] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:25.467 [2024-07-25 11:09:32.348627] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d14400 00:27:25.467 [2024-07-25 11:09:32.350923] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:25.467 11:09:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:26.414 11:09:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:26.414 11:09:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:26.414 11:09:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 
-- # local process_type=rebuild 00:27:26.414 11:09:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:26.414 11:09:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:26.414 11:09:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:26.414 11:09:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:26.674 11:09:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:26.674 "name": "raid_bdev1", 00:27:26.674 "uuid": "5f2813a2-1417-483d-add5-be1a29818e49", 00:27:26.674 "strip_size_kb": 0, 00:27:26.674 "state": "online", 00:27:26.674 "raid_level": "raid1", 00:27:26.674 "superblock": false, 00:27:26.674 "num_base_bdevs": 2, 00:27:26.674 "num_base_bdevs_discovered": 2, 00:27:26.674 "num_base_bdevs_operational": 2, 00:27:26.674 "process": { 00:27:26.674 "type": "rebuild", 00:27:26.674 "target": "spare", 00:27:26.674 "progress": { 00:27:26.674 "blocks": 24576, 00:27:26.674 "percent": 37 00:27:26.674 } 00:27:26.674 }, 00:27:26.674 "base_bdevs_list": [ 00:27:26.674 { 00:27:26.674 "name": "spare", 00:27:26.674 "uuid": "72ebfb6b-23fe-5ef5-a978-0363a6fbc111", 00:27:26.674 "is_configured": true, 00:27:26.674 "data_offset": 0, 00:27:26.674 "data_size": 65536 00:27:26.674 }, 00:27:26.674 { 00:27:26.674 "name": "BaseBdev2", 00:27:26.674 "uuid": "6c06a8b1-d4ad-5040-a7df-7f13a5d2474f", 00:27:26.674 "is_configured": true, 00:27:26.674 "data_offset": 0, 00:27:26.674 "data_size": 65536 00:27:26.674 } 00:27:26.674 ] 00:27:26.674 }' 00:27:26.674 11:09:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:26.674 11:09:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:26.674 11:09:33 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:26.674 11:09:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:26.674 11:09:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:26.933 [2024-07-25 11:09:33.903880] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:26.933 [2024-07-25 11:09:33.963890] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:26.933 [2024-07-25 11:09:33.963951] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:26.934 [2024-07-25 11:09:33.963972] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:26.934 [2024-07-25 11:09:33.963995] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:26.934 11:09:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:26.934 11:09:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:26.934 11:09:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:26.934 11:09:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:26.934 11:09:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:26.934 11:09:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:26.934 11:09:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:26.934 11:09:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:26.934 11:09:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:26.934 
11:09:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:26.934 11:09:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:26.934 11:09:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:27.192 11:09:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:27.192 "name": "raid_bdev1", 00:27:27.192 "uuid": "5f2813a2-1417-483d-add5-be1a29818e49", 00:27:27.192 "strip_size_kb": 0, 00:27:27.192 "state": "online", 00:27:27.192 "raid_level": "raid1", 00:27:27.192 "superblock": false, 00:27:27.192 "num_base_bdevs": 2, 00:27:27.192 "num_base_bdevs_discovered": 1, 00:27:27.192 "num_base_bdevs_operational": 1, 00:27:27.193 "base_bdevs_list": [ 00:27:27.193 { 00:27:27.193 "name": null, 00:27:27.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:27.193 "is_configured": false, 00:27:27.193 "data_offset": 0, 00:27:27.193 "data_size": 65536 00:27:27.193 }, 00:27:27.193 { 00:27:27.193 "name": "BaseBdev2", 00:27:27.193 "uuid": "6c06a8b1-d4ad-5040-a7df-7f13a5d2474f", 00:27:27.193 "is_configured": true, 00:27:27.193 "data_offset": 0, 00:27:27.193 "data_size": 65536 00:27:27.193 } 00:27:27.193 ] 00:27:27.193 }' 00:27:27.193 11:09:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:27.193 11:09:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:27.760 11:09:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:27.760 11:09:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:27.760 11:09:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:27.760 11:09:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:27.760 
11:09:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:27.760 11:09:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:27.760 11:09:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:28.019 11:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:28.019 "name": "raid_bdev1", 00:27:28.019 "uuid": "5f2813a2-1417-483d-add5-be1a29818e49", 00:27:28.019 "strip_size_kb": 0, 00:27:28.019 "state": "online", 00:27:28.019 "raid_level": "raid1", 00:27:28.019 "superblock": false, 00:27:28.019 "num_base_bdevs": 2, 00:27:28.019 "num_base_bdevs_discovered": 1, 00:27:28.019 "num_base_bdevs_operational": 1, 00:27:28.019 "base_bdevs_list": [ 00:27:28.019 { 00:27:28.019 "name": null, 00:27:28.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:28.019 "is_configured": false, 00:27:28.019 "data_offset": 0, 00:27:28.019 "data_size": 65536 00:27:28.019 }, 00:27:28.019 { 00:27:28.019 "name": "BaseBdev2", 00:27:28.019 "uuid": "6c06a8b1-d4ad-5040-a7df-7f13a5d2474f", 00:27:28.019 "is_configured": true, 00:27:28.019 "data_offset": 0, 00:27:28.019 "data_size": 65536 00:27:28.019 } 00:27:28.019 ] 00:27:28.019 }' 00:27:28.019 11:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:28.019 11:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:28.019 11:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:28.019 11:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:28.019 11:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev 
raid_bdev1 spare 00:27:28.278 [2024-07-25 11:09:35.322684] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:28.278 [2024-07-25 11:09:35.345774] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d144d0 00:27:28.278 [2024-07-25 11:09:35.348085] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:28.278 11:09:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@678 -- # sleep 1 00:27:29.654 11:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:29.654 11:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:29.654 11:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:29.654 11:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:29.654 11:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:29.654 11:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:29.654 11:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:29.654 11:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:29.654 "name": "raid_bdev1", 00:27:29.654 "uuid": "5f2813a2-1417-483d-add5-be1a29818e49", 00:27:29.654 "strip_size_kb": 0, 00:27:29.654 "state": "online", 00:27:29.654 "raid_level": "raid1", 00:27:29.654 "superblock": false, 00:27:29.654 "num_base_bdevs": 2, 00:27:29.654 "num_base_bdevs_discovered": 2, 00:27:29.654 "num_base_bdevs_operational": 2, 00:27:29.654 "process": { 00:27:29.654 "type": "rebuild", 00:27:29.654 "target": "spare", 00:27:29.654 "progress": { 00:27:29.654 "blocks": 24576, 00:27:29.654 "percent": 37 00:27:29.654 
} 00:27:29.654 }, 00:27:29.654 "base_bdevs_list": [ 00:27:29.654 { 00:27:29.654 "name": "spare", 00:27:29.654 "uuid": "72ebfb6b-23fe-5ef5-a978-0363a6fbc111", 00:27:29.654 "is_configured": true, 00:27:29.654 "data_offset": 0, 00:27:29.654 "data_size": 65536 00:27:29.654 }, 00:27:29.654 { 00:27:29.654 "name": "BaseBdev2", 00:27:29.654 "uuid": "6c06a8b1-d4ad-5040-a7df-7f13a5d2474f", 00:27:29.654 "is_configured": true, 00:27:29.654 "data_offset": 0, 00:27:29.654 "data_size": 65536 00:27:29.654 } 00:27:29.654 ] 00:27:29.654 }' 00:27:29.654 11:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:29.654 11:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:29.654 11:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:29.654 11:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:29.654 11:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:27:29.654 11:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:27:29.654 11:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:27:29.654 11:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:27:29.654 11:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # local timeout=860 00:27:29.654 11:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:29.654 11:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:29.654 11:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:29.654 11:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:29.654 11:09:36 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@184 -- # local target=spare 00:27:29.654 11:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:29.654 11:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:29.654 11:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:29.912 11:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:29.912 "name": "raid_bdev1", 00:27:29.912 "uuid": "5f2813a2-1417-483d-add5-be1a29818e49", 00:27:29.912 "strip_size_kb": 0, 00:27:29.912 "state": "online", 00:27:29.912 "raid_level": "raid1", 00:27:29.912 "superblock": false, 00:27:29.912 "num_base_bdevs": 2, 00:27:29.912 "num_base_bdevs_discovered": 2, 00:27:29.912 "num_base_bdevs_operational": 2, 00:27:29.912 "process": { 00:27:29.912 "type": "rebuild", 00:27:29.912 "target": "spare", 00:27:29.912 "progress": { 00:27:29.912 "blocks": 30720, 00:27:29.912 "percent": 46 00:27:29.912 } 00:27:29.912 }, 00:27:29.912 "base_bdevs_list": [ 00:27:29.912 { 00:27:29.912 "name": "spare", 00:27:29.912 "uuid": "72ebfb6b-23fe-5ef5-a978-0363a6fbc111", 00:27:29.912 "is_configured": true, 00:27:29.912 "data_offset": 0, 00:27:29.912 "data_size": 65536 00:27:29.912 }, 00:27:29.912 { 00:27:29.912 "name": "BaseBdev2", 00:27:29.912 "uuid": "6c06a8b1-d4ad-5040-a7df-7f13a5d2474f", 00:27:29.912 "is_configured": true, 00:27:29.912 "data_offset": 0, 00:27:29.912 "data_size": 65536 00:27:29.912 } 00:27:29.912 ] 00:27:29.912 }' 00:27:29.912 11:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:29.912 11:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:29.912 11:09:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:29.913 11:09:37 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:29.913 11:09:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:27:31.289 11:09:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:31.289 11:09:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:31.289 11:09:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:31.289 11:09:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:31.289 11:09:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:31.289 11:09:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:31.289 11:09:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:31.289 11:09:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:31.289 11:09:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:31.289 "name": "raid_bdev1", 00:27:31.289 "uuid": "5f2813a2-1417-483d-add5-be1a29818e49", 00:27:31.289 "strip_size_kb": 0, 00:27:31.289 "state": "online", 00:27:31.289 "raid_level": "raid1", 00:27:31.289 "superblock": false, 00:27:31.289 "num_base_bdevs": 2, 00:27:31.289 "num_base_bdevs_discovered": 2, 00:27:31.289 "num_base_bdevs_operational": 2, 00:27:31.289 "process": { 00:27:31.289 "type": "rebuild", 00:27:31.289 "target": "spare", 00:27:31.289 "progress": { 00:27:31.289 "blocks": 57344, 00:27:31.289 "percent": 87 00:27:31.289 } 00:27:31.289 }, 00:27:31.289 "base_bdevs_list": [ 00:27:31.289 { 00:27:31.289 "name": "spare", 00:27:31.289 "uuid": "72ebfb6b-23fe-5ef5-a978-0363a6fbc111", 00:27:31.289 "is_configured": true, 00:27:31.289 
"data_offset": 0, 00:27:31.289 "data_size": 65536 00:27:31.289 }, 00:27:31.289 { 00:27:31.289 "name": "BaseBdev2", 00:27:31.289 "uuid": "6c06a8b1-d4ad-5040-a7df-7f13a5d2474f", 00:27:31.289 "is_configured": true, 00:27:31.289 "data_offset": 0, 00:27:31.289 "data_size": 65536 00:27:31.289 } 00:27:31.289 ] 00:27:31.289 }' 00:27:31.289 11:09:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:31.289 11:09:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:31.289 11:09:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:31.289 11:09:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:31.289 11:09:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:27:31.548 [2024-07-25 11:09:38.573695] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:31.548 [2024-07-25 11:09:38.573769] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:31.548 [2024-07-25 11:09:38.573822] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:32.543 11:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:32.543 11:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:32.543 11:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:32.543 11:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:32.543 11:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:32.543 11:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:32.543 11:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:32.543 11:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:32.543 11:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:32.543 "name": "raid_bdev1", 00:27:32.543 "uuid": "5f2813a2-1417-483d-add5-be1a29818e49", 00:27:32.543 "strip_size_kb": 0, 00:27:32.543 "state": "online", 00:27:32.543 "raid_level": "raid1", 00:27:32.543 "superblock": false, 00:27:32.543 "num_base_bdevs": 2, 00:27:32.543 "num_base_bdevs_discovered": 2, 00:27:32.543 "num_base_bdevs_operational": 2, 00:27:32.543 "base_bdevs_list": [ 00:27:32.543 { 00:27:32.543 "name": "spare", 00:27:32.543 "uuid": "72ebfb6b-23fe-5ef5-a978-0363a6fbc111", 00:27:32.543 "is_configured": true, 00:27:32.543 "data_offset": 0, 00:27:32.543 "data_size": 65536 00:27:32.543 }, 00:27:32.543 { 00:27:32.543 "name": "BaseBdev2", 00:27:32.543 "uuid": "6c06a8b1-d4ad-5040-a7df-7f13a5d2474f", 00:27:32.543 "is_configured": true, 00:27:32.543 "data_offset": 0, 00:27:32.543 "data_size": 65536 00:27:32.543 } 00:27:32.543 ] 00:27:32.543 }' 00:27:32.543 11:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:32.543 11:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:32.543 11:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:32.803 11:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:27:32.803 11:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@724 -- # break 00:27:32.803 11:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:32.803 11:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:32.803 11:09:39 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:32.803 11:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:32.803 11:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:32.803 11:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:32.803 11:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:32.803 11:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:32.803 "name": "raid_bdev1", 00:27:32.803 "uuid": "5f2813a2-1417-483d-add5-be1a29818e49", 00:27:32.803 "strip_size_kb": 0, 00:27:32.803 "state": "online", 00:27:32.803 "raid_level": "raid1", 00:27:32.803 "superblock": false, 00:27:32.803 "num_base_bdevs": 2, 00:27:32.803 "num_base_bdevs_discovered": 2, 00:27:32.803 "num_base_bdevs_operational": 2, 00:27:32.803 "base_bdevs_list": [ 00:27:32.803 { 00:27:32.803 "name": "spare", 00:27:32.803 "uuid": "72ebfb6b-23fe-5ef5-a978-0363a6fbc111", 00:27:32.803 "is_configured": true, 00:27:32.803 "data_offset": 0, 00:27:32.803 "data_size": 65536 00:27:32.803 }, 00:27:32.803 { 00:27:32.803 "name": "BaseBdev2", 00:27:32.803 "uuid": "6c06a8b1-d4ad-5040-a7df-7f13a5d2474f", 00:27:32.803 "is_configured": true, 00:27:32.803 "data_offset": 0, 00:27:32.803 "data_size": 65536 00:27:32.803 } 00:27:32.803 ] 00:27:32.803 }' 00:27:32.803 11:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:33.062 11:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:33.062 11:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:33.062 11:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 
00:27:33.062 11:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:33.062 11:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:33.062 11:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:33.062 11:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:33.062 11:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:33.062 11:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:33.062 11:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:33.062 11:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:33.062 11:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:33.062 11:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:33.062 11:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:33.062 11:09:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:33.321 11:09:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:33.321 "name": "raid_bdev1", 00:27:33.321 "uuid": "5f2813a2-1417-483d-add5-be1a29818e49", 00:27:33.321 "strip_size_kb": 0, 00:27:33.321 "state": "online", 00:27:33.321 "raid_level": "raid1", 00:27:33.321 "superblock": false, 00:27:33.321 "num_base_bdevs": 2, 00:27:33.321 "num_base_bdevs_discovered": 2, 00:27:33.321 "num_base_bdevs_operational": 2, 00:27:33.321 "base_bdevs_list": [ 00:27:33.321 { 00:27:33.321 "name": "spare", 00:27:33.321 "uuid": "72ebfb6b-23fe-5ef5-a978-0363a6fbc111", 00:27:33.321 
"is_configured": true, 00:27:33.321 "data_offset": 0, 00:27:33.321 "data_size": 65536 00:27:33.321 }, 00:27:33.321 { 00:27:33.321 "name": "BaseBdev2", 00:27:33.321 "uuid": "6c06a8b1-d4ad-5040-a7df-7f13a5d2474f", 00:27:33.321 "is_configured": true, 00:27:33.321 "data_offset": 0, 00:27:33.321 "data_size": 65536 00:27:33.321 } 00:27:33.321 ] 00:27:33.321 }' 00:27:33.321 11:09:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:33.321 11:09:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:33.890 11:09:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:33.890 [2024-07-25 11:09:41.001631] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:33.890 [2024-07-25 11:09:41.001668] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:33.890 [2024-07-25 11:09:41.001750] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:33.890 [2024-07-25 11:09:41.001830] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:33.890 [2024-07-25 11:09:41.001847] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:27:34.149 11:09:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:34.149 11:09:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # jq length 00:27:34.149 11:09:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:27:34.149 11:09:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:27:34.149 11:09:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 
00:27:34.149 11:09:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:27:34.149 11:09:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:34.149 11:09:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:27:34.149 11:09:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:34.149 11:09:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:34.149 11:09:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:34.149 11:09:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:27:34.149 11:09:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:34.149 11:09:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:34.149 11:09:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:27:34.409 /dev/nbd0 00:27:34.409 11:09:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:34.409 11:09:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:34.409 11:09:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:27:34.409 11:09:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:27:34.409 11:09:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:34.409 11:09:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:34.409 11:09:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:27:34.409 11:09:41 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@873 -- # break 00:27:34.409 11:09:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:34.409 11:09:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:34.409 11:09:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:34.409 1+0 records in 00:27:34.409 1+0 records out 00:27:34.409 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269499 s, 15.2 MB/s 00:27:34.409 11:09:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:34.409 11:09:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:27:34.409 11:09:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:34.409 11:09:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:34.409 11:09:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:27:34.409 11:09:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:34.409 11:09:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:34.409 11:09:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:27:34.668 /dev/nbd1 00:27:34.668 11:09:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:34.668 11:09:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:34.668 11:09:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:27:34.668 11:09:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- 
# local i 00:27:34.668 11:09:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:34.668 11:09:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:34.668 11:09:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:27:34.668 11:09:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:27:34.668 11:09:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:34.927 11:09:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:34.927 11:09:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:34.927 1+0 records in 00:27:34.927 1+0 records out 00:27:34.927 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344661 s, 11.9 MB/s 00:27:34.927 11:09:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:34.927 11:09:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:27:34.927 11:09:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:34.927 11:09:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:34.927 11:09:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:27:34.927 11:09:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:34.927 11:09:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:34.927 11:09:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@753 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:27:34.927 11:09:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks 
/var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:27:34.927 11:09:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:34.927 11:09:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:34.927 11:09:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:34.927 11:09:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:27:34.927 11:09:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:34.927 11:09:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:35.186 11:09:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:35.186 11:09:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:35.186 11:09:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:35.186 11:09:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:35.186 11:09:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:35.186 11:09:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:35.186 11:09:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:27:35.186 11:09:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:27:35.186 11:09:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:35.186 11:09:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:35.445 11:09:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:35.445 11:09:42 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:35.445 11:09:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:35.445 11:09:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:35.445 11:09:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:35.445 11:09:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:35.445 11:09:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:27:35.445 11:09:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:27:35.446 11:09:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:27:35.446 11:09:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@798 -- # killprocess 3695285 00:27:35.446 11:09:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 3695285 ']' 00:27:35.446 11:09:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 3695285 00:27:35.446 11:09:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:27:35.446 11:09:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:35.446 11:09:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3695285 00:27:35.705 11:09:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:35.705 11:09:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:35.705 11:09:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3695285' 00:27:35.705 killing process with pid 3695285 00:27:35.705 11:09:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 3695285 00:27:35.705 Received shutdown signal, test time was about 60.000000 seconds 00:27:35.705 00:27:35.705 Latency(us) 00:27:35.705 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:35.705 =================================================================================================================== 00:27:35.705 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:35.705 [2024-07-25 11:09:42.572235] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:35.705 11:09:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 3695285 00:27:35.965 [2024-07-25 11:09:42.901748] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:37.867 11:09:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@800 -- # return 0 00:27:37.867 00:27:37.867 real 0m23.007s 00:27:37.867 user 0m30.257s 00:27:37.867 sys 0m4.575s 00:27:37.867 11:09:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:37.867 11:09:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:37.867 ************************************ 00:27:37.867 END TEST raid_rebuild_test 00:27:37.867 ************************************ 00:27:37.867 11:09:44 bdev_raid -- bdev/bdev_raid.sh@958 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:27:37.867 11:09:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:27:37.867 11:09:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:37.867 11:09:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:37.867 ************************************ 00:27:37.867 START TEST raid_rebuild_test_sb 00:27:37.867 ************************************ 00:27:37.867 11:09:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:27:37.867 11:09:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:27:37.867 11:09:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:27:37.867 11:09:44 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:27:37.867 11:09:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:27:37.867 11:09:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # local verify=true 00:27:37.867 11:09:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:27:37.867 11:09:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:37.867 11:09:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:27:37.867 11:09:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:27:37.867 11:09:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:37.867 11:09:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:27:37.867 11:09:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:27:37.867 11:09:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:27:37.867 11:09:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:37.867 11:09:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:27:37.867 11:09:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:27:37.867 11:09:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # local strip_size 00:27:37.867 11:09:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # local create_arg 00:27:37.867 11:09:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:27:37.867 11:09:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@594 -- # local data_offset 00:27:37.867 11:09:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:27:37.867 11:09:44 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@604 -- # strip_size=0 00:27:37.867 11:09:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:27:37.867 11:09:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:27:37.867 11:09:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # raid_pid=3699242 00:27:37.867 11:09:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # waitforlisten 3699242 /var/tmp/spdk-raid.sock 00:27:37.867 11:09:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:37.867 11:09:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 3699242 ']' 00:27:37.867 11:09:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:37.867 11:09:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:37.868 11:09:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:37.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:37.868 11:09:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:37.868 11:09:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:37.868 [2024-07-25 11:09:44.784727] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:27:37.868 [2024-07-25 11:09:44.784845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3699242 ] 00:27:37.868 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:37.868 Zero copy mechanism will not be used. 00:27:37.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.868 EAL: Requested device 0000:3d:01.0 cannot be used 00:27:37.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.868 EAL: Requested device 0000:3d:01.1 cannot be used 00:27:37.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.868 EAL: Requested device 0000:3d:01.2 cannot be used 00:27:37.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.868 EAL: Requested device 0000:3d:01.3 cannot be used 00:27:37.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.868 EAL: Requested device 0000:3d:01.4 cannot be used 00:27:37.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.868 EAL: Requested device 0000:3d:01.5 cannot be used 00:27:37.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.868 EAL: Requested device 0000:3d:01.6 cannot be used 00:27:37.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.868 EAL: Requested device 0000:3d:01.7 cannot be used 00:27:37.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.868 EAL: Requested device 0000:3d:02.0 cannot be used 00:27:37.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.868 EAL: Requested device 0000:3d:02.1 cannot be used 00:27:37.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.868 EAL: Requested device 0000:3d:02.2 cannot be used 00:27:37.868 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.868 EAL: Requested device 0000:3d:02.3 cannot be used 00:27:37.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.868 EAL: Requested device 0000:3d:02.4 cannot be used 00:27:37.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.868 EAL: Requested device 0000:3d:02.5 cannot be used 00:27:37.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.868 EAL: Requested device 0000:3d:02.6 cannot be used 00:27:37.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.868 EAL: Requested device 0000:3d:02.7 cannot be used 00:27:37.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.868 EAL: Requested device 0000:3f:01.0 cannot be used 00:27:37.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.868 EAL: Requested device 0000:3f:01.1 cannot be used 00:27:37.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.868 EAL: Requested device 0000:3f:01.2 cannot be used 00:27:37.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.868 EAL: Requested device 0000:3f:01.3 cannot be used 00:27:37.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.868 EAL: Requested device 0000:3f:01.4 cannot be used 00:27:37.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.868 EAL: Requested device 0000:3f:01.5 cannot be used 00:27:37.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.868 EAL: Requested device 0000:3f:01.6 cannot be used 00:27:37.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.868 EAL: Requested device 0000:3f:01.7 cannot be used 00:27:37.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.868 EAL: Requested device 0000:3f:02.0 cannot be used 00:27:37.868 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.868 EAL: Requested device 0000:3f:02.1 cannot be used 00:27:37.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.868 EAL: Requested device 0000:3f:02.2 cannot be used 00:27:37.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.868 EAL: Requested device 0000:3f:02.3 cannot be used 00:27:37.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.868 EAL: Requested device 0000:3f:02.4 cannot be used 00:27:37.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.868 EAL: Requested device 0000:3f:02.5 cannot be used 00:27:37.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.868 EAL: Requested device 0000:3f:02.6 cannot be used 00:27:37.868 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:27:37.868 EAL: Requested device 0000:3f:02.7 cannot be used 00:27:38.128 [2024-07-25 11:09:45.013502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.386 [2024-07-25 11:09:45.295335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.645 [2024-07-25 11:09:45.638565] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:38.645 [2024-07-25 11:09:45.638620] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:38.903 11:09:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:38.903 11:09:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:27:38.904 11:09:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:27:38.904 11:09:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:39.162 BaseBdev1_malloc 00:27:39.162 
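The malloc base bdev created above is 32 MiB with 512-byte blocks (`bdev_malloc_create 32 512`). A hedged back-of-envelope check of the block counts this superblock variant reports — the 2048-block `data_offset` is taken from the raid bdev JSON later in this log, and the variable names here are illustrative:

```shell
# Arithmetic sketch (assumptions: 32 MiB malloc bdevs, 512-byte blocks,
# 2048 blocks reserved at the front of each base bdev for the raid superblock).
malloc_mib=32
block_size=512
data_offset=2048   # blocks reserved for the superblock

total_blocks=$(( malloc_mib * 1024 * 1024 / block_size ))
data_size=$(( total_blocks - data_offset ))
echo "total_blocks=$total_blocks data_size=$data_size"
```

This yields `total_blocks=65536 data_size=63488`, matching the `blockcnt 63488` and per-bdev `"data_size": 63488` values the test prints, versus `data_offset 0` / `data_size 65536` in the earlier non-superblock run.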
11:09:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:39.420 [2024-07-25 11:09:46.284090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:39.420 [2024-07-25 11:09:46.284165] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:39.421 [2024-07-25 11:09:46.284196] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:27:39.421 [2024-07-25 11:09:46.284215] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:39.421 [2024-07-25 11:09:46.286939] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:39.421 [2024-07-25 11:09:46.286976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:39.421 BaseBdev1 00:27:39.421 11:09:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:27:39.421 11:09:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:39.679 BaseBdev2_malloc 00:27:39.679 11:09:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:39.679 [2024-07-25 11:09:46.788924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:39.679 [2024-07-25 11:09:46.788986] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:39.679 [2024-07-25 11:09:46.789013] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:27:39.679 [2024-07-25 11:09:46.789042] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:39.679 [2024-07-25 11:09:46.791761] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:39.679 [2024-07-25 11:09:46.791797] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:39.679 BaseBdev2 00:27:39.938 11:09:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:27:40.198 spare_malloc 00:27:40.198 11:09:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:40.198 spare_delay 00:27:40.198 11:09:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:40.457 [2024-07-25 11:09:47.503669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:40.457 [2024-07-25 11:09:47.503721] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:40.457 [2024-07-25 11:09:47.503745] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:27:40.457 [2024-07-25 11:09:47.503763] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:40.457 [2024-07-25 11:09:47.506467] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:40.457 [2024-07-25 11:09:47.506510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:40.457 spare 00:27:40.457 11:09:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:27:40.715 [2024-07-25 11:09:47.720288] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:40.715 [2024-07-25 11:09:47.722560] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:40.715 [2024-07-25 11:09:47.722736] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:27:40.715 [2024-07-25 11:09:47.722756] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:40.715 [2024-07-25 11:09:47.723101] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:27:40.715 [2024-07-25 11:09:47.723347] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:27:40.715 [2024-07-25 11:09:47.723369] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:27:40.715 [2024-07-25 11:09:47.723561] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:40.715 11:09:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:40.715 11:09:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:40.715 11:09:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:40.715 11:09:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:40.715 11:09:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:40.715 11:09:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:40.716 11:09:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:40.716 11:09:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:27:40.716 11:09:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:40.716 11:09:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:40.716 11:09:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:40.716 11:09:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:40.998 11:09:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:40.998 "name": "raid_bdev1", 00:27:40.998 "uuid": "892f74c5-9404-4eea-a68c-68b88b62e0b9", 00:27:40.998 "strip_size_kb": 0, 00:27:40.998 "state": "online", 00:27:40.998 "raid_level": "raid1", 00:27:40.998 "superblock": true, 00:27:40.998 "num_base_bdevs": 2, 00:27:40.998 "num_base_bdevs_discovered": 2, 00:27:40.998 "num_base_bdevs_operational": 2, 00:27:40.998 "base_bdevs_list": [ 00:27:40.998 { 00:27:40.999 "name": "BaseBdev1", 00:27:40.999 "uuid": "180427b3-18cc-5ae9-a908-bd43137592d7", 00:27:40.999 "is_configured": true, 00:27:40.999 "data_offset": 2048, 00:27:40.999 "data_size": 63488 00:27:40.999 }, 00:27:40.999 { 00:27:40.999 "name": "BaseBdev2", 00:27:40.999 "uuid": "61035fd6-f3d7-5fb3-b547-0b4ecfee8b12", 00:27:40.999 "is_configured": true, 00:27:40.999 "data_offset": 2048, 00:27:40.999 "data_size": 63488 00:27:40.999 } 00:27:40.999 ] 00:27:40.999 }' 00:27:40.999 11:09:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:40.999 11:09:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:41.566 11:09:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:41.566 11:09:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # jq -r 
'.[].num_blocks' 00:27:41.826 [2024-07-25 11:09:48.723389] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:41.826 11:09:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488 00:27:41.826 11:09:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:41.826 11:09:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:41.826 11:09:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:27:41.826 11:09:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:27:41.826 11:09:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:27:41.826 11:09:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:27:41.826 11:09:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:27:41.826 11:09:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:41.826 11:09:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:27:41.826 11:09:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:41.826 11:09:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:41.826 11:09:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:41.826 11:09:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:27:41.826 11:09:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:41.826 11:09:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:41.826 11:09:48 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:27:42.085 [2024-07-25 11:09:49.128202] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:27:42.085 /dev/nbd0 00:27:42.085 11:09:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:42.085 11:09:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:42.085 11:09:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:27:42.085 11:09:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:27:42.085 11:09:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:42.085 11:09:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:42.085 11:09:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:27:42.085 11:09:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:27:42.085 11:09:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:42.085 11:09:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:42.085 11:09:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:42.085 1+0 records in 00:27:42.085 1+0 records out 00:27:42.085 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000165048 s, 24.8 MB/s 00:27:42.085 11:09:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:42.085 11:09:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:27:42.085 11:09:49 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:42.085 11:09:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:42.085 11:09:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:27:42.085 11:09:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:42.085 11:09:49 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:42.085 11:09:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:27:42.085 11:09:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:27:42.085 11:09:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:27:48.655 63488+0 records in 00:27:48.655 63488+0 records out 00:27:48.655 32505856 bytes (33 MB, 31 MiB) copied, 5.98625 s, 5.4 MB/s 00:27:48.655 11:09:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:27:48.655 11:09:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:48.655 11:09:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:48.655 11:09:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:48.655 11:09:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:27:48.655 11:09:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:48.655 11:09:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:48.655 11:09:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 
00:27:48.655 [2024-07-25 11:09:55.417743] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:48.655 11:09:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:48.655 11:09:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:48.655 11:09:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:48.655 11:09:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:48.655 11:09:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:48.655 11:09:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:27:48.655 11:09:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:27:48.655 11:09:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:27:48.655 [2024-07-25 11:09:55.630430] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:48.655 11:09:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:48.655 11:09:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:48.655 11:09:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:48.655 11:09:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:48.655 11:09:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:48.655 11:09:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:48.655 11:09:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:48.655 11:09:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:27:48.655 11:09:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:48.655 11:09:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:48.655 11:09:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:48.655 11:09:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:48.914 11:09:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:48.914 "name": "raid_bdev1", 00:27:48.914 "uuid": "892f74c5-9404-4eea-a68c-68b88b62e0b9", 00:27:48.914 "strip_size_kb": 0, 00:27:48.914 "state": "online", 00:27:48.914 "raid_level": "raid1", 00:27:48.914 "superblock": true, 00:27:48.914 "num_base_bdevs": 2, 00:27:48.914 "num_base_bdevs_discovered": 1, 00:27:48.914 "num_base_bdevs_operational": 1, 00:27:48.914 "base_bdevs_list": [ 00:27:48.914 { 00:27:48.914 "name": null, 00:27:48.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:48.914 "is_configured": false, 00:27:48.914 "data_offset": 2048, 00:27:48.914 "data_size": 63488 00:27:48.914 }, 00:27:48.914 { 00:27:48.914 "name": "BaseBdev2", 00:27:48.914 "uuid": "61035fd6-f3d7-5fb3-b547-0b4ecfee8b12", 00:27:48.914 "is_configured": true, 00:27:48.914 "data_offset": 2048, 00:27:48.914 "data_size": 63488 00:27:48.914 } 00:27:48.914 ] 00:27:48.914 }' 00:27:48.914 11:09:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:48.914 11:09:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:49.483 11:09:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:49.742 [2024-07-25 11:09:56.677276] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:49.742 [2024-07-25 11:09:56.704294] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caaba0 00:27:49.742 [2024-07-25 11:09:56.706638] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:49.742 11:09:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:50.743 11:09:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:50.743 11:09:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:50.743 11:09:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:50.743 11:09:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:50.743 11:09:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:50.743 11:09:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:50.743 11:09:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:51.002 11:09:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:51.002 "name": "raid_bdev1", 00:27:51.002 "uuid": "892f74c5-9404-4eea-a68c-68b88b62e0b9", 00:27:51.002 "strip_size_kb": 0, 00:27:51.002 "state": "online", 00:27:51.002 "raid_level": "raid1", 00:27:51.002 "superblock": true, 00:27:51.002 "num_base_bdevs": 2, 00:27:51.002 "num_base_bdevs_discovered": 2, 00:27:51.002 "num_base_bdevs_operational": 2, 00:27:51.002 "process": { 00:27:51.002 "type": "rebuild", 00:27:51.002 "target": "spare", 00:27:51.002 "progress": { 00:27:51.002 "blocks": 24576, 00:27:51.002 "percent": 38 00:27:51.002 } 00:27:51.002 }, 00:27:51.002 
"base_bdevs_list": [ 00:27:51.002 { 00:27:51.002 "name": "spare", 00:27:51.002 "uuid": "83c72fec-c2bf-538b-8663-ee1494bc4a26", 00:27:51.002 "is_configured": true, 00:27:51.002 "data_offset": 2048, 00:27:51.002 "data_size": 63488 00:27:51.002 }, 00:27:51.002 { 00:27:51.002 "name": "BaseBdev2", 00:27:51.002 "uuid": "61035fd6-f3d7-5fb3-b547-0b4ecfee8b12", 00:27:51.002 "is_configured": true, 00:27:51.002 "data_offset": 2048, 00:27:51.002 "data_size": 63488 00:27:51.002 } 00:27:51.002 ] 00:27:51.002 }' 00:27:51.002 11:09:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:51.002 11:09:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:51.002 11:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:51.002 11:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:51.002 11:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:51.262 [2024-07-25 11:09:58.195817] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:51.262 [2024-07-25 11:09:58.218741] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:51.262 [2024-07-25 11:09:58.218816] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:51.262 [2024-07-25 11:09:58.218839] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:51.262 [2024-07-25 11:09:58.218855] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:51.262 11:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:51.262 11:09:58 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:51.262 11:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:51.262 11:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:51.262 11:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:51.262 11:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:51.262 11:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:51.262 11:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:51.262 11:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:51.262 11:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:51.262 11:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:51.262 11:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:51.521 11:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:51.521 "name": "raid_bdev1", 00:27:51.521 "uuid": "892f74c5-9404-4eea-a68c-68b88b62e0b9", 00:27:51.521 "strip_size_kb": 0, 00:27:51.521 "state": "online", 00:27:51.521 "raid_level": "raid1", 00:27:51.521 "superblock": true, 00:27:51.521 "num_base_bdevs": 2, 00:27:51.521 "num_base_bdevs_discovered": 1, 00:27:51.521 "num_base_bdevs_operational": 1, 00:27:51.521 "base_bdevs_list": [ 00:27:51.521 { 00:27:51.521 "name": null, 00:27:51.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:51.521 "is_configured": false, 00:27:51.521 "data_offset": 2048, 00:27:51.521 "data_size": 63488 00:27:51.521 }, 00:27:51.521 { 00:27:51.521 "name": "BaseBdev2", 
00:27:51.521 "uuid": "61035fd6-f3d7-5fb3-b547-0b4ecfee8b12", 00:27:51.521 "is_configured": true, 00:27:51.521 "data_offset": 2048, 00:27:51.521 "data_size": 63488 00:27:51.521 } 00:27:51.521 ] 00:27:51.521 }' 00:27:51.521 11:09:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:51.521 11:09:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:52.089 11:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:52.089 11:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:52.089 11:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:52.089 11:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:52.089 11:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:52.089 11:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:52.089 11:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:52.089 11:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:52.089 "name": "raid_bdev1", 00:27:52.089 "uuid": "892f74c5-9404-4eea-a68c-68b88b62e0b9", 00:27:52.089 "strip_size_kb": 0, 00:27:52.089 "state": "online", 00:27:52.089 "raid_level": "raid1", 00:27:52.089 "superblock": true, 00:27:52.089 "num_base_bdevs": 2, 00:27:52.089 "num_base_bdevs_discovered": 1, 00:27:52.089 "num_base_bdevs_operational": 1, 00:27:52.089 "base_bdevs_list": [ 00:27:52.089 { 00:27:52.089 "name": null, 00:27:52.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:52.089 "is_configured": false, 00:27:52.089 "data_offset": 2048, 00:27:52.089 "data_size": 63488 00:27:52.089 }, 
00:27:52.089 { 00:27:52.089 "name": "BaseBdev2", 00:27:52.089 "uuid": "61035fd6-f3d7-5fb3-b547-0b4ecfee8b12", 00:27:52.089 "is_configured": true, 00:27:52.089 "data_offset": 2048, 00:27:52.089 "data_size": 63488 00:27:52.089 } 00:27:52.089 ] 00:27:52.089 }' 00:27:52.089 11:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:52.347 11:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:52.347 11:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:52.347 11:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:52.347 11:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:52.606 [2024-07-25 11:09:59.474082] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:52.606 [2024-07-25 11:09:59.499662] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caac70 00:27:52.606 [2024-07-25 11:09:59.501984] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:52.606 11:09:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@678 -- # sleep 1 00:27:53.542 11:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:53.542 11:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:53.542 11:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:53.542 11:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:53.542 11:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:53.542 11:10:00 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:53.542 11:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:53.800 11:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:53.800 "name": "raid_bdev1", 00:27:53.800 "uuid": "892f74c5-9404-4eea-a68c-68b88b62e0b9", 00:27:53.800 "strip_size_kb": 0, 00:27:53.800 "state": "online", 00:27:53.800 "raid_level": "raid1", 00:27:53.800 "superblock": true, 00:27:53.800 "num_base_bdevs": 2, 00:27:53.800 "num_base_bdevs_discovered": 2, 00:27:53.800 "num_base_bdevs_operational": 2, 00:27:53.800 "process": { 00:27:53.800 "type": "rebuild", 00:27:53.801 "target": "spare", 00:27:53.801 "progress": { 00:27:53.801 "blocks": 24576, 00:27:53.801 "percent": 38 00:27:53.801 } 00:27:53.801 }, 00:27:53.801 "base_bdevs_list": [ 00:27:53.801 { 00:27:53.801 "name": "spare", 00:27:53.801 "uuid": "83c72fec-c2bf-538b-8663-ee1494bc4a26", 00:27:53.801 "is_configured": true, 00:27:53.801 "data_offset": 2048, 00:27:53.801 "data_size": 63488 00:27:53.801 }, 00:27:53.801 { 00:27:53.801 "name": "BaseBdev2", 00:27:53.801 "uuid": "61035fd6-f3d7-5fb3-b547-0b4ecfee8b12", 00:27:53.801 "is_configured": true, 00:27:53.801 "data_offset": 2048, 00:27:53.801 "data_size": 63488 00:27:53.801 } 00:27:53.801 ] 00:27:53.801 }' 00:27:53.801 11:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:53.801 11:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:53.801 11:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:53.801 11:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:53.801 11:10:00 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:27:53.801 11:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:27:53.801 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:27:53.801 11:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:27:53.801 11:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:27:53.801 11:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:27:53.801 11:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # local timeout=884 00:27:53.801 11:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:53.801 11:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:53.801 11:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:53.801 11:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:53.801 11:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:53.801 11:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:53.801 11:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:53.801 11:10:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:54.060 11:10:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:54.060 "name": "raid_bdev1", 00:27:54.060 "uuid": "892f74c5-9404-4eea-a68c-68b88b62e0b9", 00:27:54.060 "strip_size_kb": 0, 00:27:54.060 "state": "online", 00:27:54.060 "raid_level": "raid1", 00:27:54.060 
"superblock": true, 00:27:54.060 "num_base_bdevs": 2, 00:27:54.060 "num_base_bdevs_discovered": 2, 00:27:54.060 "num_base_bdevs_operational": 2, 00:27:54.060 "process": { 00:27:54.060 "type": "rebuild", 00:27:54.060 "target": "spare", 00:27:54.060 "progress": { 00:27:54.060 "blocks": 30720, 00:27:54.060 "percent": 48 00:27:54.060 } 00:27:54.060 }, 00:27:54.060 "base_bdevs_list": [ 00:27:54.060 { 00:27:54.060 "name": "spare", 00:27:54.060 "uuid": "83c72fec-c2bf-538b-8663-ee1494bc4a26", 00:27:54.060 "is_configured": true, 00:27:54.060 "data_offset": 2048, 00:27:54.060 "data_size": 63488 00:27:54.060 }, 00:27:54.060 { 00:27:54.060 "name": "BaseBdev2", 00:27:54.060 "uuid": "61035fd6-f3d7-5fb3-b547-0b4ecfee8b12", 00:27:54.060 "is_configured": true, 00:27:54.060 "data_offset": 2048, 00:27:54.060 "data_size": 63488 00:27:54.060 } 00:27:54.060 ] 00:27:54.060 }' 00:27:54.060 11:10:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:54.060 11:10:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:54.060 11:10:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:54.060 11:10:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:54.060 11:10:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:27:55.438 11:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:55.438 11:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:55.438 11:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:55.438 11:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:55.438 11:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:55.438 
11:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:55.438 11:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:55.438 11:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:55.438 11:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:55.438 "name": "raid_bdev1", 00:27:55.438 "uuid": "892f74c5-9404-4eea-a68c-68b88b62e0b9", 00:27:55.438 "strip_size_kb": 0, 00:27:55.438 "state": "online", 00:27:55.438 "raid_level": "raid1", 00:27:55.438 "superblock": true, 00:27:55.438 "num_base_bdevs": 2, 00:27:55.438 "num_base_bdevs_discovered": 2, 00:27:55.438 "num_base_bdevs_operational": 2, 00:27:55.438 "process": { 00:27:55.438 "type": "rebuild", 00:27:55.438 "target": "spare", 00:27:55.438 "progress": { 00:27:55.438 "blocks": 57344, 00:27:55.438 "percent": 90 00:27:55.438 } 00:27:55.438 }, 00:27:55.438 "base_bdevs_list": [ 00:27:55.438 { 00:27:55.438 "name": "spare", 00:27:55.438 "uuid": "83c72fec-c2bf-538b-8663-ee1494bc4a26", 00:27:55.438 "is_configured": true, 00:27:55.438 "data_offset": 2048, 00:27:55.438 "data_size": 63488 00:27:55.438 }, 00:27:55.438 { 00:27:55.438 "name": "BaseBdev2", 00:27:55.438 "uuid": "61035fd6-f3d7-5fb3-b547-0b4ecfee8b12", 00:27:55.438 "is_configured": true, 00:27:55.438 "data_offset": 2048, 00:27:55.438 "data_size": 63488 00:27:55.438 } 00:27:55.438 ] 00:27:55.438 }' 00:27:55.438 11:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:55.438 11:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:55.438 11:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:55.438 11:10:02 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:55.438 11:10:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:27:55.697 [2024-07-25 11:10:02.627015] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:55.697 [2024-07-25 11:10:02.627093] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:55.697 [2024-07-25 11:10:02.627202] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:56.634 11:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:27:56.634 11:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:56.634 11:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:56.634 11:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:56.634 11:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:56.634 11:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:56.634 11:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:56.634 11:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:56.634 11:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:56.634 "name": "raid_bdev1", 00:27:56.634 "uuid": "892f74c5-9404-4eea-a68c-68b88b62e0b9", 00:27:56.634 "strip_size_kb": 0, 00:27:56.634 "state": "online", 00:27:56.634 "raid_level": "raid1", 00:27:56.634 "superblock": true, 00:27:56.634 "num_base_bdevs": 2, 00:27:56.634 "num_base_bdevs_discovered": 2, 00:27:56.634 "num_base_bdevs_operational": 2, 00:27:56.634 
"base_bdevs_list": [ 00:27:56.634 { 00:27:56.634 "name": "spare", 00:27:56.634 "uuid": "83c72fec-c2bf-538b-8663-ee1494bc4a26", 00:27:56.634 "is_configured": true, 00:27:56.634 "data_offset": 2048, 00:27:56.634 "data_size": 63488 00:27:56.634 }, 00:27:56.634 { 00:27:56.634 "name": "BaseBdev2", 00:27:56.634 "uuid": "61035fd6-f3d7-5fb3-b547-0b4ecfee8b12", 00:27:56.634 "is_configured": true, 00:27:56.634 "data_offset": 2048, 00:27:56.634 "data_size": 63488 00:27:56.634 } 00:27:56.634 ] 00:27:56.634 }' 00:27:56.634 11:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:56.893 11:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:56.893 11:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:56.893 11:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:27:56.893 11:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@724 -- # break 00:27:56.893 11:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:56.893 11:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:56.893 11:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:56.893 11:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:56.893 11:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:56.893 11:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:56.893 11:10:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:57.153 11:10:04 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:57.153 "name": "raid_bdev1", 00:27:57.153 "uuid": "892f74c5-9404-4eea-a68c-68b88b62e0b9", 00:27:57.153 "strip_size_kb": 0, 00:27:57.153 "state": "online", 00:27:57.153 "raid_level": "raid1", 00:27:57.153 "superblock": true, 00:27:57.153 "num_base_bdevs": 2, 00:27:57.153 "num_base_bdevs_discovered": 2, 00:27:57.153 "num_base_bdevs_operational": 2, 00:27:57.153 "base_bdevs_list": [ 00:27:57.153 { 00:27:57.153 "name": "spare", 00:27:57.153 "uuid": "83c72fec-c2bf-538b-8663-ee1494bc4a26", 00:27:57.153 "is_configured": true, 00:27:57.153 "data_offset": 2048, 00:27:57.153 "data_size": 63488 00:27:57.153 }, 00:27:57.153 { 00:27:57.153 "name": "BaseBdev2", 00:27:57.153 "uuid": "61035fd6-f3d7-5fb3-b547-0b4ecfee8b12", 00:27:57.153 "is_configured": true, 00:27:57.153 "data_offset": 2048, 00:27:57.153 "data_size": 63488 00:27:57.153 } 00:27:57.153 ] 00:27:57.153 }' 00:27:57.153 11:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:57.153 11:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:57.153 11:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:57.153 11:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:57.153 11:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:57.153 11:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:57.153 11:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:57.153 11:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:57.153 11:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:57.153 11:10:04 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:57.153 11:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:57.153 11:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:57.153 11:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:57.153 11:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:57.153 11:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:57.153 11:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:57.411 11:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:57.411 "name": "raid_bdev1", 00:27:57.411 "uuid": "892f74c5-9404-4eea-a68c-68b88b62e0b9", 00:27:57.411 "strip_size_kb": 0, 00:27:57.411 "state": "online", 00:27:57.411 "raid_level": "raid1", 00:27:57.411 "superblock": true, 00:27:57.411 "num_base_bdevs": 2, 00:27:57.411 "num_base_bdevs_discovered": 2, 00:27:57.411 "num_base_bdevs_operational": 2, 00:27:57.411 "base_bdevs_list": [ 00:27:57.411 { 00:27:57.411 "name": "spare", 00:27:57.411 "uuid": "83c72fec-c2bf-538b-8663-ee1494bc4a26", 00:27:57.411 "is_configured": true, 00:27:57.411 "data_offset": 2048, 00:27:57.411 "data_size": 63488 00:27:57.411 }, 00:27:57.411 { 00:27:57.411 "name": "BaseBdev2", 00:27:57.411 "uuid": "61035fd6-f3d7-5fb3-b547-0b4ecfee8b12", 00:27:57.411 "is_configured": true, 00:27:57.411 "data_offset": 2048, 00:27:57.411 "data_size": 63488 00:27:57.411 } 00:27:57.411 ] 00:27:57.411 }' 00:27:57.411 11:10:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:57.411 11:10:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:57.979 11:10:04 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:58.238 [2024-07-25 11:10:05.135575] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:58.238 [2024-07-25 11:10:05.135613] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:58.238 [2024-07-25 11:10:05.135714] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:58.238 [2024-07-25 11:10:05.135796] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:58.238 [2024-07-25 11:10:05.135817] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:27:58.238 11:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:58.238 11:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # jq length 00:27:58.497 11:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:27:58.497 11:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:27:58.497 11:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:27:58.497 11:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:27:58.497 11:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:58.497 11:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:27:58.497 11:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:58.497 11:10:05 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:58.497 11:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:58.497 11:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:27:58.497 11:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:58.497 11:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:58.497 11:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:27:58.756 /dev/nbd0 00:27:58.756 11:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:58.756 11:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:58.756 11:10:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:27:58.756 11:10:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:27:58.756 11:10:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:58.756 11:10:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:58.756 11:10:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:27:58.756 11:10:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:27:58.756 11:10:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:58.756 11:10:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:58.756 11:10:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:58.756 1+0 records in 00:27:58.756 1+0 records out 00:27:58.756 
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264855 s, 15.5 MB/s 00:27:58.756 11:10:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:58.756 11:10:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:27:58.756 11:10:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:58.756 11:10:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:58.756 11:10:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:27:58.756 11:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:58.756 11:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:58.756 11:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:27:59.015 /dev/nbd1 00:27:59.015 11:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:59.015 11:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:59.015 11:10:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:27:59.015 11:10:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:27:59.015 11:10:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:27:59.015 11:10:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:27:59.015 11:10:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:27:59.015 11:10:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:27:59.015 11:10:05 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:27:59.015 11:10:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:27:59.015 11:10:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:59.015 1+0 records in 00:27:59.015 1+0 records out 00:27:59.015 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313654 s, 13.1 MB/s 00:27:59.015 11:10:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:59.015 11:10:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:27:59.015 11:10:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:27:59.015 11:10:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:27:59.015 11:10:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:27:59.015 11:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:59.015 11:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:59.015 11:10:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:27:59.274 11:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:27:59.274 11:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:59.274 11:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:59.274 11:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:59.274 11:10:06 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:27:59.274 11:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:59.274 11:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:59.533 11:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:59.533 11:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:59.533 11:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:59.533 11:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:59.533 11:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:59.533 11:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:59.533 11:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:27:59.533 11:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:27:59.533 11:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:59.533 11:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:59.533 11:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:59.533 11:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:59.533 11:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:59.533 11:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:59.533 11:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:59.533 
11:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:59.533 11:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:27:59.533 11:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:27:59.533 11:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:27:59.533 11:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:59.792 11:10:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:00.052 [2024-07-25 11:10:07.055971] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:00.052 [2024-07-25 11:10:07.056065] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:00.052 [2024-07-25 11:10:07.056099] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042f80 00:28:00.052 [2024-07-25 11:10:07.056116] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:00.052 [2024-07-25 11:10:07.058995] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:00.052 [2024-07-25 11:10:07.059030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:00.052 [2024-07-25 11:10:07.059154] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:00.052 [2024-07-25 11:10:07.059229] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:00.052 [2024-07-25 11:10:07.059455] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:00.052 spare 00:28:00.052 11:10:07 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:00.052 11:10:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:00.052 11:10:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:00.052 11:10:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:00.052 11:10:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:00.052 11:10:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:00.052 11:10:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:00.052 11:10:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:00.052 11:10:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:00.052 11:10:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:00.052 11:10:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:00.052 11:10:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:00.052 [2024-07-25 11:10:07.159807] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:28:00.052 [2024-07-25 11:10:07.159845] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:00.052 [2024-07-25 11:10:07.160230] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc9320 00:28:00.052 [2024-07-25 11:10:07.160532] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:28:00.052 [2024-07-25 11:10:07.160548] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
raid_bdev1, raid_bdev 0x617000007b00 00:28:00.052 [2024-07-25 11:10:07.160790] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:00.311 11:10:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:00.311 "name": "raid_bdev1", 00:28:00.311 "uuid": "892f74c5-9404-4eea-a68c-68b88b62e0b9", 00:28:00.311 "strip_size_kb": 0, 00:28:00.311 "state": "online", 00:28:00.311 "raid_level": "raid1", 00:28:00.311 "superblock": true, 00:28:00.311 "num_base_bdevs": 2, 00:28:00.311 "num_base_bdevs_discovered": 2, 00:28:00.311 "num_base_bdevs_operational": 2, 00:28:00.311 "base_bdevs_list": [ 00:28:00.311 { 00:28:00.311 "name": "spare", 00:28:00.311 "uuid": "83c72fec-c2bf-538b-8663-ee1494bc4a26", 00:28:00.311 "is_configured": true, 00:28:00.311 "data_offset": 2048, 00:28:00.311 "data_size": 63488 00:28:00.311 }, 00:28:00.311 { 00:28:00.311 "name": "BaseBdev2", 00:28:00.311 "uuid": "61035fd6-f3d7-5fb3-b547-0b4ecfee8b12", 00:28:00.311 "is_configured": true, 00:28:00.311 "data_offset": 2048, 00:28:00.311 "data_size": 63488 00:28:00.311 } 00:28:00.311 ] 00:28:00.311 }' 00:28:00.311 11:10:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:00.311 11:10:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:00.878 11:10:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:00.878 11:10:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:00.878 11:10:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:00.878 11:10:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:00.878 11:10:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:00.878 11:10:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:28:00.878 11:10:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:01.137 11:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:01.137 "name": "raid_bdev1", 00:28:01.137 "uuid": "892f74c5-9404-4eea-a68c-68b88b62e0b9", 00:28:01.137 "strip_size_kb": 0, 00:28:01.137 "state": "online", 00:28:01.137 "raid_level": "raid1", 00:28:01.137 "superblock": true, 00:28:01.137 "num_base_bdevs": 2, 00:28:01.137 "num_base_bdevs_discovered": 2, 00:28:01.137 "num_base_bdevs_operational": 2, 00:28:01.137 "base_bdevs_list": [ 00:28:01.137 { 00:28:01.137 "name": "spare", 00:28:01.137 "uuid": "83c72fec-c2bf-538b-8663-ee1494bc4a26", 00:28:01.137 "is_configured": true, 00:28:01.137 "data_offset": 2048, 00:28:01.137 "data_size": 63488 00:28:01.137 }, 00:28:01.137 { 00:28:01.137 "name": "BaseBdev2", 00:28:01.137 "uuid": "61035fd6-f3d7-5fb3-b547-0b4ecfee8b12", 00:28:01.137 "is_configured": true, 00:28:01.137 "data_offset": 2048, 00:28:01.137 "data_size": 63488 00:28:01.137 } 00:28:01.137 ] 00:28:01.137 }' 00:28:01.137 11:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:01.137 11:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:01.137 11:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:01.137 11:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:01.137 11:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:01.137 11:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:28:01.397 11:10:08 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:28:01.397 11:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:01.656 [2024-07-25 11:10:08.624791] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:01.656 11:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:01.656 11:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:01.656 11:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:01.656 11:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:01.656 11:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:01.656 11:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:01.656 11:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:01.656 11:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:01.656 11:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:01.656 11:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:01.656 11:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:01.656 11:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:01.916 11:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:01.916 "name": "raid_bdev1", 00:28:01.916 "uuid": "892f74c5-9404-4eea-a68c-68b88b62e0b9", 00:28:01.916 
"strip_size_kb": 0, 00:28:01.916 "state": "online", 00:28:01.916 "raid_level": "raid1", 00:28:01.916 "superblock": true, 00:28:01.916 "num_base_bdevs": 2, 00:28:01.916 "num_base_bdevs_discovered": 1, 00:28:01.916 "num_base_bdevs_operational": 1, 00:28:01.916 "base_bdevs_list": [ 00:28:01.916 { 00:28:01.916 "name": null, 00:28:01.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:01.916 "is_configured": false, 00:28:01.916 "data_offset": 2048, 00:28:01.916 "data_size": 63488 00:28:01.916 }, 00:28:01.916 { 00:28:01.916 "name": "BaseBdev2", 00:28:01.916 "uuid": "61035fd6-f3d7-5fb3-b547-0b4ecfee8b12", 00:28:01.916 "is_configured": true, 00:28:01.916 "data_offset": 2048, 00:28:01.916 "data_size": 63488 00:28:01.916 } 00:28:01.916 ] 00:28:01.916 }' 00:28:01.916 11:10:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:01.916 11:10:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:02.484 11:10:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:02.743 [2024-07-25 11:10:09.647568] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:02.743 [2024-07-25 11:10:09.647785] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:02.743 [2024-07-25 11:10:09.647812] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:28:02.743 [2024-07-25 11:10:09.647853] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:02.743 [2024-07-25 11:10:09.672650] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc93f0 00:28:02.743 [2024-07-25 11:10:09.674952] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:02.743 11:10:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # sleep 1 00:28:03.679 11:10:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:03.679 11:10:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:03.679 11:10:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:03.679 11:10:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:03.679 11:10:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:03.679 11:10:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:03.679 11:10:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:03.938 11:10:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:03.938 "name": "raid_bdev1", 00:28:03.938 "uuid": "892f74c5-9404-4eea-a68c-68b88b62e0b9", 00:28:03.938 "strip_size_kb": 0, 00:28:03.938 "state": "online", 00:28:03.938 "raid_level": "raid1", 00:28:03.938 "superblock": true, 00:28:03.938 "num_base_bdevs": 2, 00:28:03.938 "num_base_bdevs_discovered": 2, 00:28:03.938 "num_base_bdevs_operational": 2, 00:28:03.938 "process": { 00:28:03.938 "type": "rebuild", 00:28:03.938 "target": "spare", 00:28:03.938 "progress": { 00:28:03.938 "blocks": 24576, 00:28:03.938 "percent": 38 
00:28:03.938 } 00:28:03.938 }, 00:28:03.938 "base_bdevs_list": [ 00:28:03.938 { 00:28:03.938 "name": "spare", 00:28:03.938 "uuid": "83c72fec-c2bf-538b-8663-ee1494bc4a26", 00:28:03.938 "is_configured": true, 00:28:03.938 "data_offset": 2048, 00:28:03.938 "data_size": 63488 00:28:03.938 }, 00:28:03.938 { 00:28:03.938 "name": "BaseBdev2", 00:28:03.938 "uuid": "61035fd6-f3d7-5fb3-b547-0b4ecfee8b12", 00:28:03.938 "is_configured": true, 00:28:03.938 "data_offset": 2048, 00:28:03.938 "data_size": 63488 00:28:03.938 } 00:28:03.938 ] 00:28:03.938 }' 00:28:03.938 11:10:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:03.938 11:10:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:03.938 11:10:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:03.938 11:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:03.938 11:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:04.197 [2024-07-25 11:10:11.228429] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:04.197 [2024-07-25 11:10:11.287909] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:04.197 [2024-07-25 11:10:11.287976] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:04.197 [2024-07-25 11:10:11.287999] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:04.197 [2024-07-25 11:10:11.288014] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:04.457 11:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:04.457 11:10:11 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:04.457 11:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:04.457 11:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:04.457 11:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:04.457 11:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:04.457 11:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:04.457 11:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:04.457 11:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:04.457 11:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:04.457 11:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:04.457 11:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:04.716 11:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:04.716 "name": "raid_bdev1", 00:28:04.716 "uuid": "892f74c5-9404-4eea-a68c-68b88b62e0b9", 00:28:04.716 "strip_size_kb": 0, 00:28:04.716 "state": "online", 00:28:04.716 "raid_level": "raid1", 00:28:04.716 "superblock": true, 00:28:04.716 "num_base_bdevs": 2, 00:28:04.716 "num_base_bdevs_discovered": 1, 00:28:04.716 "num_base_bdevs_operational": 1, 00:28:04.716 "base_bdevs_list": [ 00:28:04.716 { 00:28:04.716 "name": null, 00:28:04.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:04.716 "is_configured": false, 00:28:04.716 "data_offset": 2048, 00:28:04.716 "data_size": 63488 00:28:04.716 }, 00:28:04.716 { 
00:28:04.716 "name": "BaseBdev2", 00:28:04.716 "uuid": "61035fd6-f3d7-5fb3-b547-0b4ecfee8b12", 00:28:04.716 "is_configured": true, 00:28:04.716 "data_offset": 2048, 00:28:04.716 "data_size": 63488 00:28:04.716 } 00:28:04.716 ] 00:28:04.716 }' 00:28:04.716 11:10:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:04.716 11:10:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:05.294 11:10:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:05.294 [2024-07-25 11:10:12.360013] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:05.294 [2024-07-25 11:10:12.360087] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:05.294 [2024-07-25 11:10:12.360116] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043880 00:28:05.294 [2024-07-25 11:10:12.360136] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:05.294 [2024-07-25 11:10:12.360776] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:05.294 [2024-07-25 11:10:12.360810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:05.294 [2024-07-25 11:10:12.360927] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:05.294 [2024-07-25 11:10:12.360953] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:05.294 [2024-07-25 11:10:12.360970] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:28:05.294 [2024-07-25 11:10:12.361001] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:05.294 [2024-07-25 11:10:12.386517] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc94c0 00:28:05.294 spare 00:28:05.294 [2024-07-25 11:10:12.388847] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:05.294 11:10:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # sleep 1 00:28:06.724 11:10:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:06.724 11:10:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:06.724 11:10:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:06.724 11:10:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:06.724 11:10:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:06.724 11:10:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:06.724 11:10:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:06.724 11:10:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:06.724 "name": "raid_bdev1", 00:28:06.724 "uuid": "892f74c5-9404-4eea-a68c-68b88b62e0b9", 00:28:06.724 "strip_size_kb": 0, 00:28:06.724 "state": "online", 00:28:06.724 "raid_level": "raid1", 00:28:06.724 "superblock": true, 00:28:06.724 "num_base_bdevs": 2, 00:28:06.724 "num_base_bdevs_discovered": 2, 00:28:06.724 "num_base_bdevs_operational": 2, 00:28:06.724 "process": { 00:28:06.724 "type": "rebuild", 00:28:06.724 "target": "spare", 00:28:06.724 "progress": { 00:28:06.724 "blocks": 24576, 00:28:06.724 
"percent": 38 00:28:06.724 } 00:28:06.724 }, 00:28:06.724 "base_bdevs_list": [ 00:28:06.724 { 00:28:06.724 "name": "spare", 00:28:06.724 "uuid": "83c72fec-c2bf-538b-8663-ee1494bc4a26", 00:28:06.724 "is_configured": true, 00:28:06.724 "data_offset": 2048, 00:28:06.724 "data_size": 63488 00:28:06.724 }, 00:28:06.724 { 00:28:06.724 "name": "BaseBdev2", 00:28:06.724 "uuid": "61035fd6-f3d7-5fb3-b547-0b4ecfee8b12", 00:28:06.724 "is_configured": true, 00:28:06.724 "data_offset": 2048, 00:28:06.724 "data_size": 63488 00:28:06.724 } 00:28:06.724 ] 00:28:06.724 }' 00:28:06.724 11:10:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:06.724 11:10:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:06.724 11:10:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:06.724 11:10:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:06.724 11:10:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:06.983 [2024-07-25 11:10:13.942318] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:06.983 [2024-07-25 11:10:14.001922] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:06.983 [2024-07-25 11:10:14.001983] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:06.983 [2024-07-25 11:10:14.002007] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:06.983 [2024-07-25 11:10:14.002020] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:06.983 11:10:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:06.983 
11:10:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:06.983 11:10:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:06.983 11:10:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:06.983 11:10:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:06.983 11:10:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:06.983 11:10:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:06.983 11:10:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:06.983 11:10:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:06.983 11:10:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:06.983 11:10:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:06.983 11:10:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:07.242 11:10:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:07.242 "name": "raid_bdev1", 00:28:07.242 "uuid": "892f74c5-9404-4eea-a68c-68b88b62e0b9", 00:28:07.242 "strip_size_kb": 0, 00:28:07.242 "state": "online", 00:28:07.242 "raid_level": "raid1", 00:28:07.242 "superblock": true, 00:28:07.242 "num_base_bdevs": 2, 00:28:07.242 "num_base_bdevs_discovered": 1, 00:28:07.242 "num_base_bdevs_operational": 1, 00:28:07.242 "base_bdevs_list": [ 00:28:07.242 { 00:28:07.242 "name": null, 00:28:07.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:07.242 "is_configured": false, 00:28:07.242 "data_offset": 2048, 00:28:07.242 "data_size": 63488 00:28:07.242 }, 
00:28:07.242 { 00:28:07.242 "name": "BaseBdev2", 00:28:07.242 "uuid": "61035fd6-f3d7-5fb3-b547-0b4ecfee8b12", 00:28:07.242 "is_configured": true, 00:28:07.242 "data_offset": 2048, 00:28:07.242 "data_size": 63488 00:28:07.242 } 00:28:07.242 ] 00:28:07.242 }' 00:28:07.242 11:10:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:07.242 11:10:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:07.808 11:10:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:07.808 11:10:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:07.808 11:10:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:07.808 11:10:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:07.808 11:10:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:07.808 11:10:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:07.808 11:10:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:08.066 11:10:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:08.067 "name": "raid_bdev1", 00:28:08.067 "uuid": "892f74c5-9404-4eea-a68c-68b88b62e0b9", 00:28:08.067 "strip_size_kb": 0, 00:28:08.067 "state": "online", 00:28:08.067 "raid_level": "raid1", 00:28:08.067 "superblock": true, 00:28:08.067 "num_base_bdevs": 2, 00:28:08.067 "num_base_bdevs_discovered": 1, 00:28:08.067 "num_base_bdevs_operational": 1, 00:28:08.067 "base_bdevs_list": [ 00:28:08.067 { 00:28:08.067 "name": null, 00:28:08.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:08.067 "is_configured": false, 00:28:08.067 "data_offset": 2048, 
00:28:08.067 "data_size": 63488 00:28:08.067 }, 00:28:08.067 { 00:28:08.067 "name": "BaseBdev2", 00:28:08.067 "uuid": "61035fd6-f3d7-5fb3-b547-0b4ecfee8b12", 00:28:08.067 "is_configured": true, 00:28:08.067 "data_offset": 2048, 00:28:08.067 "data_size": 63488 00:28:08.067 } 00:28:08.067 ] 00:28:08.067 }' 00:28:08.067 11:10:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:08.067 11:10:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:08.067 11:10:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:08.067 11:10:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:08.067 11:10:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@787 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:28:08.325 11:10:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@788 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:08.584 [2024-07-25 11:10:15.616456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:08.584 [2024-07-25 11:10:15.616541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:08.584 [2024-07-25 11:10:15.616575] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043e80 00:28:08.584 [2024-07-25 11:10:15.616593] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:08.584 [2024-07-25 11:10:15.617215] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:08.584 [2024-07-25 11:10:15.617245] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:08.584 [2024-07-25 11:10:15.617351] 
bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:28:08.584 [2024-07-25 11:10:15.617372] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:08.584 [2024-07-25 11:10:15.617389] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:08.584 BaseBdev1 00:28:08.584 11:10:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@789 -- # sleep 1 00:28:09.521 11:10:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:09.521 11:10:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:09.779 11:10:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:09.779 11:10:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:09.779 11:10:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:09.779 11:10:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:09.779 11:10:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:09.779 11:10:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:09.779 11:10:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:09.779 11:10:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:09.779 11:10:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:09.779 11:10:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:09.779 11:10:16 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:09.779 "name": "raid_bdev1", 00:28:09.779 "uuid": "892f74c5-9404-4eea-a68c-68b88b62e0b9", 00:28:09.779 "strip_size_kb": 0, 00:28:09.779 "state": "online", 00:28:09.779 "raid_level": "raid1", 00:28:09.779 "superblock": true, 00:28:09.779 "num_base_bdevs": 2, 00:28:09.779 "num_base_bdevs_discovered": 1, 00:28:09.779 "num_base_bdevs_operational": 1, 00:28:09.779 "base_bdevs_list": [ 00:28:09.779 { 00:28:09.779 "name": null, 00:28:09.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:09.779 "is_configured": false, 00:28:09.779 "data_offset": 2048, 00:28:09.779 "data_size": 63488 00:28:09.779 }, 00:28:09.779 { 00:28:09.779 "name": "BaseBdev2", 00:28:09.779 "uuid": "61035fd6-f3d7-5fb3-b547-0b4ecfee8b12", 00:28:09.779 "is_configured": true, 00:28:09.779 "data_offset": 2048, 00:28:09.779 "data_size": 63488 00:28:09.779 } 00:28:09.779 ] 00:28:09.779 }' 00:28:09.779 11:10:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:09.779 11:10:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:10.346 11:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:10.346 11:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:10.346 11:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:10.346 11:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:10.346 11:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:10.346 11:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:10.346 11:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:28:10.605 11:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:10.605 "name": "raid_bdev1", 00:28:10.605 "uuid": "892f74c5-9404-4eea-a68c-68b88b62e0b9", 00:28:10.605 "strip_size_kb": 0, 00:28:10.605 "state": "online", 00:28:10.605 "raid_level": "raid1", 00:28:10.605 "superblock": true, 00:28:10.605 "num_base_bdevs": 2, 00:28:10.605 "num_base_bdevs_discovered": 1, 00:28:10.605 "num_base_bdevs_operational": 1, 00:28:10.605 "base_bdevs_list": [ 00:28:10.605 { 00:28:10.605 "name": null, 00:28:10.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:10.605 "is_configured": false, 00:28:10.605 "data_offset": 2048, 00:28:10.605 "data_size": 63488 00:28:10.605 }, 00:28:10.605 { 00:28:10.605 "name": "BaseBdev2", 00:28:10.605 "uuid": "61035fd6-f3d7-5fb3-b547-0b4ecfee8b12", 00:28:10.605 "is_configured": true, 00:28:10.605 "data_offset": 2048, 00:28:10.605 "data_size": 63488 00:28:10.605 } 00:28:10.605 ] 00:28:10.605 }' 00:28:10.605 11:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:10.605 11:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:10.605 11:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:10.863 11:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:10.863 11:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@792 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:10.863 11:10:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:28:10.863 11:10:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 
BaseBdev1 00:28:10.863 11:10:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:10.863 11:10:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:10.863 11:10:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:10.863 11:10:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:10.863 11:10:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:10.863 11:10:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:10.863 11:10:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:28:10.863 11:10:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:28:10.863 11:10:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:10.863 [2024-07-25 11:10:17.970829] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:10.863 [2024-07-25 11:10:17.971005] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:10.863 [2024-07-25 11:10:17.971026] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:10.863 request: 00:28:10.863 { 00:28:10.863 "base_bdev": "BaseBdev1", 00:28:10.863 "raid_bdev": "raid_bdev1", 00:28:10.863 "method": 
"bdev_raid_add_base_bdev", 00:28:10.863 "req_id": 1 00:28:10.863 } 00:28:10.863 Got JSON-RPC error response 00:28:10.863 response: 00:28:10.863 { 00:28:10.863 "code": -22, 00:28:10.863 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:28:10.863 } 00:28:11.121 11:10:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:28:11.121 11:10:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:11.121 11:10:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:11.121 11:10:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:11.121 11:10:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@793 -- # sleep 1 00:28:12.075 11:10:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:12.075 11:10:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:12.075 11:10:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:12.075 11:10:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:12.076 11:10:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:12.076 11:10:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:12.076 11:10:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:12.076 11:10:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:12.076 11:10:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:12.076 11:10:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:12.076 11:10:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:12.076 11:10:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:12.334 11:10:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:12.334 "name": "raid_bdev1", 00:28:12.334 "uuid": "892f74c5-9404-4eea-a68c-68b88b62e0b9", 00:28:12.334 "strip_size_kb": 0, 00:28:12.334 "state": "online", 00:28:12.334 "raid_level": "raid1", 00:28:12.334 "superblock": true, 00:28:12.334 "num_base_bdevs": 2, 00:28:12.334 "num_base_bdevs_discovered": 1, 00:28:12.334 "num_base_bdevs_operational": 1, 00:28:12.334 "base_bdevs_list": [ 00:28:12.334 { 00:28:12.334 "name": null, 00:28:12.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:12.334 "is_configured": false, 00:28:12.334 "data_offset": 2048, 00:28:12.334 "data_size": 63488 00:28:12.334 }, 00:28:12.334 { 00:28:12.334 "name": "BaseBdev2", 00:28:12.334 "uuid": "61035fd6-f3d7-5fb3-b547-0b4ecfee8b12", 00:28:12.334 "is_configured": true, 00:28:12.334 "data_offset": 2048, 00:28:12.334 "data_size": 63488 00:28:12.334 } 00:28:12.334 ] 00:28:12.334 }' 00:28:12.334 11:10:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:12.334 11:10:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:12.902 11:10:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:12.902 11:10:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:12.902 11:10:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:12.902 11:10:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:12.902 11:10:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:12.902 11:10:19 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:12.902 11:10:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:12.902 11:10:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:12.902 "name": "raid_bdev1", 00:28:12.902 "uuid": "892f74c5-9404-4eea-a68c-68b88b62e0b9", 00:28:12.902 "strip_size_kb": 0, 00:28:12.902 "state": "online", 00:28:12.902 "raid_level": "raid1", 00:28:12.902 "superblock": true, 00:28:12.902 "num_base_bdevs": 2, 00:28:12.902 "num_base_bdevs_discovered": 1, 00:28:12.902 "num_base_bdevs_operational": 1, 00:28:12.902 "base_bdevs_list": [ 00:28:12.902 { 00:28:12.902 "name": null, 00:28:12.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:12.902 "is_configured": false, 00:28:12.902 "data_offset": 2048, 00:28:12.902 "data_size": 63488 00:28:12.902 }, 00:28:12.902 { 00:28:12.902 "name": "BaseBdev2", 00:28:12.902 "uuid": "61035fd6-f3d7-5fb3-b547-0b4ecfee8b12", 00:28:12.902 "is_configured": true, 00:28:12.902 "data_offset": 2048, 00:28:12.902 "data_size": 63488 00:28:12.902 } 00:28:12.902 ] 00:28:12.902 }' 00:28:12.902 11:10:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:13.161 11:10:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:13.161 11:10:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:13.161 11:10:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:13.161 11:10:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@798 -- # killprocess 3699242 00:28:13.161 11:10:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 3699242 ']' 00:28:13.161 11:10:20 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@954 -- # kill -0 3699242 00:28:13.161 11:10:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:28:13.161 11:10:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:13.161 11:10:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3699242 00:28:13.161 11:10:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:13.161 11:10:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:13.161 11:10:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3699242' 00:28:13.161 killing process with pid 3699242 00:28:13.161 11:10:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 3699242 00:28:13.161 Received shutdown signal, test time was about 60.000000 seconds 00:28:13.161 00:28:13.161 Latency(us) 00:28:13.161 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:13.161 =================================================================================================================== 00:28:13.161 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:13.161 [2024-07-25 11:10:20.152469] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:13.161 11:10:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 3699242 00:28:13.161 [2024-07-25 11:10:20.152624] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:13.161 [2024-07-25 11:10:20.152693] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:13.161 [2024-07-25 11:10:20.152709] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:28:13.420 [2024-07-25 11:10:20.484826] 
bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:15.326 11:10:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@800 -- # return 0 00:28:15.326 00:28:15.326 real 0m37.435s 00:28:15.326 user 0m51.853s 00:28:15.326 sys 0m6.828s 00:28:15.326 11:10:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:15.326 11:10:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:15.326 ************************************ 00:28:15.326 END TEST raid_rebuild_test_sb 00:28:15.326 ************************************ 00:28:15.326 11:10:22 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:28:15.326 11:10:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:28:15.326 11:10:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:15.326 11:10:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:15.326 ************************************ 00:28:15.326 START TEST raid_rebuild_test_io 00:28:15.326 ************************************ 00:28:15.326 11:10:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:28:15.326 11:10:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:28:15.326 11:10:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:28:15.326 11:10:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@586 -- # local superblock=false 00:28:15.326 11:10:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@587 -- # local background_io=true 00:28:15.326 11:10:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # local verify=true 00:28:15.326 11:10:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:28:15.326 11:10:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:28:15.326 
11:10:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:28:15.326 11:10:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:28:15.326 11:10:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:28:15.326 11:10:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:28:15.326 11:10:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:28:15.326 11:10:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:28:15.326 11:10:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:28:15.326 11:10:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:28:15.326 11:10:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:28:15.326 11:10:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # local strip_size 00:28:15.326 11:10:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # local create_arg 00:28:15.326 11:10:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:28:15.326 11:10:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@594 -- # local data_offset 00:28:15.326 11:10:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:28:15.326 11:10:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:28:15.326 11:10:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:28:15.326 11:10:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # raid_pid=3705996 00:28:15.326 11:10:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 3705996 /var/tmp/spdk-raid.sock 00:28:15.326 11:10:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:15.326 11:10:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 3705996 ']' 00:28:15.326 11:10:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:15.326 11:10:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:15.326 11:10:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:15.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:15.326 11:10:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:15.326 11:10:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:15.326 [2024-07-25 11:10:22.292786] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:15.326 [2024-07-25 11:10:22.292905] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3705996 ] 00:28:15.326 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:15.326 Zero copy mechanism will not be used. 
00:28:15.326 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.326 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:15.326 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.326 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:15.326 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.326 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:15.326 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.326 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:15.326 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.326 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:15.326 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.326 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:15.326 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.326 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:15.326 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.326 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:15.326 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.326 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:15.326 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.326 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:15.326 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.326 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:15.326 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.326 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:15.326 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.326 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:15.326 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.326 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:15.326 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.327 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:15.327 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.327 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:15.327 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.327 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:15.327 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.327 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:15.327 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.327 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:15.327 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.327 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:15.327 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.327 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:15.327 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.327 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:15.327 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.327 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:15.327 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.327 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:15.327 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.327 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:15.327 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.327 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:15.327 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.327 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:15.327 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.327 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:15.327 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.327 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:15.327 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.327 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:15.327 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.327 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:15.327 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:15.327 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:15.585 [2024-07-25 11:10:22.517452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.843 [2024-07-25 11:10:22.791194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.101 [2024-07-25 11:10:23.121274] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:16.101 [2024-07-25 11:10:23.121309] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:16.362 11:10:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:16.362 11:10:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:28:16.362 11:10:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:28:16.362 11:10:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:16.621 BaseBdev1_malloc 00:28:16.621 11:10:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:16.880 [2024-07-25 11:10:23.795561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:16.880 [2024-07-25 11:10:23.795626] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:28:16.880 [2024-07-25 11:10:23.795656] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:28:16.880 [2024-07-25 11:10:23.795674] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:16.880 [2024-07-25 11:10:23.798447] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:16.880 [2024-07-25 11:10:23.798487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:16.880 BaseBdev1 00:28:16.880 11:10:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:28:16.880 11:10:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:17.140 BaseBdev2_malloc 00:28:17.140 11:10:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:17.399 [2024-07-25 11:10:24.295960] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:17.399 [2024-07-25 11:10:24.296022] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:17.399 [2024-07-25 11:10:24.296049] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:28:17.399 [2024-07-25 11:10:24.296070] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:17.399 [2024-07-25 11:10:24.298809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:17.399 [2024-07-25 11:10:24.298846] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:17.399 BaseBdev2 00:28:17.399 11:10:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:28:17.658 spare_malloc 00:28:17.658 11:10:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:17.918 spare_delay 00:28:17.918 11:10:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:17.918 [2024-07-25 11:10:25.014267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:17.918 [2024-07-25 11:10:25.014321] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:17.918 [2024-07-25 11:10:25.014349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:28:17.918 [2024-07-25 11:10:25.014366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:17.918 [2024-07-25 11:10:25.017097] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:17.918 [2024-07-25 11:10:25.017134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:17.918 spare 00:28:18.177 11:10:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:28:18.177 [2024-07-25 11:10:25.250937] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:18.177 [2024-07-25 11:10:25.253268] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:18.177 [2024-07-25 11:10:25.253372] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007780 00:28:18.177 [2024-07-25 11:10:25.253392] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:28:18.177 [2024-07-25 11:10:25.253767] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:28:18.177 [2024-07-25 11:10:25.254030] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:28:18.177 [2024-07-25 11:10:25.254049] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:28:18.177 [2024-07-25 11:10:25.254295] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:18.177 11:10:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:18.177 11:10:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:18.177 11:10:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:18.177 11:10:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:18.177 11:10:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:18.177 11:10:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:18.177 11:10:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:18.177 11:10:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:18.177 11:10:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:18.177 11:10:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:18.177 11:10:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:18.177 11:10:25 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:18.436 11:10:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:18.436 "name": "raid_bdev1", 00:28:18.436 "uuid": "055b797f-e679-4155-967e-8adcb17e8246", 00:28:18.436 "strip_size_kb": 0, 00:28:18.436 "state": "online", 00:28:18.436 "raid_level": "raid1", 00:28:18.436 "superblock": false, 00:28:18.436 "num_base_bdevs": 2, 00:28:18.436 "num_base_bdevs_discovered": 2, 00:28:18.436 "num_base_bdevs_operational": 2, 00:28:18.436 "base_bdevs_list": [ 00:28:18.436 { 00:28:18.436 "name": "BaseBdev1", 00:28:18.436 "uuid": "712a6a36-a548-57f2-8861-18e2df368374", 00:28:18.436 "is_configured": true, 00:28:18.436 "data_offset": 0, 00:28:18.436 "data_size": 65536 00:28:18.436 }, 00:28:18.436 { 00:28:18.436 "name": "BaseBdev2", 00:28:18.436 "uuid": "2d09a110-24b2-56e8-8a05-f40b82105a40", 00:28:18.436 "is_configured": true, 00:28:18.436 "data_offset": 0, 00:28:18.436 "data_size": 65536 00:28:18.436 } 00:28:18.436 ] 00:28:18.436 }' 00:28:18.436 11:10:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:18.436 11:10:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:19.003 11:10:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:28:19.003 11:10:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:19.262 [2024-07-25 11:10:26.286045] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:19.262 11:10:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536 00:28:19.262 11:10:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:28:19.262 11:10:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:19.522 11:10:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:28:19.522 11:10:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']' 00:28:19.522 11:10:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:28:19.522 11:10:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@638 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:28:19.782 [2024-07-25 11:10:26.655062] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:28:19.782 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:19.782 Zero copy mechanism will not be used. 00:28:19.782 Running I/O for 60 seconds... 
00:28:19.782 [2024-07-25 11:10:26.760889] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:19.782 [2024-07-25 11:10:26.769019] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000108b0 00:28:19.782 11:10:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:19.782 11:10:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:19.782 11:10:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:19.782 11:10:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:19.782 11:10:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:19.782 11:10:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:19.782 11:10:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:19.782 11:10:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:19.782 11:10:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:19.782 11:10:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:19.782 11:10:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:19.782 11:10:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:20.041 11:10:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:20.041 "name": "raid_bdev1", 00:28:20.041 "uuid": "055b797f-e679-4155-967e-8adcb17e8246", 00:28:20.041 "strip_size_kb": 0, 00:28:20.041 "state": "online", 00:28:20.041 "raid_level": "raid1", 00:28:20.041 "superblock": 
false, 00:28:20.041 "num_base_bdevs": 2, 00:28:20.041 "num_base_bdevs_discovered": 1, 00:28:20.041 "num_base_bdevs_operational": 1, 00:28:20.041 "base_bdevs_list": [ 00:28:20.041 { 00:28:20.041 "name": null, 00:28:20.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:20.041 "is_configured": false, 00:28:20.041 "data_offset": 0, 00:28:20.041 "data_size": 65536 00:28:20.041 }, 00:28:20.041 { 00:28:20.041 "name": "BaseBdev2", 00:28:20.041 "uuid": "2d09a110-24b2-56e8-8a05-f40b82105a40", 00:28:20.041 "is_configured": true, 00:28:20.041 "data_offset": 0, 00:28:20.041 "data_size": 65536 00:28:20.041 } 00:28:20.041 ] 00:28:20.041 }' 00:28:20.041 11:10:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:20.041 11:10:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:20.610 11:10:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:20.906 [2024-07-25 11:10:27.846444] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:20.906 11:10:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:20.906 [2024-07-25 11:10:27.932656] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:28:20.906 [2024-07-25 11:10:27.935014] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:21.165 [2024-07-25 11:10:28.062235] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:21.165 [2024-07-25 11:10:28.070329] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:21.424 [2024-07-25 11:10:28.305165] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 
00:28:21.424 [2024-07-25 11:10:28.305407] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:21.684 [2024-07-25 11:10:28.668670] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:28:21.684 [2024-07-25 11:10:28.778523] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:21.943 11:10:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:21.943 11:10:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:21.943 11:10:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:21.943 11:10:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:21.943 11:10:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:21.943 11:10:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:21.943 11:10:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:22.202 [2024-07-25 11:10:29.123576] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:28:22.202 11:10:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:22.202 "name": "raid_bdev1", 00:28:22.202 "uuid": "055b797f-e679-4155-967e-8adcb17e8246", 00:28:22.202 "strip_size_kb": 0, 00:28:22.202 "state": "online", 00:28:22.202 "raid_level": "raid1", 00:28:22.202 "superblock": false, 00:28:22.202 "num_base_bdevs": 2, 00:28:22.202 "num_base_bdevs_discovered": 2, 00:28:22.202 "num_base_bdevs_operational": 2, 
00:28:22.202 "process": { 00:28:22.202 "type": "rebuild", 00:28:22.202 "target": "spare", 00:28:22.203 "progress": { 00:28:22.203 "blocks": 16384, 00:28:22.203 "percent": 25 00:28:22.203 } 00:28:22.203 }, 00:28:22.203 "base_bdevs_list": [ 00:28:22.203 { 00:28:22.203 "name": "spare", 00:28:22.203 "uuid": "54b2dc6e-0119-5c1d-84c1-1ef1c9424dae", 00:28:22.203 "is_configured": true, 00:28:22.203 "data_offset": 0, 00:28:22.203 "data_size": 65536 00:28:22.203 }, 00:28:22.203 { 00:28:22.203 "name": "BaseBdev2", 00:28:22.203 "uuid": "2d09a110-24b2-56e8-8a05-f40b82105a40", 00:28:22.203 "is_configured": true, 00:28:22.203 "data_offset": 0, 00:28:22.203 "data_size": 65536 00:28:22.203 } 00:28:22.203 ] 00:28:22.203 }' 00:28:22.203 11:10:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:22.203 11:10:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:22.203 11:10:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:22.203 11:10:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:22.203 11:10:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:22.461 [2024-07-25 11:10:29.441634] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:22.461 [2024-07-25 11:10:29.507802] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:28:22.721 [2024-07-25 11:10:29.608421] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:22.721 [2024-07-25 11:10:29.617904] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:22.721 [2024-07-25 11:10:29.617944] 
bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:22.721 [2024-07-25 11:10:29.617958] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:22.721 [2024-07-25 11:10:29.667715] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000108b0 00:28:22.721 11:10:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:22.721 11:10:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:22.721 11:10:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:22.721 11:10:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:22.721 11:10:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:22.721 11:10:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:22.721 11:10:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:22.721 11:10:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:22.721 11:10:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:22.721 11:10:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:22.721 11:10:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:22.721 11:10:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:22.980 11:10:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:22.980 "name": "raid_bdev1", 00:28:22.980 "uuid": "055b797f-e679-4155-967e-8adcb17e8246", 00:28:22.980 
"strip_size_kb": 0, 00:28:22.980 "state": "online", 00:28:22.980 "raid_level": "raid1", 00:28:22.980 "superblock": false, 00:28:22.980 "num_base_bdevs": 2, 00:28:22.980 "num_base_bdevs_discovered": 1, 00:28:22.980 "num_base_bdevs_operational": 1, 00:28:22.980 "base_bdevs_list": [ 00:28:22.980 { 00:28:22.980 "name": null, 00:28:22.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:22.980 "is_configured": false, 00:28:22.980 "data_offset": 0, 00:28:22.980 "data_size": 65536 00:28:22.980 }, 00:28:22.980 { 00:28:22.980 "name": "BaseBdev2", 00:28:22.980 "uuid": "2d09a110-24b2-56e8-8a05-f40b82105a40", 00:28:22.980 "is_configured": true, 00:28:22.980 "data_offset": 0, 00:28:22.980 "data_size": 65536 00:28:22.980 } 00:28:22.980 ] 00:28:22.980 }' 00:28:22.980 11:10:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:22.980 11:10:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:23.548 11:10:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:23.548 11:10:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:23.548 11:10:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:23.548 11:10:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:23.548 11:10:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:23.548 11:10:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:23.548 11:10:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:23.806 11:10:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:23.806 "name": "raid_bdev1", 00:28:23.806 "uuid": 
"055b797f-e679-4155-967e-8adcb17e8246", 00:28:23.806 "strip_size_kb": 0, 00:28:23.806 "state": "online", 00:28:23.806 "raid_level": "raid1", 00:28:23.806 "superblock": false, 00:28:23.806 "num_base_bdevs": 2, 00:28:23.806 "num_base_bdevs_discovered": 1, 00:28:23.806 "num_base_bdevs_operational": 1, 00:28:23.806 "base_bdevs_list": [ 00:28:23.806 { 00:28:23.806 "name": null, 00:28:23.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:23.806 "is_configured": false, 00:28:23.806 "data_offset": 0, 00:28:23.806 "data_size": 65536 00:28:23.806 }, 00:28:23.806 { 00:28:23.806 "name": "BaseBdev2", 00:28:23.806 "uuid": "2d09a110-24b2-56e8-8a05-f40b82105a40", 00:28:23.806 "is_configured": true, 00:28:23.806 "data_offset": 0, 00:28:23.806 "data_size": 65536 00:28:23.806 } 00:28:23.806 ] 00:28:23.807 }' 00:28:23.807 11:10:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:23.807 11:10:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:23.807 11:10:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:23.807 11:10:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:23.807 11:10:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:24.065 [2024-07-25 11:10:31.100092] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:24.065 11:10:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@678 -- # sleep 1 00:28:24.324 [2024-07-25 11:10:31.198791] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010a50 00:28:24.324 [2024-07-25 11:10:31.201122] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:24.324 [2024-07-25 11:10:31.318870] bdev_raid.c: 
852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:24.324 [2024-07-25 11:10:31.319305] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:24.583 [2024-07-25 11:10:31.562616] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:24.583 [2024-07-25 11:10:31.562878] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:24.842 [2024-07-25 11:10:31.909272] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:28:25.100 11:10:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:25.100 11:10:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:25.100 11:10:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:25.100 11:10:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:25.100 11:10:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:25.100 11:10:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:25.100 11:10:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:25.359 [2024-07-25 11:10:32.285324] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:28:25.359 [2024-07-25 11:10:32.285809] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:28:25.359 [2024-07-25 11:10:32.423010] 
bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:28:25.359 11:10:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:25.359 "name": "raid_bdev1", 00:28:25.359 "uuid": "055b797f-e679-4155-967e-8adcb17e8246", 00:28:25.359 "strip_size_kb": 0, 00:28:25.359 "state": "online", 00:28:25.359 "raid_level": "raid1", 00:28:25.359 "superblock": false, 00:28:25.359 "num_base_bdevs": 2, 00:28:25.359 "num_base_bdevs_discovered": 2, 00:28:25.359 "num_base_bdevs_operational": 2, 00:28:25.359 "process": { 00:28:25.359 "type": "rebuild", 00:28:25.359 "target": "spare", 00:28:25.359 "progress": { 00:28:25.359 "blocks": 14336, 00:28:25.359 "percent": 21 00:28:25.359 } 00:28:25.359 }, 00:28:25.359 "base_bdevs_list": [ 00:28:25.359 { 00:28:25.359 "name": "spare", 00:28:25.359 "uuid": "54b2dc6e-0119-5c1d-84c1-1ef1c9424dae", 00:28:25.360 "is_configured": true, 00:28:25.360 "data_offset": 0, 00:28:25.360 "data_size": 65536 00:28:25.360 }, 00:28:25.360 { 00:28:25.360 "name": "BaseBdev2", 00:28:25.360 "uuid": "2d09a110-24b2-56e8-8a05-f40b82105a40", 00:28:25.360 "is_configured": true, 00:28:25.360 "data_offset": 0, 00:28:25.360 "data_size": 65536 00:28:25.360 } 00:28:25.360 ] 00:28:25.360 }' 00:28:25.360 11:10:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:25.619 11:10:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:25.619 11:10:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:25.619 11:10:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:25.619 11:10:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:28:25.619 11:10:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:28:25.619 11:10:32 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:28:25.619 11:10:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:28:25.619 11:10:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # local timeout=916 00:28:25.619 11:10:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:28:25.619 11:10:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:25.619 11:10:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:25.619 11:10:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:25.619 11:10:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:25.619 11:10:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:25.619 11:10:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:25.619 11:10:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:25.619 [2024-07-25 11:10:32.666825] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:28:25.878 11:10:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:25.878 "name": "raid_bdev1", 00:28:25.878 "uuid": "055b797f-e679-4155-967e-8adcb17e8246", 00:28:25.878 "strip_size_kb": 0, 00:28:25.878 "state": "online", 00:28:25.878 "raid_level": "raid1", 00:28:25.878 "superblock": false, 00:28:25.878 "num_base_bdevs": 2, 00:28:25.878 "num_base_bdevs_discovered": 2, 00:28:25.878 "num_base_bdevs_operational": 2, 00:28:25.878 "process": { 00:28:25.878 "type": "rebuild", 00:28:25.878 "target": "spare", 
00:28:25.878 "progress": { 00:28:25.878 "blocks": 20480, 00:28:25.878 "percent": 31 00:28:25.878 } 00:28:25.878 }, 00:28:25.878 "base_bdevs_list": [ 00:28:25.878 { 00:28:25.878 "name": "spare", 00:28:25.878 "uuid": "54b2dc6e-0119-5c1d-84c1-1ef1c9424dae", 00:28:25.878 "is_configured": true, 00:28:25.878 "data_offset": 0, 00:28:25.878 "data_size": 65536 00:28:25.878 }, 00:28:25.878 { 00:28:25.878 "name": "BaseBdev2", 00:28:25.878 "uuid": "2d09a110-24b2-56e8-8a05-f40b82105a40", 00:28:25.878 "is_configured": true, 00:28:25.878 "data_offset": 0, 00:28:25.878 "data_size": 65536 00:28:25.878 } 00:28:25.878 ] 00:28:25.878 }' 00:28:25.878 11:10:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:25.878 11:10:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:25.878 11:10:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:25.878 11:10:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:25.878 11:10:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:28:25.878 [2024-07-25 11:10:32.878217] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:28:26.444 [2024-07-25 11:10:33.326026] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:28:26.704 [2024-07-25 11:10:33.671203] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:28:26.963 11:10:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:28:26.963 11:10:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:26.963 11:10:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_name=raid_bdev1 00:28:26.963 11:10:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:26.963 11:10:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:26.963 11:10:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:26.963 11:10:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:26.963 11:10:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:27.222 11:10:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:27.222 "name": "raid_bdev1", 00:28:27.222 "uuid": "055b797f-e679-4155-967e-8adcb17e8246", 00:28:27.222 "strip_size_kb": 0, 00:28:27.222 "state": "online", 00:28:27.222 "raid_level": "raid1", 00:28:27.222 "superblock": false, 00:28:27.222 "num_base_bdevs": 2, 00:28:27.222 "num_base_bdevs_discovered": 2, 00:28:27.222 "num_base_bdevs_operational": 2, 00:28:27.222 "process": { 00:28:27.222 "type": "rebuild", 00:28:27.222 "target": "spare", 00:28:27.222 "progress": { 00:28:27.222 "blocks": 36864, 00:28:27.222 "percent": 56 00:28:27.222 } 00:28:27.222 }, 00:28:27.222 "base_bdevs_list": [ 00:28:27.222 { 00:28:27.222 "name": "spare", 00:28:27.222 "uuid": "54b2dc6e-0119-5c1d-84c1-1ef1c9424dae", 00:28:27.222 "is_configured": true, 00:28:27.222 "data_offset": 0, 00:28:27.222 "data_size": 65536 00:28:27.222 }, 00:28:27.222 { 00:28:27.222 "name": "BaseBdev2", 00:28:27.222 "uuid": "2d09a110-24b2-56e8-8a05-f40b82105a40", 00:28:27.222 "is_configured": true, 00:28:27.222 "data_offset": 0, 00:28:27.222 "data_size": 65536 00:28:27.222 } 00:28:27.222 ] 00:28:27.222 }' 00:28:27.222 11:10:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:27.222 11:10:34 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:27.222 11:10:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:27.222 11:10:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:27.222 11:10:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:28:27.222 [2024-07-25 11:10:34.218077] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:28:27.481 [2024-07-25 11:10:34.538302] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:28:28.052 [2024-07-25 11:10:34.879522] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:28:28.052 [2024-07-25 11:10:34.996900] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:28:28.311 11:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:28:28.311 11:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:28.311 11:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:28.311 11:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:28.311 11:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:28.311 11:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:28.311 11:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:28.311 11:10:35 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:28.311 [2024-07-25 11:10:35.316399] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:28:28.311 [2024-07-25 11:10:35.417930] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:28:28.311 11:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:28.311 "name": "raid_bdev1", 00:28:28.311 "uuid": "055b797f-e679-4155-967e-8adcb17e8246", 00:28:28.311 "strip_size_kb": 0, 00:28:28.311 "state": "online", 00:28:28.311 "raid_level": "raid1", 00:28:28.311 "superblock": false, 00:28:28.311 "num_base_bdevs": 2, 00:28:28.311 "num_base_bdevs_discovered": 2, 00:28:28.311 "num_base_bdevs_operational": 2, 00:28:28.311 "process": { 00:28:28.311 "type": "rebuild", 00:28:28.311 "target": "spare", 00:28:28.311 "progress": { 00:28:28.311 "blocks": 57344, 00:28:28.311 "percent": 87 00:28:28.311 } 00:28:28.311 }, 00:28:28.311 "base_bdevs_list": [ 00:28:28.311 { 00:28:28.311 "name": "spare", 00:28:28.311 "uuid": "54b2dc6e-0119-5c1d-84c1-1ef1c9424dae", 00:28:28.311 "is_configured": true, 00:28:28.311 "data_offset": 0, 00:28:28.311 "data_size": 65536 00:28:28.311 }, 00:28:28.311 { 00:28:28.311 "name": "BaseBdev2", 00:28:28.311 "uuid": "2d09a110-24b2-56e8-8a05-f40b82105a40", 00:28:28.311 "is_configured": true, 00:28:28.311 "data_offset": 0, 00:28:28.311 "data_size": 65536 00:28:28.311 } 00:28:28.311 ] 00:28:28.311 }' 00:28:28.311 11:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:28.571 11:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:28.571 11:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:28.571 11:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- 
# [[ spare == \s\p\a\r\e ]] 00:28:28.571 11:10:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:28:28.830 [2024-07-25 11:10:35.755102] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:28.830 [2024-07-25 11:10:35.855324] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:28.830 [2024-07-25 11:10:35.857040] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:29.399 11:10:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:28:29.399 11:10:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:29.399 11:10:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:29.399 11:10:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:29.399 11:10:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:29.399 11:10:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:29.399 11:10:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:29.399 11:10:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:29.659 11:10:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:29.659 "name": "raid_bdev1", 00:28:29.659 "uuid": "055b797f-e679-4155-967e-8adcb17e8246", 00:28:29.659 "strip_size_kb": 0, 00:28:29.659 "state": "online", 00:28:29.659 "raid_level": "raid1", 00:28:29.659 "superblock": false, 00:28:29.659 "num_base_bdevs": 2, 00:28:29.659 "num_base_bdevs_discovered": 2, 00:28:29.659 "num_base_bdevs_operational": 2, 00:28:29.659 "base_bdevs_list": [ 00:28:29.659 
{ 00:28:29.659 "name": "spare", 00:28:29.659 "uuid": "54b2dc6e-0119-5c1d-84c1-1ef1c9424dae", 00:28:29.659 "is_configured": true, 00:28:29.660 "data_offset": 0, 00:28:29.660 "data_size": 65536 00:28:29.660 }, 00:28:29.660 { 00:28:29.660 "name": "BaseBdev2", 00:28:29.660 "uuid": "2d09a110-24b2-56e8-8a05-f40b82105a40", 00:28:29.660 "is_configured": true, 00:28:29.660 "data_offset": 0, 00:28:29.660 "data_size": 65536 00:28:29.660 } 00:28:29.660 ] 00:28:29.660 }' 00:28:29.660 11:10:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:29.919 11:10:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:29.919 11:10:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:29.919 11:10:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:28:29.919 11:10:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # break 00:28:29.919 11:10:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:29.919 11:10:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:29.919 11:10:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:29.920 11:10:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:29.920 11:10:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:29.920 11:10:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:29.920 11:10:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:30.180 11:10:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:30.180 
"name": "raid_bdev1", 00:28:30.180 "uuid": "055b797f-e679-4155-967e-8adcb17e8246", 00:28:30.180 "strip_size_kb": 0, 00:28:30.180 "state": "online", 00:28:30.180 "raid_level": "raid1", 00:28:30.180 "superblock": false, 00:28:30.180 "num_base_bdevs": 2, 00:28:30.180 "num_base_bdevs_discovered": 2, 00:28:30.180 "num_base_bdevs_operational": 2, 00:28:30.180 "base_bdevs_list": [ 00:28:30.180 { 00:28:30.180 "name": "spare", 00:28:30.180 "uuid": "54b2dc6e-0119-5c1d-84c1-1ef1c9424dae", 00:28:30.180 "is_configured": true, 00:28:30.180 "data_offset": 0, 00:28:30.180 "data_size": 65536 00:28:30.180 }, 00:28:30.180 { 00:28:30.180 "name": "BaseBdev2", 00:28:30.180 "uuid": "2d09a110-24b2-56e8-8a05-f40b82105a40", 00:28:30.180 "is_configured": true, 00:28:30.180 "data_offset": 0, 00:28:30.180 "data_size": 65536 00:28:30.180 } 00:28:30.180 ] 00:28:30.180 }' 00:28:30.180 11:10:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:30.180 11:10:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:30.180 11:10:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:30.180 11:10:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:30.180 11:10:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:30.180 11:10:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:30.180 11:10:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:30.180 11:10:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:30.180 11:10:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:30.180 11:10:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:30.180 
11:10:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:30.180 11:10:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:30.180 11:10:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:30.180 11:10:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:30.180 11:10:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:30.180 11:10:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:30.440 11:10:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:30.440 "name": "raid_bdev1", 00:28:30.440 "uuid": "055b797f-e679-4155-967e-8adcb17e8246", 00:28:30.440 "strip_size_kb": 0, 00:28:30.440 "state": "online", 00:28:30.440 "raid_level": "raid1", 00:28:30.440 "superblock": false, 00:28:30.440 "num_base_bdevs": 2, 00:28:30.440 "num_base_bdevs_discovered": 2, 00:28:30.440 "num_base_bdevs_operational": 2, 00:28:30.440 "base_bdevs_list": [ 00:28:30.440 { 00:28:30.440 "name": "spare", 00:28:30.440 "uuid": "54b2dc6e-0119-5c1d-84c1-1ef1c9424dae", 00:28:30.440 "is_configured": true, 00:28:30.440 "data_offset": 0, 00:28:30.440 "data_size": 65536 00:28:30.440 }, 00:28:30.440 { 00:28:30.440 "name": "BaseBdev2", 00:28:30.440 "uuid": "2d09a110-24b2-56e8-8a05-f40b82105a40", 00:28:30.440 "is_configured": true, 00:28:30.440 "data_offset": 0, 00:28:30.440 "data_size": 65536 00:28:30.440 } 00:28:30.440 ] 00:28:30.440 }' 00:28:30.440 11:10:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:30.440 11:10:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:31.007 11:10:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:31.265 [2024-07-25 11:10:38.180223] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:31.265 [2024-07-25 11:10:38.180269] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:31.265 00:28:31.265 Latency(us) 00:28:31.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:31.265 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:28:31.265 raid_bdev1 : 11.59 100.00 299.99 0.00 0.00 13754.85 329.32 118279.37 00:28:31.265 =================================================================================================================== 00:28:31.266 Total : 100.00 299.99 0.00 0.00 13754.85 329.32 118279.37 00:28:31.266 [2024-07-25 11:10:38.307518] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:31.266 [2024-07-25 11:10:38.307567] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:31.266 [2024-07-25 11:10:38.307659] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:31.266 [2024-07-25 11:10:38.307679] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:28:31.266 0 00:28:31.266 11:10:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:31.266 11:10:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # jq length 00:28:31.524 11:10:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:28:31.524 11:10:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:28:31.524 11:10:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@738 -- 
# '[' true = true ']' 00:28:31.524 11:10:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:28:31.524 11:10:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:31.524 11:10:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:28:31.524 11:10:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:31.524 11:10:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:31.524 11:10:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:31.524 11:10:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:28:31.524 11:10:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:31.524 11:10:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:31.524 11:10:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:28:31.784 /dev/nbd0 00:28:31.784 11:10:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:31.784 11:10:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:31.784 11:10:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:28:31.784 11:10:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:28:31.784 11:10:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:31.784 11:10:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:31.784 11:10:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:28:31.784 11:10:38 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@873 -- # break 00:28:31.784 11:10:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:31.784 11:10:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:31.784 11:10:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:31.784 1+0 records in 00:28:31.784 1+0 records out 00:28:31.784 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284088 s, 14.4 MB/s 00:28:31.784 11:10:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:31.784 11:10:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:28:31.784 11:10:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:31.784 11:10:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:31.784 11:10:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:28:31.784 11:10:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:31.784 11:10:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:31.784 11:10:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:28:31.784 11:10:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev2 ']' 00:28:31.784 11:10:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:28:31.784 11:10:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:31.784 11:10:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev2') 00:28:31.784 11:10:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:31.784 11:10:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:28:31.784 11:10:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:31.784 11:10:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:28:31.784 11:10:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:31.784 11:10:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:31.784 11:10:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:28:32.043 /dev/nbd1 00:28:32.043 11:10:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:32.043 11:10:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:32.043 11:10:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:28:32.043 11:10:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:28:32.043 11:10:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:32.043 11:10:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:32.043 11:10:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:28:32.043 11:10:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:28:32.043 11:10:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:32.043 11:10:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:32.043 11:10:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:32.043 1+0 records in 00:28:32.043 1+0 records out 00:28:32.043 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288295 s, 14.2 MB/s 00:28:32.043 11:10:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:32.043 11:10:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:28:32.043 11:10:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:32.043 11:10:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:32.043 11:10:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:28:32.043 11:10:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:32.043 11:10:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:32.043 11:10:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@746 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:28:32.301 11:10:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:28:32.302 11:10:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:32.302 11:10:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:28:32.302 11:10:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:32.302 11:10:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:28:32.302 11:10:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:32.302 11:10:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:28:32.561 11:10:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:32.561 11:10:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:32.561 11:10:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:32.561 11:10:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:32.561 11:10:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:32.561 11:10:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:32.561 11:10:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:28:32.561 11:10:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:28:32.561 11:10:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:28:32.561 11:10:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:32.561 11:10:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:32.561 11:10:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:32.561 11:10:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:28:32.561 11:10:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:32.561 11:10:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:32.820 11:10:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:32.820 11:10:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:32.820 11:10:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd0 00:28:32.820 11:10:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:32.820 11:10:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:32.820 11:10:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:32.820 11:10:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:28:32.820 11:10:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:28:32.820 11:10:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:28:32.820 11:10:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@798 -- # killprocess 3705996 00:28:32.820 11:10:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 3705996 ']' 00:28:32.820 11:10:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 3705996 00:28:32.820 11:10:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:28:32.820 11:10:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:32.820 11:10:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3705996 00:28:32.820 11:10:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:32.820 11:10:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:32.820 11:10:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3705996' 00:28:32.820 killing process with pid 3705996 00:28:32.820 11:10:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 3705996 00:28:32.820 Received shutdown signal, test time was about 13.191351 seconds 00:28:32.820 00:28:32.820 Latency(us) 00:28:32.820 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:32.820 
=================================================================================================================== 00:28:32.820 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:32.820 [2024-07-25 11:10:39.881044] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:32.820 11:10:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 3705996 00:28:33.079 [2024-07-25 11:10:40.118559] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:34.986 11:10:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@800 -- # return 0 00:28:34.986 00:28:34.986 real 0m19.636s 00:28:34.986 user 0m28.243s 00:28:34.986 sys 0m2.878s 00:28:34.986 11:10:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:34.986 11:10:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:34.986 ************************************ 00:28:34.986 END TEST raid_rebuild_test_io 00:28:34.986 ************************************ 00:28:34.986 11:10:41 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:28:34.986 11:10:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:28:34.986 11:10:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:34.986 11:10:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:34.986 ************************************ 00:28:34.986 START TEST raid_rebuild_test_sb_io 00:28:34.986 ************************************ 00:28:34.986 11:10:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:28:34.986 11:10:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:28:34.986 11:10:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:28:34.986 11:10:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@586 -- # local 
superblock=true 00:28:34.986 11:10:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@587 -- # local background_io=true 00:28:34.986 11:10:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # local verify=true 00:28:34.986 11:10:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:28:34.986 11:10:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:28:34.986 11:10:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:28:34.986 11:10:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:28:34.986 11:10:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:28:34.986 11:10:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:28:34.986 11:10:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:28:34.986 11:10:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:28:34.986 11:10:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:28:34.986 11:10:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:28:34.986 11:10:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:28:34.986 11:10:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # local strip_size 00:28:34.986 11:10:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # local create_arg 00:28:34.986 11:10:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:28:34.986 11:10:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@594 -- # local data_offset 00:28:34.986 11:10:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:28:34.986 11:10:41 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@604 -- # strip_size=0 00:28:34.986 11:10:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:28:34.986 11:10:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:28:34.986 11:10:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # raid_pid=3709403 00:28:34.986 11:10:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 3709403 /var/tmp/spdk-raid.sock 00:28:34.986 11:10:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:34.986 11:10:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 3709403 ']' 00:28:34.986 11:10:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:34.986 11:10:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:34.986 11:10:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:34.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:34.986 11:10:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:34.986 11:10:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:34.986 [2024-07-25 11:10:42.019108] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:28:34.986 [2024-07-25 11:10:42.019241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3709403 ] 00:28:34.986 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:34.986 Zero copy mechanism will not be used. 00:28:35.247 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.247 EAL: Requested device 0000:3d:01.0 cannot be used 00:28:35.247 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.247 EAL: Requested device 0000:3d:01.1 cannot be used 00:28:35.247 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.247 EAL: Requested device 0000:3d:01.2 cannot be used 00:28:35.247 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.247 EAL: Requested device 0000:3d:01.3 cannot be used 00:28:35.247 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.247 EAL: Requested device 0000:3d:01.4 cannot be used 00:28:35.247 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.247 EAL: Requested device 0000:3d:01.5 cannot be used 00:28:35.247 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.247 EAL: Requested device 0000:3d:01.6 cannot be used 00:28:35.247 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.248 EAL: Requested device 0000:3d:01.7 cannot be used 00:28:35.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.248 EAL: Requested device 0000:3d:02.0 cannot be used 00:28:35.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.248 EAL: Requested device 0000:3d:02.1 cannot be used 00:28:35.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.248 EAL: Requested device 0000:3d:02.2 cannot be used 00:28:35.248 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.248 EAL: Requested device 0000:3d:02.3 cannot be used 00:28:35.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.248 EAL: Requested device 0000:3d:02.4 cannot be used 00:28:35.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.248 EAL: Requested device 0000:3d:02.5 cannot be used 00:28:35.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.248 EAL: Requested device 0000:3d:02.6 cannot be used 00:28:35.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.248 EAL: Requested device 0000:3d:02.7 cannot be used 00:28:35.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.248 EAL: Requested device 0000:3f:01.0 cannot be used 00:28:35.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.248 EAL: Requested device 0000:3f:01.1 cannot be used 00:28:35.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.248 EAL: Requested device 0000:3f:01.2 cannot be used 00:28:35.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.248 EAL: Requested device 0000:3f:01.3 cannot be used 00:28:35.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.248 EAL: Requested device 0000:3f:01.4 cannot be used 00:28:35.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.248 EAL: Requested device 0000:3f:01.5 cannot be used 00:28:35.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.248 EAL: Requested device 0000:3f:01.6 cannot be used 00:28:35.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.248 EAL: Requested device 0000:3f:01.7 cannot be used 00:28:35.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.248 EAL: Requested device 0000:3f:02.0 cannot be used 00:28:35.248 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.248 EAL: Requested device 0000:3f:02.1 cannot be used 00:28:35.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.248 EAL: Requested device 0000:3f:02.2 cannot be used 00:28:35.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.248 EAL: Requested device 0000:3f:02.3 cannot be used 00:28:35.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.248 EAL: Requested device 0000:3f:02.4 cannot be used 00:28:35.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.248 EAL: Requested device 0000:3f:02.5 cannot be used 00:28:35.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.248 EAL: Requested device 0000:3f:02.6 cannot be used 00:28:35.248 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:28:35.248 EAL: Requested device 0000:3f:02.7 cannot be used 00:28:35.248 [2024-07-25 11:10:42.242326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.542 [2024-07-25 11:10:42.535080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:35.801 [2024-07-25 11:10:42.886273] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:35.801 [2024-07-25 11:10:42.886310] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:36.060 11:10:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:36.060 11:10:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:28:36.060 11:10:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:28:36.060 11:10:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:36.320 BaseBdev1_malloc 
00:28:36.320 11:10:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:36.577 [2024-07-25 11:10:43.568531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:36.577 [2024-07-25 11:10:43.568597] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:36.577 [2024-07-25 11:10:43.568627] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:28:36.577 [2024-07-25 11:10:43.568649] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:36.577 [2024-07-25 11:10:43.571436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:36.577 [2024-07-25 11:10:43.571478] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:36.577 BaseBdev1 00:28:36.577 11:10:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:28:36.577 11:10:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:36.835 BaseBdev2_malloc 00:28:36.835 11:10:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:37.094 [2024-07-25 11:10:44.067944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:37.094 [2024-07-25 11:10:44.067999] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:37.094 [2024-07-25 11:10:44.068023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:28:37.094 [2024-07-25 
11:10:44.068043] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:37.094 [2024-07-25 11:10:44.070739] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:37.094 [2024-07-25 11:10:44.070797] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:37.094 BaseBdev2 00:28:37.094 11:10:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:28:37.354 spare_malloc 00:28:37.354 11:10:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:37.613 spare_delay 00:28:37.613 11:10:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:37.872 [2024-07-25 11:10:44.808291] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:37.872 [2024-07-25 11:10:44.808361] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:37.872 [2024-07-25 11:10:44.808389] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:28:37.872 [2024-07-25 11:10:44.808406] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:37.872 [2024-07-25 11:10:44.811163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:37.872 [2024-07-25 11:10:44.811200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:37.872 spare 00:28:37.872 11:10:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@627 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:28:38.132 [2024-07-25 11:10:45.036933] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:38.132 [2024-07-25 11:10:45.039263] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:38.132 [2024-07-25 11:10:45.039467] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:28:38.132 [2024-07-25 11:10:45.039488] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:38.132 [2024-07-25 11:10:45.039863] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:28:38.132 [2024-07-25 11:10:45.040115] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:28:38.132 [2024-07-25 11:10:45.040131] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:28:38.132 [2024-07-25 11:10:45.040378] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:38.132 11:10:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:38.132 11:10:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:38.132 11:10:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:38.132 11:10:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:38.132 11:10:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:38.132 11:10:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:38.132 11:10:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:28:38.132 11:10:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:38.132 11:10:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:38.132 11:10:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:38.132 11:10:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:38.132 11:10:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:38.391 11:10:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:38.391 "name": "raid_bdev1", 00:28:38.391 "uuid": "733d8fee-1bdd-458f-9147-86026e0fe2e5", 00:28:38.391 "strip_size_kb": 0, 00:28:38.391 "state": "online", 00:28:38.391 "raid_level": "raid1", 00:28:38.391 "superblock": true, 00:28:38.391 "num_base_bdevs": 2, 00:28:38.391 "num_base_bdevs_discovered": 2, 00:28:38.391 "num_base_bdevs_operational": 2, 00:28:38.391 "base_bdevs_list": [ 00:28:38.391 { 00:28:38.391 "name": "BaseBdev1", 00:28:38.391 "uuid": "746326e9-a3f1-57c7-b0ce-199e3c9a7d73", 00:28:38.391 "is_configured": true, 00:28:38.391 "data_offset": 2048, 00:28:38.391 "data_size": 63488 00:28:38.391 }, 00:28:38.391 { 00:28:38.391 "name": "BaseBdev2", 00:28:38.391 "uuid": "f662dcf3-4985-5be8-9300-7a2e3a29eb09", 00:28:38.391 "is_configured": true, 00:28:38.391 "data_offset": 2048, 00:28:38.391 "data_size": 63488 00:28:38.391 } 00:28:38.391 ] 00:28:38.391 }' 00:28:38.391 11:10:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:38.391 11:10:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:38.960 11:10:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:38.960 11:10:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:28:38.960 [2024-07-25 11:10:45.987898] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:38.960 11:10:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488 00:28:38.960 11:10:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:38.960 11:10:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:39.219 11:10:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:28:39.219 11:10:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']' 00:28:39.219 11:10:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:28:39.219 11:10:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@638 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:28:39.479 [2024-07-25 11:10:46.352810] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:28:39.479 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:39.479 Zero copy mechanism will not be used. 00:28:39.479 Running I/O for 60 seconds... 
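The verify steps in this trace repeatedly pull a single raid bdev's record out of the `bdev_raid_get_bdevs all` JSON array with a `jq select` filter. A minimal self-contained sketch of that extraction pattern, using sample JSON that stands in for the real RPC output:

```shell
# Sketch of the jq selection pattern used by verify_raid_bdev_state:
# pick one entry out of a JSON array by its "name" field.
# The JSON below is illustrative sample data, not real bdev_raid_get_bdevs output.
bdevs='[{"name":"raid_bdev1","state":"online"},{"name":"other","state":"configuring"}]'
raid_state=$(printf '%s' "$bdevs" | jq -r '.[] | select(.name == "raid_bdev1") | .state')
echo "$raid_state"
```

The `-r` flag emits the raw string rather than a JSON-quoted value, which is why the test script can compare fields like `.process.type` directly with `[[ ... == ... ]]`.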
00:28:39.479 [2024-07-25 11:10:46.446089] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:39.479 [2024-07-25 11:10:46.454009] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000108b0 00:28:39.479 11:10:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:39.479 11:10:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:39.479 11:10:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:39.479 11:10:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:39.479 11:10:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:39.479 11:10:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:39.479 11:10:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:39.479 11:10:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:39.479 11:10:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:39.479 11:10:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:39.479 11:10:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:39.479 11:10:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:39.738 11:10:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:39.738 "name": "raid_bdev1", 00:28:39.738 "uuid": "733d8fee-1bdd-458f-9147-86026e0fe2e5", 00:28:39.738 "strip_size_kb": 0, 00:28:39.738 "state": "online", 00:28:39.738 "raid_level": 
"raid1", 00:28:39.738 "superblock": true, 00:28:39.738 "num_base_bdevs": 2, 00:28:39.738 "num_base_bdevs_discovered": 1, 00:28:39.738 "num_base_bdevs_operational": 1, 00:28:39.738 "base_bdevs_list": [ 00:28:39.738 { 00:28:39.738 "name": null, 00:28:39.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:39.738 "is_configured": false, 00:28:39.738 "data_offset": 2048, 00:28:39.738 "data_size": 63488 00:28:39.738 }, 00:28:39.738 { 00:28:39.738 "name": "BaseBdev2", 00:28:39.738 "uuid": "f662dcf3-4985-5be8-9300-7a2e3a29eb09", 00:28:39.738 "is_configured": true, 00:28:39.738 "data_offset": 2048, 00:28:39.738 "data_size": 63488 00:28:39.738 } 00:28:39.738 ] 00:28:39.738 }' 00:28:39.738 11:10:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:39.738 11:10:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:40.307 11:10:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:40.566 [2024-07-25 11:10:47.524161] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:40.566 11:10:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:40.566 [2024-07-25 11:10:47.602975] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:28:40.566 [2024-07-25 11:10:47.605364] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:40.826 [2024-07-25 11:10:47.722679] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:40.826 [2024-07-25 11:10:47.723210] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:40.826 [2024-07-25 11:10:47.925400] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:40.826 [2024-07-25 11:10:47.925623] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:41.395 [2024-07-25 11:10:48.376817] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:41.395 [2024-07-25 11:10:48.377028] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:41.654 11:10:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:41.654 11:10:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:41.655 11:10:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:41.655 11:10:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:41.655 11:10:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:41.655 11:10:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:41.655 11:10:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:41.914 [2024-07-25 11:10:48.824172] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:28:41.914 [2024-07-25 11:10:48.824383] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:28:41.914 11:10:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:41.914 "name": "raid_bdev1", 00:28:41.914 "uuid": "733d8fee-1bdd-458f-9147-86026e0fe2e5", 00:28:41.914 
"strip_size_kb": 0, 00:28:41.914 "state": "online", 00:28:41.914 "raid_level": "raid1", 00:28:41.914 "superblock": true, 00:28:41.914 "num_base_bdevs": 2, 00:28:41.914 "num_base_bdevs_discovered": 2, 00:28:41.914 "num_base_bdevs_operational": 2, 00:28:41.914 "process": { 00:28:41.914 "type": "rebuild", 00:28:41.914 "target": "spare", 00:28:41.914 "progress": { 00:28:41.914 "blocks": 14336, 00:28:41.914 "percent": 22 00:28:41.914 } 00:28:41.914 }, 00:28:41.914 "base_bdevs_list": [ 00:28:41.914 { 00:28:41.914 "name": "spare", 00:28:41.914 "uuid": "50a4cdf5-73e9-5919-86e4-527caab5b1aa", 00:28:41.914 "is_configured": true, 00:28:41.914 "data_offset": 2048, 00:28:41.914 "data_size": 63488 00:28:41.914 }, 00:28:41.914 { 00:28:41.914 "name": "BaseBdev2", 00:28:41.914 "uuid": "f662dcf3-4985-5be8-9300-7a2e3a29eb09", 00:28:41.914 "is_configured": true, 00:28:41.914 "data_offset": 2048, 00:28:41.914 "data_size": 63488 00:28:41.914 } 00:28:41.914 ] 00:28:41.914 }' 00:28:41.914 11:10:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:41.914 11:10:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:41.914 11:10:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:41.914 11:10:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:41.914 11:10:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:42.173 [2024-07-25 11:10:49.087011] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:28:42.173 [2024-07-25 11:10:49.139273] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:42.173 [2024-07-25 11:10:49.204487] bdev_raid.c: 852:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:28:42.432 [2024-07-25 11:10:49.322243] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:42.432 [2024-07-25 11:10:49.324020] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:42.432 [2024-07-25 11:10:49.324058] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:42.432 [2024-07-25 11:10:49.324073] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:42.432 [2024-07-25 11:10:49.387942] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000108b0 00:28:42.432 11:10:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:42.432 11:10:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:42.432 11:10:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:42.432 11:10:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:42.432 11:10:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:42.432 11:10:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:42.432 11:10:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:42.432 11:10:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:42.432 11:10:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:42.432 11:10:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:42.433 11:10:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:42.433 11:10:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:42.692 11:10:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:42.692 "name": "raid_bdev1", 00:28:42.692 "uuid": "733d8fee-1bdd-458f-9147-86026e0fe2e5", 00:28:42.692 "strip_size_kb": 0, 00:28:42.692 "state": "online", 00:28:42.692 "raid_level": "raid1", 00:28:42.692 "superblock": true, 00:28:42.692 "num_base_bdevs": 2, 00:28:42.692 "num_base_bdevs_discovered": 1, 00:28:42.692 "num_base_bdevs_operational": 1, 00:28:42.692 "base_bdevs_list": [ 00:28:42.692 { 00:28:42.692 "name": null, 00:28:42.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:42.692 "is_configured": false, 00:28:42.692 "data_offset": 2048, 00:28:42.692 "data_size": 63488 00:28:42.692 }, 00:28:42.692 { 00:28:42.692 "name": "BaseBdev2", 00:28:42.692 "uuid": "f662dcf3-4985-5be8-9300-7a2e3a29eb09", 00:28:42.692 "is_configured": true, 00:28:42.692 "data_offset": 2048, 00:28:42.692 "data_size": 63488 00:28:42.692 } 00:28:42.692 ] 00:28:42.692 }' 00:28:42.692 11:10:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:42.692 11:10:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:43.261 11:10:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:43.261 11:10:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:43.261 11:10:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:43.261 11:10:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:43.261 11:10:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 
00:28:43.261 11:10:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:43.261 11:10:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:43.521 11:10:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:43.521 "name": "raid_bdev1", 00:28:43.521 "uuid": "733d8fee-1bdd-458f-9147-86026e0fe2e5", 00:28:43.521 "strip_size_kb": 0, 00:28:43.521 "state": "online", 00:28:43.521 "raid_level": "raid1", 00:28:43.521 "superblock": true, 00:28:43.521 "num_base_bdevs": 2, 00:28:43.521 "num_base_bdevs_discovered": 1, 00:28:43.521 "num_base_bdevs_operational": 1, 00:28:43.521 "base_bdevs_list": [ 00:28:43.521 { 00:28:43.521 "name": null, 00:28:43.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:43.521 "is_configured": false, 00:28:43.521 "data_offset": 2048, 00:28:43.521 "data_size": 63488 00:28:43.521 }, 00:28:43.521 { 00:28:43.521 "name": "BaseBdev2", 00:28:43.521 "uuid": "f662dcf3-4985-5be8-9300-7a2e3a29eb09", 00:28:43.521 "is_configured": true, 00:28:43.521 "data_offset": 2048, 00:28:43.521 "data_size": 63488 00:28:43.521 } 00:28:43.521 ] 00:28:43.521 }' 00:28:43.521 11:10:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:43.521 11:10:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:43.521 11:10:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:43.521 11:10:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:43.521 11:10:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:43.780 
[2024-07-25 11:10:50.777838] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:43.780 11:10:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@678 -- # sleep 1 00:28:43.780 [2024-07-25 11:10:50.846753] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010a50 00:28:43.780 [2024-07-25 11:10:50.849068] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:44.039 [2024-07-25 11:10:50.969447] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:44.039 [2024-07-25 11:10:51.112734] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:44.039 [2024-07-25 11:10:51.112987] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:44.607 [2024-07-25 11:10:51.474758] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:28:44.607 [2024-07-25 11:10:51.710889] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:44.607 [2024-07-25 11:10:51.711158] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:44.867 11:10:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:44.867 11:10:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:44.867 11:10:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:44.867 11:10:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:44.867 11:10:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local 
raid_bdev_info 00:28:44.867 11:10:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:44.867 11:10:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:44.867 [2024-07-25 11:10:51.941204] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:28:44.867 [2024-07-25 11:10:51.941704] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:28:45.126 [2024-07-25 11:10:52.061464] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:28:45.126 [2024-07-25 11:10:52.061653] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:28:45.126 11:10:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:45.126 "name": "raid_bdev1", 00:28:45.126 "uuid": "733d8fee-1bdd-458f-9147-86026e0fe2e5", 00:28:45.126 "strip_size_kb": 0, 00:28:45.126 "state": "online", 00:28:45.126 "raid_level": "raid1", 00:28:45.126 "superblock": true, 00:28:45.126 "num_base_bdevs": 2, 00:28:45.126 "num_base_bdevs_discovered": 2, 00:28:45.126 "num_base_bdevs_operational": 2, 00:28:45.126 "process": { 00:28:45.126 "type": "rebuild", 00:28:45.126 "target": "spare", 00:28:45.126 "progress": { 00:28:45.126 "blocks": 14336, 00:28:45.126 "percent": 22 00:28:45.126 } 00:28:45.126 }, 00:28:45.126 "base_bdevs_list": [ 00:28:45.126 { 00:28:45.126 "name": "spare", 00:28:45.126 "uuid": "50a4cdf5-73e9-5919-86e4-527caab5b1aa", 00:28:45.126 "is_configured": true, 00:28:45.126 "data_offset": 2048, 00:28:45.126 "data_size": 63488 00:28:45.126 }, 00:28:45.126 { 00:28:45.126 "name": "BaseBdev2", 00:28:45.126 "uuid": 
"f662dcf3-4985-5be8-9300-7a2e3a29eb09", 00:28:45.126 "is_configured": true, 00:28:45.126 "data_offset": 2048, 00:28:45.126 "data_size": 63488 00:28:45.126 } 00:28:45.126 ] 00:28:45.126 }' 00:28:45.126 11:10:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:45.126 11:10:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:45.126 11:10:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:45.126 11:10:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:45.126 11:10:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:28:45.126 11:10:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:28:45.126 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:28:45.126 11:10:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:28:45.126 11:10:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:28:45.126 11:10:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:28:45.126 11:10:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # local timeout=936 00:28:45.126 11:10:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:28:45.126 11:10:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:45.126 11:10:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:45.126 11:10:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:45.126 11:10:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local 
target=spare 00:28:45.126 11:10:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:45.126 11:10:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:45.126 11:10:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:45.385 11:10:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:45.385 "name": "raid_bdev1", 00:28:45.385 "uuid": "733d8fee-1bdd-458f-9147-86026e0fe2e5", 00:28:45.385 "strip_size_kb": 0, 00:28:45.385 "state": "online", 00:28:45.385 "raid_level": "raid1", 00:28:45.386 "superblock": true, 00:28:45.386 "num_base_bdevs": 2, 00:28:45.386 "num_base_bdevs_discovered": 2, 00:28:45.386 "num_base_bdevs_operational": 2, 00:28:45.386 "process": { 00:28:45.386 "type": "rebuild", 00:28:45.386 "target": "spare", 00:28:45.386 "progress": { 00:28:45.386 "blocks": 18432, 00:28:45.386 "percent": 29 00:28:45.386 } 00:28:45.386 }, 00:28:45.386 "base_bdevs_list": [ 00:28:45.386 { 00:28:45.386 "name": "spare", 00:28:45.386 "uuid": "50a4cdf5-73e9-5919-86e4-527caab5b1aa", 00:28:45.386 "is_configured": true, 00:28:45.386 "data_offset": 2048, 00:28:45.386 "data_size": 63488 00:28:45.386 }, 00:28:45.386 { 00:28:45.386 "name": "BaseBdev2", 00:28:45.386 "uuid": "f662dcf3-4985-5be8-9300-7a2e3a29eb09", 00:28:45.386 "is_configured": true, 00:28:45.386 "data_offset": 2048, 00:28:45.386 "data_size": 63488 00:28:45.386 } 00:28:45.386 ] 00:28:45.386 }' 00:28:45.386 [2024-07-25 11:10:52.391694] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:28:45.386 11:10:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:45.386 11:10:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # 
[[ rebuild == \r\e\b\u\i\l\d ]] 00:28:45.386 11:10:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:45.386 11:10:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:45.386 11:10:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:28:45.645 [2024-07-25 11:10:52.602077] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:28:45.904 [2024-07-25 11:10:52.923291] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:28:46.164 [2024-07-25 11:10:53.141470] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:28:46.422 11:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:28:46.422 11:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:46.422 11:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:46.422 11:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:46.422 11:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:46.422 11:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:46.422 11:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:46.422 11:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:46.422 [2024-07-25 11:10:53.507117] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 
offset_begin: 30720 offset_end: 36864 00:28:46.681 11:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:46.681 "name": "raid_bdev1", 00:28:46.681 "uuid": "733d8fee-1bdd-458f-9147-86026e0fe2e5", 00:28:46.681 "strip_size_kb": 0, 00:28:46.681 "state": "online", 00:28:46.681 "raid_level": "raid1", 00:28:46.681 "superblock": true, 00:28:46.681 "num_base_bdevs": 2, 00:28:46.681 "num_base_bdevs_discovered": 2, 00:28:46.681 "num_base_bdevs_operational": 2, 00:28:46.681 "process": { 00:28:46.681 "type": "rebuild", 00:28:46.681 "target": "spare", 00:28:46.681 "progress": { 00:28:46.681 "blocks": 36864, 00:28:46.681 "percent": 58 00:28:46.681 } 00:28:46.681 }, 00:28:46.681 "base_bdevs_list": [ 00:28:46.681 { 00:28:46.681 "name": "spare", 00:28:46.681 "uuid": "50a4cdf5-73e9-5919-86e4-527caab5b1aa", 00:28:46.681 "is_configured": true, 00:28:46.681 "data_offset": 2048, 00:28:46.681 "data_size": 63488 00:28:46.681 }, 00:28:46.681 { 00:28:46.681 "name": "BaseBdev2", 00:28:46.681 "uuid": "f662dcf3-4985-5be8-9300-7a2e3a29eb09", 00:28:46.681 "is_configured": true, 00:28:46.681 "data_offset": 2048, 00:28:46.681 "data_size": 63488 00:28:46.681 } 00:28:46.681 ] 00:28:46.681 }' 00:28:46.681 11:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:46.681 [2024-07-25 11:10:53.728948] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:28:46.681 11:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:46.681 11:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:46.681 11:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:46.681 11:10:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:28:46.939 [2024-07-25 11:10:53.855902] bdev_raid.c: 
852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:28:46.939 [2024-07-25 11:10:53.856086] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:28:47.508 [2024-07-25 11:10:54.330041] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:28:47.767 11:10:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:28:47.767 11:10:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:47.767 11:10:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:47.767 11:10:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:47.767 11:10:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:47.767 11:10:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:47.767 11:10:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:47.767 11:10:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:48.026 11:10:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:48.026 "name": "raid_bdev1", 00:28:48.026 "uuid": "733d8fee-1bdd-458f-9147-86026e0fe2e5", 00:28:48.026 "strip_size_kb": 0, 00:28:48.026 "state": "online", 00:28:48.026 "raid_level": "raid1", 00:28:48.026 "superblock": true, 00:28:48.026 "num_base_bdevs": 2, 00:28:48.026 "num_base_bdevs_discovered": 2, 00:28:48.026 "num_base_bdevs_operational": 2, 00:28:48.026 "process": { 00:28:48.026 "type": "rebuild", 00:28:48.026 "target": 
"spare", 00:28:48.026 "progress": { 00:28:48.026 "blocks": 57344, 00:28:48.026 "percent": 90 00:28:48.026 } 00:28:48.026 }, 00:28:48.026 "base_bdevs_list": [ 00:28:48.026 { 00:28:48.026 "name": "spare", 00:28:48.026 "uuid": "50a4cdf5-73e9-5919-86e4-527caab5b1aa", 00:28:48.026 "is_configured": true, 00:28:48.026 "data_offset": 2048, 00:28:48.026 "data_size": 63488 00:28:48.026 }, 00:28:48.026 { 00:28:48.026 "name": "BaseBdev2", 00:28:48.026 "uuid": "f662dcf3-4985-5be8-9300-7a2e3a29eb09", 00:28:48.026 "is_configured": true, 00:28:48.026 "data_offset": 2048, 00:28:48.026 "data_size": 63488 00:28:48.026 } 00:28:48.026 ] 00:28:48.026 }' 00:28:48.026 11:10:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:48.026 11:10:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:48.026 11:10:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:48.026 11:10:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:48.026 11:10:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:28:48.284 [2024-07-25 11:10:55.229801] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:48.284 [2024-07-25 11:10:55.337744] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:48.284 [2024-07-25 11:10:55.339347] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:49.250 11:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:28:49.250 11:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:49.250 11:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:49.250 11:10:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:49.250 11:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:49.250 11:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:49.250 11:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:49.250 11:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:49.509 11:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:49.509 "name": "raid_bdev1", 00:28:49.509 "uuid": "733d8fee-1bdd-458f-9147-86026e0fe2e5", 00:28:49.509 "strip_size_kb": 0, 00:28:49.509 "state": "online", 00:28:49.509 "raid_level": "raid1", 00:28:49.509 "superblock": true, 00:28:49.509 "num_base_bdevs": 2, 00:28:49.509 "num_base_bdevs_discovered": 2, 00:28:49.509 "num_base_bdevs_operational": 2, 00:28:49.509 "base_bdevs_list": [ 00:28:49.509 { 00:28:49.509 "name": "spare", 00:28:49.509 "uuid": "50a4cdf5-73e9-5919-86e4-527caab5b1aa", 00:28:49.509 "is_configured": true, 00:28:49.509 "data_offset": 2048, 00:28:49.509 "data_size": 63488 00:28:49.509 }, 00:28:49.509 { 00:28:49.509 "name": "BaseBdev2", 00:28:49.509 "uuid": "f662dcf3-4985-5be8-9300-7a2e3a29eb09", 00:28:49.509 "is_configured": true, 00:28:49.509 "data_offset": 2048, 00:28:49.509 "data_size": 63488 00:28:49.509 } 00:28:49.509 ] 00:28:49.509 }' 00:28:49.509 11:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:49.509 11:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:49.509 11:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:49.509 11:10:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:28:49.509 11:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # break 00:28:49.509 11:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:49.509 11:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:49.509 11:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:49.509 11:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:49.509 11:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:49.509 11:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:49.510 11:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:49.769 11:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:49.769 "name": "raid_bdev1", 00:28:49.769 "uuid": "733d8fee-1bdd-458f-9147-86026e0fe2e5", 00:28:49.769 "strip_size_kb": 0, 00:28:49.769 "state": "online", 00:28:49.769 "raid_level": "raid1", 00:28:49.769 "superblock": true, 00:28:49.769 "num_base_bdevs": 2, 00:28:49.769 "num_base_bdevs_discovered": 2, 00:28:49.769 "num_base_bdevs_operational": 2, 00:28:49.769 "base_bdevs_list": [ 00:28:49.769 { 00:28:49.769 "name": "spare", 00:28:49.769 "uuid": "50a4cdf5-73e9-5919-86e4-527caab5b1aa", 00:28:49.769 "is_configured": true, 00:28:49.769 "data_offset": 2048, 00:28:49.769 "data_size": 63488 00:28:49.769 }, 00:28:49.769 { 00:28:49.769 "name": "BaseBdev2", 00:28:49.769 "uuid": "f662dcf3-4985-5be8-9300-7a2e3a29eb09", 00:28:49.769 "is_configured": true, 00:28:49.769 "data_offset": 2048, 00:28:49.769 
"data_size": 63488 00:28:49.769 } 00:28:49.769 ] 00:28:49.769 }' 00:28:49.769 11:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:49.769 11:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:49.769 11:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:49.769 11:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:49.769 11:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:49.769 11:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:49.769 11:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:49.769 11:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:49.769 11:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:49.769 11:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:49.769 11:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:49.769 11:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:49.769 11:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:49.769 11:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:49.769 11:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:49.769 11:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:50.028 
11:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:50.028 "name": "raid_bdev1", 00:28:50.028 "uuid": "733d8fee-1bdd-458f-9147-86026e0fe2e5", 00:28:50.028 "strip_size_kb": 0, 00:28:50.028 "state": "online", 00:28:50.028 "raid_level": "raid1", 00:28:50.028 "superblock": true, 00:28:50.028 "num_base_bdevs": 2, 00:28:50.028 "num_base_bdevs_discovered": 2, 00:28:50.028 "num_base_bdevs_operational": 2, 00:28:50.028 "base_bdevs_list": [ 00:28:50.028 { 00:28:50.028 "name": "spare", 00:28:50.028 "uuid": "50a4cdf5-73e9-5919-86e4-527caab5b1aa", 00:28:50.028 "is_configured": true, 00:28:50.028 "data_offset": 2048, 00:28:50.028 "data_size": 63488 00:28:50.028 }, 00:28:50.028 { 00:28:50.028 "name": "BaseBdev2", 00:28:50.028 "uuid": "f662dcf3-4985-5be8-9300-7a2e3a29eb09", 00:28:50.028 "is_configured": true, 00:28:50.028 "data_offset": 2048, 00:28:50.028 "data_size": 63488 00:28:50.028 } 00:28:50.028 ] 00:28:50.028 }' 00:28:50.028 11:10:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:50.028 11:10:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:50.596 11:10:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:50.855 [2024-07-25 11:10:57.759917] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:50.855 [2024-07-25 11:10:57.759952] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:50.855 00:28:50.855 Latency(us) 00:28:50.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:50.855 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:28:50.855 raid_bdev1 : 11.48 96.72 290.15 0.00 0.00 13758.32 330.96 117440.51 00:28:50.855 
=================================================================================================================== 00:28:50.855 Total : 96.72 290.15 0.00 0.00 13758.32 330.96 117440.51 00:28:50.855 [2024-07-25 11:10:57.889629] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:50.855 [2024-07-25 11:10:57.889672] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:50.855 [2024-07-25 11:10:57.889767] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:50.855 [2024-07-25 11:10:57.889783] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:28:50.855 0 00:28:50.855 11:10:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # jq length 00:28:50.855 11:10:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:51.114 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:28:51.114 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:28:51.114 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@738 -- # '[' true = true ']' 00:28:51.114 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:28:51.114 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:51.114 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:28:51.114 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:51.114 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:51.114 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:28:51.114 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:28:51.114 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:51.114 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:51.114 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:28:51.373 /dev/nbd0 00:28:51.373 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:51.373 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:51.373 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:28:51.373 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:28:51.373 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:51.373 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:51.373 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:28:51.373 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:28:51.373 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:51.373 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:51.373 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:51.373 1+0 records in 00:28:51.373 1+0 records out 00:28:51.373 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287474 s, 14.2 MB/s 00:28:51.373 
11:10:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:51.373 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:28:51.373 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:51.373 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:51.373 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:28:51.373 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:51.373 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:51.373 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:28:51.373 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev2 ']' 00:28:51.373 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:28:51.373 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:51.373 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:28:51.373 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:51.373 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:28:51.373 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:51.373 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:28:51.373 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:51.373 11:10:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:51.373 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:28:51.632 /dev/nbd1 00:28:51.632 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:51.632 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:51.632 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:28:51.632 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:28:51.632 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:28:51.632 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:28:51.632 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:28:51.632 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:28:51.632 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:28:51.632 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:28:51.632 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:51.632 1+0 records in 00:28:51.632 1+0 records out 00:28:51.632 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263577 s, 15.5 MB/s 00:28:51.632 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:51.632 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # 
size=4096 00:28:51.632 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:28:51.632 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:28:51.632 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:28:51.632 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:51.632 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:51.632 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:28:51.890 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:28:51.890 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:51.890 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:28:51.890 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:51.890 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:28:51.891 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:51.891 11:10:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:28:52.149 11:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:52.149 11:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:52.149 11:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:52.149 11:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 
)) 00:28:52.149 11:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:52.149 11:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:52.149 11:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:28:52.149 11:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:28:52.149 11:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:28:52.149 11:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:52.149 11:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:52.149 11:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:52.149 11:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:28:52.149 11:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:52.149 11:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:52.406 11:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:52.406 11:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:52.406 11:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:52.406 11:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:52.406 11:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:52.406 11:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:52.406 11:10:59 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@41 -- # break 00:28:52.406 11:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:28:52.406 11:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:28:52.406 11:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:52.664 11:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:52.664 [2024-07-25 11:10:59.737718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:52.664 [2024-07-25 11:10:59.737771] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:52.664 [2024-07-25 11:10:59.737800] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043280 00:28:52.665 [2024-07-25 11:10:59.737815] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:52.665 [2024-07-25 11:10:59.740591] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:52.665 [2024-07-25 11:10:59.740624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:52.665 [2024-07-25 11:10:59.740729] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:52.665 [2024-07-25 11:10:59.740796] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:52.665 [2024-07-25 11:10:59.740980] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:52.665 spare 00:28:52.665 11:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:52.665 11:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 
-- # local raid_bdev_name=raid_bdev1 00:28:52.665 11:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:52.665 11:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:52.665 11:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:52.665 11:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:52.665 11:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:52.665 11:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:52.665 11:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:52.665 11:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:52.665 11:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:52.665 11:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:52.922 [2024-07-25 11:10:59.841319] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:28:52.922 [2024-07-25 11:10:59.841350] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:52.922 [2024-07-25 11:10:59.841671] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00001f930 00:28:52.922 [2024-07-25 11:10:59.841955] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:28:52.923 [2024-07-25 11:10:59.841972] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:28:52.923 [2024-07-25 11:10:59.842199] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:28:52.923 11:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:52.923 "name": "raid_bdev1", 00:28:52.923 "uuid": "733d8fee-1bdd-458f-9147-86026e0fe2e5", 00:28:52.923 "strip_size_kb": 0, 00:28:52.923 "state": "online", 00:28:52.923 "raid_level": "raid1", 00:28:52.923 "superblock": true, 00:28:52.923 "num_base_bdevs": 2, 00:28:52.923 "num_base_bdevs_discovered": 2, 00:28:52.923 "num_base_bdevs_operational": 2, 00:28:52.923 "base_bdevs_list": [ 00:28:52.923 { 00:28:52.923 "name": "spare", 00:28:52.923 "uuid": "50a4cdf5-73e9-5919-86e4-527caab5b1aa", 00:28:52.923 "is_configured": true, 00:28:52.923 "data_offset": 2048, 00:28:52.923 "data_size": 63488 00:28:52.923 }, 00:28:52.923 { 00:28:52.923 "name": "BaseBdev2", 00:28:52.923 "uuid": "f662dcf3-4985-5be8-9300-7a2e3a29eb09", 00:28:52.923 "is_configured": true, 00:28:52.923 "data_offset": 2048, 00:28:52.923 "data_size": 63488 00:28:52.923 } 00:28:52.923 ] 00:28:52.923 }' 00:28:52.923 11:10:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:52.923 11:10:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:53.490 11:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:53.490 11:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:53.490 11:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:53.490 11:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:53.490 11:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:53.490 11:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:53.490 11:11:00 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:53.749 11:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:53.749 "name": "raid_bdev1", 00:28:53.749 "uuid": "733d8fee-1bdd-458f-9147-86026e0fe2e5", 00:28:53.749 "strip_size_kb": 0, 00:28:53.749 "state": "online", 00:28:53.749 "raid_level": "raid1", 00:28:53.749 "superblock": true, 00:28:53.749 "num_base_bdevs": 2, 00:28:53.749 "num_base_bdevs_discovered": 2, 00:28:53.749 "num_base_bdevs_operational": 2, 00:28:53.749 "base_bdevs_list": [ 00:28:53.749 { 00:28:53.749 "name": "spare", 00:28:53.749 "uuid": "50a4cdf5-73e9-5919-86e4-527caab5b1aa", 00:28:53.749 "is_configured": true, 00:28:53.749 "data_offset": 2048, 00:28:53.749 "data_size": 63488 00:28:53.749 }, 00:28:53.749 { 00:28:53.749 "name": "BaseBdev2", 00:28:53.749 "uuid": "f662dcf3-4985-5be8-9300-7a2e3a29eb09", 00:28:53.749 "is_configured": true, 00:28:53.749 "data_offset": 2048, 00:28:53.749 "data_size": 63488 00:28:53.749 } 00:28:53.749 ] 00:28:53.749 }' 00:28:53.749 11:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:53.749 11:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:53.749 11:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:53.749 11:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:53.749 11:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:53.749 11:11:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:28:54.008 11:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:28:54.008 11:11:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:54.267 [2024-07-25 11:11:01.274301] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:54.267 11:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:54.267 11:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:54.267 11:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:54.267 11:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:54.267 11:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:54.267 11:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:54.267 11:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:54.267 11:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:54.267 11:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:54.267 11:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:54.267 11:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:54.267 11:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:54.526 11:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:54.526 "name": "raid_bdev1", 00:28:54.526 "uuid": "733d8fee-1bdd-458f-9147-86026e0fe2e5", 00:28:54.526 "strip_size_kb": 0, 00:28:54.526 
"state": "online", 00:28:54.526 "raid_level": "raid1", 00:28:54.526 "superblock": true, 00:28:54.526 "num_base_bdevs": 2, 00:28:54.526 "num_base_bdevs_discovered": 1, 00:28:54.526 "num_base_bdevs_operational": 1, 00:28:54.526 "base_bdevs_list": [ 00:28:54.526 { 00:28:54.526 "name": null, 00:28:54.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:54.526 "is_configured": false, 00:28:54.526 "data_offset": 2048, 00:28:54.526 "data_size": 63488 00:28:54.526 }, 00:28:54.526 { 00:28:54.526 "name": "BaseBdev2", 00:28:54.526 "uuid": "f662dcf3-4985-5be8-9300-7a2e3a29eb09", 00:28:54.526 "is_configured": true, 00:28:54.526 "data_offset": 2048, 00:28:54.526 "data_size": 63488 00:28:54.526 } 00:28:54.526 ] 00:28:54.526 }' 00:28:54.526 11:11:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:54.526 11:11:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:55.094 11:11:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:55.353 [2024-07-25 11:11:02.369471] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:55.353 [2024-07-25 11:11:02.369663] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:55.353 [2024-07-25 11:11:02.369689] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:28:55.353 [2024-07-25 11:11:02.369734] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:55.353 [2024-07-25 11:11:02.394704] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00001fa00 00:28:55.353 [2024-07-25 11:11:02.397034] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:55.353 11:11:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # sleep 1 00:28:56.732 11:11:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:56.732 11:11:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:56.732 11:11:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:56.732 11:11:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:56.732 11:11:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:56.732 11:11:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:56.732 11:11:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:56.732 11:11:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:56.732 "name": "raid_bdev1", 00:28:56.732 "uuid": "733d8fee-1bdd-458f-9147-86026e0fe2e5", 00:28:56.732 "strip_size_kb": 0, 00:28:56.732 "state": "online", 00:28:56.732 "raid_level": "raid1", 00:28:56.732 "superblock": true, 00:28:56.732 "num_base_bdevs": 2, 00:28:56.732 "num_base_bdevs_discovered": 2, 00:28:56.732 "num_base_bdevs_operational": 2, 00:28:56.732 "process": { 00:28:56.732 "type": "rebuild", 00:28:56.732 "target": "spare", 00:28:56.732 "progress": { 00:28:56.732 "blocks": 24576, 
00:28:56.732 "percent": 38 00:28:56.732 } 00:28:56.732 }, 00:28:56.732 "base_bdevs_list": [ 00:28:56.732 { 00:28:56.732 "name": "spare", 00:28:56.732 "uuid": "50a4cdf5-73e9-5919-86e4-527caab5b1aa", 00:28:56.732 "is_configured": true, 00:28:56.732 "data_offset": 2048, 00:28:56.732 "data_size": 63488 00:28:56.732 }, 00:28:56.732 { 00:28:56.732 "name": "BaseBdev2", 00:28:56.732 "uuid": "f662dcf3-4985-5be8-9300-7a2e3a29eb09", 00:28:56.732 "is_configured": true, 00:28:56.732 "data_offset": 2048, 00:28:56.732 "data_size": 63488 00:28:56.732 } 00:28:56.732 ] 00:28:56.732 }' 00:28:56.732 11:11:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:56.732 11:11:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:56.732 11:11:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:56.732 11:11:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:56.732 11:11:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:56.991 [2024-07-25 11:11:03.914259] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:56.991 [2024-07-25 11:11:04.010345] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:56.991 [2024-07-25 11:11:04.010427] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:56.991 [2024-07-25 11:11:04.010450] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:56.991 [2024-07-25 11:11:04.010464] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:56.991 11:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 1 00:28:56.991 11:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:56.991 11:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:56.991 11:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:56.991 11:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:56.991 11:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:56.991 11:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:56.991 11:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:56.991 11:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:56.991 11:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:56.991 11:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:56.991 11:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:57.250 11:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:57.250 "name": "raid_bdev1", 00:28:57.250 "uuid": "733d8fee-1bdd-458f-9147-86026e0fe2e5", 00:28:57.250 "strip_size_kb": 0, 00:28:57.250 "state": "online", 00:28:57.250 "raid_level": "raid1", 00:28:57.250 "superblock": true, 00:28:57.250 "num_base_bdevs": 2, 00:28:57.250 "num_base_bdevs_discovered": 1, 00:28:57.250 "num_base_bdevs_operational": 1, 00:28:57.250 "base_bdevs_list": [ 00:28:57.250 { 00:28:57.250 "name": null, 00:28:57.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:57.250 "is_configured": false, 00:28:57.250 
"data_offset": 2048, 00:28:57.250 "data_size": 63488 00:28:57.250 }, 00:28:57.250 { 00:28:57.250 "name": "BaseBdev2", 00:28:57.250 "uuid": "f662dcf3-4985-5be8-9300-7a2e3a29eb09", 00:28:57.250 "is_configured": true, 00:28:57.250 "data_offset": 2048, 00:28:57.250 "data_size": 63488 00:28:57.250 } 00:28:57.250 ] 00:28:57.250 }' 00:28:57.250 11:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:57.250 11:11:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:57.818 11:11:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:58.077 [2024-07-25 11:11:05.092035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:58.077 [2024-07-25 11:11:05.092100] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:58.077 [2024-07-25 11:11:05.092127] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043b80 00:28:58.077 [2024-07-25 11:11:05.092153] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:58.077 [2024-07-25 11:11:05.092732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:58.077 [2024-07-25 11:11:05.092762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:58.077 [2024-07-25 11:11:05.092867] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:58.077 [2024-07-25 11:11:05.092889] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:58.077 [2024-07-25 11:11:05.092904] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:28:58.077 [2024-07-25 11:11:05.092942] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:58.077 [2024-07-25 11:11:05.118725] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00001fad0 00:28:58.077 spare 00:28:58.077 [2024-07-25 11:11:05.121073] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:58.077 11:11:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # sleep 1 00:28:59.455 11:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:59.455 11:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:59.455 11:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:59.455 11:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:59.455 11:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:59.455 11:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:59.455 11:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:59.455 11:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:59.455 "name": "raid_bdev1", 00:28:59.455 "uuid": "733d8fee-1bdd-458f-9147-86026e0fe2e5", 00:28:59.455 "strip_size_kb": 0, 00:28:59.455 "state": "online", 00:28:59.455 "raid_level": "raid1", 00:28:59.455 "superblock": true, 00:28:59.455 "num_base_bdevs": 2, 00:28:59.455 "num_base_bdevs_discovered": 2, 00:28:59.455 "num_base_bdevs_operational": 2, 00:28:59.455 "process": { 00:28:59.455 "type": "rebuild", 00:28:59.455 "target": "spare", 00:28:59.456 "progress": { 00:28:59.456 
"blocks": 24576, 00:28:59.456 "percent": 38 00:28:59.456 } 00:28:59.456 }, 00:28:59.456 "base_bdevs_list": [ 00:28:59.456 { 00:28:59.456 "name": "spare", 00:28:59.456 "uuid": "50a4cdf5-73e9-5919-86e4-527caab5b1aa", 00:28:59.456 "is_configured": true, 00:28:59.456 "data_offset": 2048, 00:28:59.456 "data_size": 63488 00:28:59.456 }, 00:28:59.456 { 00:28:59.456 "name": "BaseBdev2", 00:28:59.456 "uuid": "f662dcf3-4985-5be8-9300-7a2e3a29eb09", 00:28:59.456 "is_configured": true, 00:28:59.456 "data_offset": 2048, 00:28:59.456 "data_size": 63488 00:28:59.456 } 00:28:59.456 ] 00:28:59.456 }' 00:28:59.456 11:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:59.456 11:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:59.456 11:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:59.456 11:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:59.456 11:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:59.715 [2024-07-25 11:11:06.660110] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:59.715 [2024-07-25 11:11:06.734219] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:59.715 [2024-07-25 11:11:06.734283] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:59.715 [2024-07-25 11:11:06.734306] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:59.715 [2024-07-25 11:11:06.734318] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:59.715 11:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:28:59.715 11:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:59.715 11:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:59.715 11:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:59.715 11:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:59.715 11:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:59.715 11:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:59.715 11:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:59.715 11:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:59.715 11:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:59.715 11:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:59.715 11:11:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:59.974 11:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:59.975 "name": "raid_bdev1", 00:28:59.975 "uuid": "733d8fee-1bdd-458f-9147-86026e0fe2e5", 00:28:59.975 "strip_size_kb": 0, 00:28:59.975 "state": "online", 00:28:59.975 "raid_level": "raid1", 00:28:59.975 "superblock": true, 00:28:59.975 "num_base_bdevs": 2, 00:28:59.975 "num_base_bdevs_discovered": 1, 00:28:59.975 "num_base_bdevs_operational": 1, 00:28:59.975 "base_bdevs_list": [ 00:28:59.975 { 00:28:59.975 "name": null, 00:28:59.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:59.975 "is_configured": false, 00:28:59.975 
"data_offset": 2048, 00:28:59.975 "data_size": 63488 00:28:59.975 }, 00:28:59.975 { 00:28:59.975 "name": "BaseBdev2", 00:28:59.975 "uuid": "f662dcf3-4985-5be8-9300-7a2e3a29eb09", 00:28:59.975 "is_configured": true, 00:28:59.975 "data_offset": 2048, 00:28:59.975 "data_size": 63488 00:28:59.975 } 00:28:59.975 ] 00:28:59.975 }' 00:28:59.975 11:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:59.975 11:11:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:00.543 11:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:00.543 11:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:00.543 11:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:00.543 11:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:00.543 11:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:00.543 11:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:00.543 11:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:00.801 11:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:00.801 "name": "raid_bdev1", 00:29:00.801 "uuid": "733d8fee-1bdd-458f-9147-86026e0fe2e5", 00:29:00.801 "strip_size_kb": 0, 00:29:00.801 "state": "online", 00:29:00.801 "raid_level": "raid1", 00:29:00.801 "superblock": true, 00:29:00.801 "num_base_bdevs": 2, 00:29:00.801 "num_base_bdevs_discovered": 1, 00:29:00.801 "num_base_bdevs_operational": 1, 00:29:00.801 "base_bdevs_list": [ 00:29:00.801 { 00:29:00.801 "name": null, 00:29:00.801 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:29:00.801 "is_configured": false, 00:29:00.801 "data_offset": 2048, 00:29:00.801 "data_size": 63488 00:29:00.801 }, 00:29:00.801 { 00:29:00.801 "name": "BaseBdev2", 00:29:00.801 "uuid": "f662dcf3-4985-5be8-9300-7a2e3a29eb09", 00:29:00.801 "is_configured": true, 00:29:00.801 "data_offset": 2048, 00:29:00.801 "data_size": 63488 00:29:00.801 } 00:29:00.801 ] 00:29:00.801 }' 00:29:00.801 11:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:00.801 11:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:00.801 11:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:01.060 11:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:01.060 11:11:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@787 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:29:01.061 11:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@788 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:01.319 [2024-07-25 11:11:08.374471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:01.319 [2024-07-25 11:11:08.374535] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:01.319 [2024-07-25 11:11:08.374564] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000044180 00:29:01.319 [2024-07-25 11:11:08.374579] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:01.319 [2024-07-25 11:11:08.375159] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:01.319 [2024-07-25 11:11:08.375187] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:01.319 [2024-07-25 11:11:08.375284] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:29:01.319 [2024-07-25 11:11:08.375303] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:29:01.319 [2024-07-25 11:11:08.375319] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:01.319 BaseBdev1 00:29:01.319 11:11:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@789 -- # sleep 1 00:29:02.697 11:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:02.697 11:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:02.697 11:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:02.697 11:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:02.697 11:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:02.697 11:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:02.697 11:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:02.697 11:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:02.697 11:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:02.697 11:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:02.697 11:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:02.698 11:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:02.698 11:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:02.698 "name": "raid_bdev1", 00:29:02.698 "uuid": "733d8fee-1bdd-458f-9147-86026e0fe2e5", 00:29:02.698 "strip_size_kb": 0, 00:29:02.698 "state": "online", 00:29:02.698 "raid_level": "raid1", 00:29:02.698 "superblock": true, 00:29:02.698 "num_base_bdevs": 2, 00:29:02.698 "num_base_bdevs_discovered": 1, 00:29:02.698 "num_base_bdevs_operational": 1, 00:29:02.698 "base_bdevs_list": [ 00:29:02.698 { 00:29:02.698 "name": null, 00:29:02.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:02.698 "is_configured": false, 00:29:02.698 "data_offset": 2048, 00:29:02.698 "data_size": 63488 00:29:02.698 }, 00:29:02.698 { 00:29:02.698 "name": "BaseBdev2", 00:29:02.698 "uuid": "f662dcf3-4985-5be8-9300-7a2e3a29eb09", 00:29:02.698 "is_configured": true, 00:29:02.698 "data_offset": 2048, 00:29:02.698 "data_size": 63488 00:29:02.698 } 00:29:02.698 ] 00:29:02.698 }' 00:29:02.698 11:11:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:02.698 11:11:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:03.265 11:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:03.265 11:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:03.265 11:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:03.265 11:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:03.265 11:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:03.266 11:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:03.266 11:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:03.525 11:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:03.525 "name": "raid_bdev1", 00:29:03.525 "uuid": "733d8fee-1bdd-458f-9147-86026e0fe2e5", 00:29:03.525 "strip_size_kb": 0, 00:29:03.525 "state": "online", 00:29:03.525 "raid_level": "raid1", 00:29:03.525 "superblock": true, 00:29:03.525 "num_base_bdevs": 2, 00:29:03.525 "num_base_bdevs_discovered": 1, 00:29:03.525 "num_base_bdevs_operational": 1, 00:29:03.525 "base_bdevs_list": [ 00:29:03.525 { 00:29:03.525 "name": null, 00:29:03.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:03.525 "is_configured": false, 00:29:03.525 "data_offset": 2048, 00:29:03.525 "data_size": 63488 00:29:03.525 }, 00:29:03.525 { 00:29:03.525 "name": "BaseBdev2", 00:29:03.525 "uuid": "f662dcf3-4985-5be8-9300-7a2e3a29eb09", 00:29:03.525 "is_configured": true, 00:29:03.525 "data_offset": 2048, 00:29:03.525 "data_size": 63488 00:29:03.525 } 00:29:03.525 ] 00:29:03.525 }' 00:29:03.525 11:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:03.525 11:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:03.525 11:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:03.525 11:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:03.525 11:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@792 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:03.525 11:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local 
es=0 00:29:03.525 11:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:03.525 11:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:03.525 11:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:03.525 11:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:03.525 11:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:03.525 11:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:03.525 11:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:03.525 11:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:29:03.525 11:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:29:03.525 11:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:03.785 [2024-07-25 11:11:10.717252] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:03.785 [2024-07-25 11:11:10.717411] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:29:03.785 
[2024-07-25 11:11:10.717431] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:03.785 request: 00:29:03.785 { 00:29:03.785 "base_bdev": "BaseBdev1", 00:29:03.785 "raid_bdev": "raid_bdev1", 00:29:03.785 "method": "bdev_raid_add_base_bdev", 00:29:03.785 "req_id": 1 00:29:03.785 } 00:29:03.785 Got JSON-RPC error response 00:29:03.785 response: 00:29:03.785 { 00:29:03.785 "code": -22, 00:29:03.785 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:29:03.785 } 00:29:03.785 11:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:29:03.785 11:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:03.785 11:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:03.785 11:11:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:03.785 11:11:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@793 -- # sleep 1 00:29:04.767 11:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:04.767 11:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:04.767 11:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:04.767 11:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:04.767 11:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:04.767 11:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:04.767 11:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:04.767 11:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:04.767 11:11:11 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:04.767 11:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:04.767 11:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:04.767 11:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:05.026 11:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:05.026 "name": "raid_bdev1", 00:29:05.026 "uuid": "733d8fee-1bdd-458f-9147-86026e0fe2e5", 00:29:05.026 "strip_size_kb": 0, 00:29:05.026 "state": "online", 00:29:05.026 "raid_level": "raid1", 00:29:05.026 "superblock": true, 00:29:05.026 "num_base_bdevs": 2, 00:29:05.027 "num_base_bdevs_discovered": 1, 00:29:05.027 "num_base_bdevs_operational": 1, 00:29:05.027 "base_bdevs_list": [ 00:29:05.027 { 00:29:05.027 "name": null, 00:29:05.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:05.027 "is_configured": false, 00:29:05.027 "data_offset": 2048, 00:29:05.027 "data_size": 63488 00:29:05.027 }, 00:29:05.027 { 00:29:05.027 "name": "BaseBdev2", 00:29:05.027 "uuid": "f662dcf3-4985-5be8-9300-7a2e3a29eb09", 00:29:05.027 "is_configured": true, 00:29:05.027 "data_offset": 2048, 00:29:05.027 "data_size": 63488 00:29:05.027 } 00:29:05.027 ] 00:29:05.027 }' 00:29:05.027 11:11:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:05.027 11:11:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:05.595 11:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:05.595 11:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:05.595 11:11:12 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:05.595 11:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:05.595 11:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:05.595 11:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:05.595 11:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:05.595 11:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:05.595 "name": "raid_bdev1", 00:29:05.595 "uuid": "733d8fee-1bdd-458f-9147-86026e0fe2e5", 00:29:05.595 "strip_size_kb": 0, 00:29:05.595 "state": "online", 00:29:05.595 "raid_level": "raid1", 00:29:05.595 "superblock": true, 00:29:05.595 "num_base_bdevs": 2, 00:29:05.595 "num_base_bdevs_discovered": 1, 00:29:05.595 "num_base_bdevs_operational": 1, 00:29:05.595 "base_bdevs_list": [ 00:29:05.595 { 00:29:05.595 "name": null, 00:29:05.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:05.595 "is_configured": false, 00:29:05.595 "data_offset": 2048, 00:29:05.595 "data_size": 63488 00:29:05.595 }, 00:29:05.595 { 00:29:05.595 "name": "BaseBdev2", 00:29:05.595 "uuid": "f662dcf3-4985-5be8-9300-7a2e3a29eb09", 00:29:05.595 "is_configured": true, 00:29:05.595 "data_offset": 2048, 00:29:05.595 "data_size": 63488 00:29:05.595 } 00:29:05.595 ] 00:29:05.595 }' 00:29:05.595 11:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:05.855 11:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:05.855 11:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:05.855 11:11:12 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:05.855 11:11:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@798 -- # killprocess 3709403 00:29:05.855 11:11:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 3709403 ']' 00:29:05.855 11:11:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 3709403 00:29:05.855 11:11:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:29:05.855 11:11:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:05.855 11:11:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3709403 00:29:05.855 11:11:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:05.855 11:11:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:05.855 11:11:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3709403' 00:29:05.855 killing process with pid 3709403 00:29:05.855 11:11:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 3709403 00:29:05.855 Received shutdown signal, test time was about 26.392917 seconds 00:29:05.855 00:29:05.855 Latency(us) 00:29:05.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:05.855 =================================================================================================================== 00:29:05.855 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:05.855 [2024-07-25 11:11:12.812525] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:05.855 [2024-07-25 11:11:12.812663] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:05.855 11:11:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 3709403 
00:29:05.855 [2024-07-25 11:11:12.812729] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:05.855 [2024-07-25 11:11:12.812745] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:29:06.115 [2024-07-25 11:11:13.047067] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:08.027 11:11:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@800 -- # return 0 00:29:08.027 00:29:08.027 real 0m32.945s 00:29:08.027 user 0m49.378s 00:29:08.027 sys 0m4.485s 00:29:08.027 11:11:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:08.027 11:11:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:08.027 ************************************ 00:29:08.027 END TEST raid_rebuild_test_sb_io 00:29:08.027 ************************************ 00:29:08.027 11:11:14 bdev_raid -- bdev/bdev_raid.sh@956 -- # for n in 2 4 00:29:08.027 11:11:14 bdev_raid -- bdev/bdev_raid.sh@957 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:29:08.027 11:11:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:29:08.027 11:11:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:08.027 11:11:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:08.027 ************************************ 00:29:08.027 START TEST raid_rebuild_test 00:29:08.027 ************************************ 00:29:08.027 11:11:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:29:08.027 11:11:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:29:08.027 11:11:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:29:08.027 11:11:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@586 -- # local superblock=false 
00:29:08.027 11:11:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:29:08.027 11:11:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # local verify=true 00:29:08.027 11:11:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:29:08.027 11:11:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:08.027 11:11:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:29:08.027 11:11:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:29:08.027 11:11:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:08.027 11:11:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:29:08.027 11:11:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:29:08.027 11:11:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:08.027 11:11:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:29:08.027 11:11:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:29:08.027 11:11:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:08.027 11:11:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4 00:29:08.027 11:11:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:29:08.027 11:11:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:08.027 11:11:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:29:08.027 11:11:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:29:08.027 11:11:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:29:08.027 11:11:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # 
local strip_size 00:29:08.027 11:11:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # local create_arg 00:29:08.027 11:11:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:29:08.027 11:11:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@594 -- # local data_offset 00:29:08.027 11:11:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:29:08.027 11:11:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:29:08.027 11:11:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # '[' false = true ']' 00:29:08.027 11:11:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # raid_pid=3715362 00:29:08.027 11:11:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # waitforlisten 3715362 /var/tmp/spdk-raid.sock 00:29:08.028 11:11:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:08.028 11:11:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 3715362 ']' 00:29:08.028 11:11:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:08.028 11:11:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:08.028 11:11:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:08.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:29:08.028 11:11:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:08.028 11:11:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:08.028 [2024-07-25 11:11:15.047926] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:29:08.028 [2024-07-25 11:11:15.048047] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3715362 ] 00:29:08.028 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:08.028 Zero copy mechanism will not be used. 00:29:08.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.287 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:08.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.287 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:08.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.287 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:08.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.287 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:08.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.287 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:08.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.287 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:08.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.287 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:08.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.287 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:08.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.287 EAL: Requested device 0000:3d:02.0 
cannot be used 00:29:08.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.287 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:08.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.287 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:08.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.287 EAL: Requested device 0000:3d:02.3 cannot be used 00:29:08.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.287 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:08.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.287 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:08.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.287 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:08.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.287 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:08.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.287 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:08.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.287 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:08.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.287 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:08.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.287 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:08.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.287 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:08.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.287 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:08.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.287 EAL: Requested device 0000:3f:01.6 cannot be used 
00:29:08.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.287 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:08.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.287 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:08.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.287 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:08.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.287 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:08.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.287 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:08.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.287 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:08.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.287 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:08.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.287 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:08.287 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:08.287 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:08.287 [2024-07-25 11:11:15.271767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:08.547 [2024-07-25 11:11:15.549737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.806 [2024-07-25 11:11:15.879901] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:08.806 [2024-07-25 11:11:15.879938] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:09.065 11:11:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:09.065 11:11:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:29:09.065 11:11:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in 
"${base_bdevs[@]}" 00:29:09.065 11:11:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:09.324 BaseBdev1_malloc 00:29:09.325 11:11:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:09.584 [2024-07-25 11:11:16.543613] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:09.584 [2024-07-25 11:11:16.543676] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:09.584 [2024-07-25 11:11:16.543705] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:29:09.584 [2024-07-25 11:11:16.543724] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:09.584 [2024-07-25 11:11:16.546448] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:09.584 [2024-07-25 11:11:16.546488] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:09.584 BaseBdev1 00:29:09.584 11:11:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:29:09.584 11:11:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:09.843 BaseBdev2_malloc 00:29:09.843 11:11:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:10.102 [2024-07-25 11:11:17.039927] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:10.102 [2024-07-25 11:11:17.039992] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:10.102 [2024-07-25 11:11:17.040019] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:29:10.102 [2024-07-25 11:11:17.040040] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:10.102 [2024-07-25 11:11:17.042798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:10.102 [2024-07-25 11:11:17.042836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:10.102 BaseBdev2 00:29:10.102 11:11:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:29:10.102 11:11:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:10.361 BaseBdev3_malloc 00:29:10.361 11:11:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:29:10.621 [2024-07-25 11:11:17.532575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:29:10.621 [2024-07-25 11:11:17.532647] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:10.621 [2024-07-25 11:11:17.532676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:29:10.621 [2024-07-25 11:11:17.532695] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:10.621 [2024-07-25 11:11:17.535463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:10.621 [2024-07-25 11:11:17.535501] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:10.621 BaseBdev3 00:29:10.621 11:11:17 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:29:10.621 11:11:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:29:10.880 BaseBdev4_malloc 00:29:10.880 11:11:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:29:11.140 [2024-07-25 11:11:18.030391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:29:11.140 [2024-07-25 11:11:18.030465] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:11.140 [2024-07-25 11:11:18.030492] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041a80 00:29:11.140 [2024-07-25 11:11:18.030509] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:11.140 [2024-07-25 11:11:18.033289] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:11.140 [2024-07-25 11:11:18.033328] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:29:11.140 BaseBdev4 00:29:11.140 11:11:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:29:11.399 spare_malloc 00:29:11.399 11:11:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:11.659 spare_delay 00:29:11.659 11:11:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
spare_delay -p spare 00:29:11.659 [2024-07-25 11:11:18.739970] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:11.659 [2024-07-25 11:11:18.740035] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:11.659 [2024-07-25 11:11:18.740062] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80 00:29:11.659 [2024-07-25 11:11:18.740080] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:11.659 [2024-07-25 11:11:18.742861] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:11.659 [2024-07-25 11:11:18.742899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:11.659 spare 00:29:11.659 11:11:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:29:11.918 [2024-07-25 11:11:18.968613] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:11.918 [2024-07-25 11:11:18.970920] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:11.918 [2024-07-25 11:11:18.970992] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:11.918 [2024-07-25 11:11:18.971060] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:11.918 [2024-07-25 11:11:18.971221] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:29:11.918 [2024-07-25 11:11:18.971242] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:29:11.918 [2024-07-25 11:11:18.971618] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:29:11.918 [2024-07-25 11:11:18.971881] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007780 00:29:11.918 [2024-07-25 11:11:18.971899] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:29:11.918 [2024-07-25 11:11:18.972113] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:11.918 11:11:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:29:11.918 11:11:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:11.918 11:11:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:11.918 11:11:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:11.918 11:11:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:11.918 11:11:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:11.918 11:11:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:11.918 11:11:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:11.918 11:11:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:11.918 11:11:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:11.918 11:11:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:11.918 11:11:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:12.487 11:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:12.487 "name": "raid_bdev1", 00:29:12.487 "uuid": "2ac6be9b-b18c-4fa0-b874-496b0b489291", 00:29:12.487 "strip_size_kb": 0, 00:29:12.487 "state": "online", 00:29:12.487 "raid_level": "raid1", 00:29:12.487 
"superblock": false, 00:29:12.487 "num_base_bdevs": 4, 00:29:12.487 "num_base_bdevs_discovered": 4, 00:29:12.487 "num_base_bdevs_operational": 4, 00:29:12.487 "base_bdevs_list": [ 00:29:12.487 { 00:29:12.487 "name": "BaseBdev1", 00:29:12.487 "uuid": "1eb48727-b49d-5257-8862-da0d73a515e6", 00:29:12.487 "is_configured": true, 00:29:12.487 "data_offset": 0, 00:29:12.487 "data_size": 65536 00:29:12.487 }, 00:29:12.487 { 00:29:12.487 "name": "BaseBdev2", 00:29:12.487 "uuid": "ceda9d4f-cd12-5c6e-9080-c0b8ad9180f7", 00:29:12.487 "is_configured": true, 00:29:12.487 "data_offset": 0, 00:29:12.487 "data_size": 65536 00:29:12.487 }, 00:29:12.487 { 00:29:12.487 "name": "BaseBdev3", 00:29:12.487 "uuid": "42df8338-c0be-5b98-b85b-10dfed0e4061", 00:29:12.487 "is_configured": true, 00:29:12.487 "data_offset": 0, 00:29:12.487 "data_size": 65536 00:29:12.487 }, 00:29:12.487 { 00:29:12.487 "name": "BaseBdev4", 00:29:12.487 "uuid": "e0c89059-4e43-520f-a2ac-c32b43f0e7af", 00:29:12.487 "is_configured": true, 00:29:12.487 "data_offset": 0, 00:29:12.487 "data_size": 65536 00:29:12.487 } 00:29:12.487 ] 00:29:12.487 }' 00:29:12.487 11:11:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:12.487 11:11:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:13.055 11:11:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:13.055 11:11:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:29:13.314 [2024-07-25 11:11:20.304594] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:13.314 11:11:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536 00:29:13.314 11:11:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:29:13.314 11:11:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:13.573 11:11:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # data_offset=0 00:29:13.573 11:11:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:29:13.573 11:11:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:29:13.573 11:11:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:29:13.573 11:11:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:29:13.573 11:11:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:13.573 11:11:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:29:13.573 11:11:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:13.573 11:11:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:13.573 11:11:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:13.573 11:11:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:29:13.573 11:11:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:13.573 11:11:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:13.573 11:11:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:29:13.833 [2024-07-25 11:11:20.761471] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:29:13.833 /dev/nbd0 00:29:13.833 11:11:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:13.833 11:11:20 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:13.833 11:11:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:29:13.833 11:11:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:29:13.833 11:11:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:13.833 11:11:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:13.833 11:11:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:29:13.833 11:11:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:29:13.833 11:11:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:13.833 11:11:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:13.833 11:11:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:13.833 1+0 records in 00:29:13.833 1+0 records out 00:29:13.833 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270008 s, 15.2 MB/s 00:29:13.833 11:11:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:13.833 11:11:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:29:13.833 11:11:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:13.833 11:11:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:13.833 11:11:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:29:13.833 11:11:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:13.833 11:11:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:29:13.833 11:11:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:29:13.833 11:11:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:29:13.833 11:11:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:29:20.399 65536+0 records in 00:29:20.399 65536+0 records out 00:29:20.399 33554432 bytes (34 MB, 32 MiB) copied, 6.56178 s, 5.1 MB/s 00:29:20.399 11:11:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:29:20.399 11:11:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:20.399 11:11:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:20.399 11:11:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:20.399 11:11:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:29:20.399 11:11:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:20.399 11:11:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:20.658 11:11:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:20.658 [2024-07-25 11:11:27.631547] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:20.658 11:11:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:20.658 11:11:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:20.658 11:11:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:20.658 11:11:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:20.658 11:11:27 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:20.658 11:11:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:29:20.658 11:11:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:29:20.658 11:11:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:29:20.918 [2024-07-25 11:11:27.848251] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:20.918 11:11:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:20.918 11:11:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:20.918 11:11:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:20.918 11:11:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:20.918 11:11:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:20.918 11:11:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:20.918 11:11:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:20.918 11:11:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:20.918 11:11:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:20.918 11:11:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:20.918 11:11:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:20.918 11:11:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:21.177 11:11:28 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:21.177 "name": "raid_bdev1", 00:29:21.177 "uuid": "2ac6be9b-b18c-4fa0-b874-496b0b489291", 00:29:21.177 "strip_size_kb": 0, 00:29:21.177 "state": "online", 00:29:21.177 "raid_level": "raid1", 00:29:21.177 "superblock": false, 00:29:21.177 "num_base_bdevs": 4, 00:29:21.177 "num_base_bdevs_discovered": 3, 00:29:21.177 "num_base_bdevs_operational": 3, 00:29:21.177 "base_bdevs_list": [ 00:29:21.177 { 00:29:21.177 "name": null, 00:29:21.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:21.177 "is_configured": false, 00:29:21.177 "data_offset": 0, 00:29:21.177 "data_size": 65536 00:29:21.177 }, 00:29:21.177 { 00:29:21.177 "name": "BaseBdev2", 00:29:21.177 "uuid": "ceda9d4f-cd12-5c6e-9080-c0b8ad9180f7", 00:29:21.177 "is_configured": true, 00:29:21.177 "data_offset": 0, 00:29:21.177 "data_size": 65536 00:29:21.177 }, 00:29:21.177 { 00:29:21.177 "name": "BaseBdev3", 00:29:21.177 "uuid": "42df8338-c0be-5b98-b85b-10dfed0e4061", 00:29:21.177 "is_configured": true, 00:29:21.177 "data_offset": 0, 00:29:21.177 "data_size": 65536 00:29:21.177 }, 00:29:21.177 { 00:29:21.177 "name": "BaseBdev4", 00:29:21.177 "uuid": "e0c89059-4e43-520f-a2ac-c32b43f0e7af", 00:29:21.177 "is_configured": true, 00:29:21.177 "data_offset": 0, 00:29:21.177 "data_size": 65536 00:29:21.177 } 00:29:21.177 ] 00:29:21.177 }' 00:29:21.177 11:11:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:21.177 11:11:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:21.745 11:11:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:22.004 [2024-07-25 11:11:28.879058] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:22.004 [2024-07-25 11:11:28.903923] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000d145a0 00:29:22.004 [2024-07-25 11:11:28.906294] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:22.004 11:11:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:22.941 11:11:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:22.941 11:11:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:22.941 11:11:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:22.941 11:11:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:22.941 11:11:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:22.941 11:11:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:22.941 11:11:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:23.200 11:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:23.200 "name": "raid_bdev1", 00:29:23.201 "uuid": "2ac6be9b-b18c-4fa0-b874-496b0b489291", 00:29:23.201 "strip_size_kb": 0, 00:29:23.201 "state": "online", 00:29:23.201 "raid_level": "raid1", 00:29:23.201 "superblock": false, 00:29:23.201 "num_base_bdevs": 4, 00:29:23.201 "num_base_bdevs_discovered": 4, 00:29:23.201 "num_base_bdevs_operational": 4, 00:29:23.201 "process": { 00:29:23.201 "type": "rebuild", 00:29:23.201 "target": "spare", 00:29:23.201 "progress": { 00:29:23.201 "blocks": 24576, 00:29:23.201 "percent": 37 00:29:23.201 } 00:29:23.201 }, 00:29:23.201 "base_bdevs_list": [ 00:29:23.201 { 00:29:23.201 "name": "spare", 00:29:23.201 "uuid": "a60ad944-f4c3-502b-87da-8045ece9d2c4", 00:29:23.201 "is_configured": true, 00:29:23.201 "data_offset": 0, 00:29:23.201 
"data_size": 65536 00:29:23.201 }, 00:29:23.201 { 00:29:23.201 "name": "BaseBdev2", 00:29:23.201 "uuid": "ceda9d4f-cd12-5c6e-9080-c0b8ad9180f7", 00:29:23.201 "is_configured": true, 00:29:23.201 "data_offset": 0, 00:29:23.201 "data_size": 65536 00:29:23.201 }, 00:29:23.201 { 00:29:23.201 "name": "BaseBdev3", 00:29:23.201 "uuid": "42df8338-c0be-5b98-b85b-10dfed0e4061", 00:29:23.201 "is_configured": true, 00:29:23.201 "data_offset": 0, 00:29:23.201 "data_size": 65536 00:29:23.201 }, 00:29:23.201 { 00:29:23.201 "name": "BaseBdev4", 00:29:23.201 "uuid": "e0c89059-4e43-520f-a2ac-c32b43f0e7af", 00:29:23.201 "is_configured": true, 00:29:23.201 "data_offset": 0, 00:29:23.201 "data_size": 65536 00:29:23.201 } 00:29:23.201 ] 00:29:23.201 }' 00:29:23.201 11:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:23.201 11:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:23.201 11:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:23.201 11:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:23.201 11:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:23.489 [2024-07-25 11:11:30.443692] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:23.489 [2024-07-25 11:11:30.519346] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:23.489 [2024-07-25 11:11:30.519410] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:23.489 [2024-07-25 11:11:30.519433] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:23.489 [2024-07-25 11:11:30.519449] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target 
bdev: No such device 00:29:23.489 11:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:23.489 11:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:23.489 11:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:23.489 11:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:23.489 11:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:23.489 11:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:23.489 11:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:23.489 11:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:23.489 11:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:23.489 11:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:23.489 11:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:23.489 11:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:23.747 11:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:23.747 "name": "raid_bdev1", 00:29:23.747 "uuid": "2ac6be9b-b18c-4fa0-b874-496b0b489291", 00:29:23.747 "strip_size_kb": 0, 00:29:23.747 "state": "online", 00:29:23.748 "raid_level": "raid1", 00:29:23.748 "superblock": false, 00:29:23.748 "num_base_bdevs": 4, 00:29:23.748 "num_base_bdevs_discovered": 3, 00:29:23.748 "num_base_bdevs_operational": 3, 00:29:23.748 "base_bdevs_list": [ 00:29:23.748 { 00:29:23.748 "name": null, 00:29:23.748 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:29:23.748 "is_configured": false, 00:29:23.748 "data_offset": 0, 00:29:23.748 "data_size": 65536 00:29:23.748 }, 00:29:23.748 { 00:29:23.748 "name": "BaseBdev2", 00:29:23.748 "uuid": "ceda9d4f-cd12-5c6e-9080-c0b8ad9180f7", 00:29:23.748 "is_configured": true, 00:29:23.748 "data_offset": 0, 00:29:23.748 "data_size": 65536 00:29:23.748 }, 00:29:23.748 { 00:29:23.748 "name": "BaseBdev3", 00:29:23.748 "uuid": "42df8338-c0be-5b98-b85b-10dfed0e4061", 00:29:23.748 "is_configured": true, 00:29:23.748 "data_offset": 0, 00:29:23.748 "data_size": 65536 00:29:23.748 }, 00:29:23.748 { 00:29:23.748 "name": "BaseBdev4", 00:29:23.748 "uuid": "e0c89059-4e43-520f-a2ac-c32b43f0e7af", 00:29:23.748 "is_configured": true, 00:29:23.748 "data_offset": 0, 00:29:23.748 "data_size": 65536 00:29:23.748 } 00:29:23.748 ] 00:29:23.748 }' 00:29:23.748 11:11:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:23.748 11:11:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:24.315 11:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:24.315 11:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:24.315 11:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:24.315 11:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:24.315 11:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:24.315 11:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:24.315 11:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:24.574 11:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:29:24.574 "name": "raid_bdev1", 00:29:24.574 "uuid": "2ac6be9b-b18c-4fa0-b874-496b0b489291", 00:29:24.574 "strip_size_kb": 0, 00:29:24.574 "state": "online", 00:29:24.574 "raid_level": "raid1", 00:29:24.574 "superblock": false, 00:29:24.574 "num_base_bdevs": 4, 00:29:24.574 "num_base_bdevs_discovered": 3, 00:29:24.574 "num_base_bdevs_operational": 3, 00:29:24.574 "base_bdevs_list": [ 00:29:24.574 { 00:29:24.574 "name": null, 00:29:24.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:24.574 "is_configured": false, 00:29:24.574 "data_offset": 0, 00:29:24.574 "data_size": 65536 00:29:24.574 }, 00:29:24.574 { 00:29:24.574 "name": "BaseBdev2", 00:29:24.574 "uuid": "ceda9d4f-cd12-5c6e-9080-c0b8ad9180f7", 00:29:24.574 "is_configured": true, 00:29:24.574 "data_offset": 0, 00:29:24.574 "data_size": 65536 00:29:24.574 }, 00:29:24.574 { 00:29:24.574 "name": "BaseBdev3", 00:29:24.574 "uuid": "42df8338-c0be-5b98-b85b-10dfed0e4061", 00:29:24.574 "is_configured": true, 00:29:24.574 "data_offset": 0, 00:29:24.574 "data_size": 65536 00:29:24.574 }, 00:29:24.574 { 00:29:24.574 "name": "BaseBdev4", 00:29:24.574 "uuid": "e0c89059-4e43-520f-a2ac-c32b43f0e7af", 00:29:24.574 "is_configured": true, 00:29:24.574 "data_offset": 0, 00:29:24.574 "data_size": 65536 00:29:24.574 } 00:29:24.574 ] 00:29:24.574 }' 00:29:24.574 11:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:24.574 11:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:24.574 11:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:24.574 11:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:24.574 11:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:24.833 [2024-07-25 
11:11:31.858502] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:24.833 [2024-07-25 11:11:31.880177] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d14670 00:29:24.833 [2024-07-25 11:11:31.882520] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:24.833 11:11:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@678 -- # sleep 1 00:29:26.211 11:11:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:26.211 11:11:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:26.211 11:11:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:26.211 11:11:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:26.211 11:11:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:26.211 11:11:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:26.211 11:11:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:26.211 11:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:26.211 "name": "raid_bdev1", 00:29:26.211 "uuid": "2ac6be9b-b18c-4fa0-b874-496b0b489291", 00:29:26.211 "strip_size_kb": 0, 00:29:26.211 "state": "online", 00:29:26.211 "raid_level": "raid1", 00:29:26.211 "superblock": false, 00:29:26.211 "num_base_bdevs": 4, 00:29:26.211 "num_base_bdevs_discovered": 4, 00:29:26.211 "num_base_bdevs_operational": 4, 00:29:26.211 "process": { 00:29:26.211 "type": "rebuild", 00:29:26.211 "target": "spare", 00:29:26.211 "progress": { 00:29:26.211 "blocks": 24576, 00:29:26.211 "percent": 37 00:29:26.211 } 00:29:26.211 }, 00:29:26.211 
"base_bdevs_list": [ 00:29:26.211 { 00:29:26.211 "name": "spare", 00:29:26.211 "uuid": "a60ad944-f4c3-502b-87da-8045ece9d2c4", 00:29:26.211 "is_configured": true, 00:29:26.211 "data_offset": 0, 00:29:26.211 "data_size": 65536 00:29:26.211 }, 00:29:26.211 { 00:29:26.211 "name": "BaseBdev2", 00:29:26.211 "uuid": "ceda9d4f-cd12-5c6e-9080-c0b8ad9180f7", 00:29:26.211 "is_configured": true, 00:29:26.211 "data_offset": 0, 00:29:26.211 "data_size": 65536 00:29:26.211 }, 00:29:26.211 { 00:29:26.211 "name": "BaseBdev3", 00:29:26.211 "uuid": "42df8338-c0be-5b98-b85b-10dfed0e4061", 00:29:26.211 "is_configured": true, 00:29:26.211 "data_offset": 0, 00:29:26.211 "data_size": 65536 00:29:26.211 }, 00:29:26.211 { 00:29:26.211 "name": "BaseBdev4", 00:29:26.211 "uuid": "e0c89059-4e43-520f-a2ac-c32b43f0e7af", 00:29:26.211 "is_configured": true, 00:29:26.211 "data_offset": 0, 00:29:26.211 "data_size": 65536 00:29:26.211 } 00:29:26.211 ] 00:29:26.211 }' 00:29:26.211 11:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:26.211 11:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:26.211 11:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:26.211 11:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:26.211 11:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:29:26.211 11:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:29:26.211 11:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:29:26.211 11:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # '[' 4 -gt 2 ']' 00:29:26.211 11:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_remove_base_bdev BaseBdev2 00:29:26.469 [2024-07-25 11:11:33.428399] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:26.469 [2024-07-25 11:11:33.495440] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d14670 00:29:26.469 11:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@713 -- # base_bdevs[1]= 00:29:26.469 11:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # (( num_base_bdevs_operational-- )) 00:29:26.469 11:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@717 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:26.469 11:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:26.469 11:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:26.469 11:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:26.469 11:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:26.469 11:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:26.469 11:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:26.728 11:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:26.728 "name": "raid_bdev1", 00:29:26.728 "uuid": "2ac6be9b-b18c-4fa0-b874-496b0b489291", 00:29:26.728 "strip_size_kb": 0, 00:29:26.728 "state": "online", 00:29:26.728 "raid_level": "raid1", 00:29:26.728 "superblock": false, 00:29:26.728 "num_base_bdevs": 4, 00:29:26.728 "num_base_bdevs_discovered": 3, 00:29:26.728 "num_base_bdevs_operational": 3, 00:29:26.728 "process": { 00:29:26.728 "type": "rebuild", 00:29:26.728 "target": "spare", 00:29:26.728 "progress": { 00:29:26.728 "blocks": 36864, 00:29:26.728 "percent": 56 00:29:26.728 } 
00:29:26.728 }, 00:29:26.728 "base_bdevs_list": [ 00:29:26.728 { 00:29:26.728 "name": "spare", 00:29:26.728 "uuid": "a60ad944-f4c3-502b-87da-8045ece9d2c4", 00:29:26.728 "is_configured": true, 00:29:26.728 "data_offset": 0, 00:29:26.728 "data_size": 65536 00:29:26.728 }, 00:29:26.728 { 00:29:26.728 "name": null, 00:29:26.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:26.728 "is_configured": false, 00:29:26.728 "data_offset": 0, 00:29:26.728 "data_size": 65536 00:29:26.728 }, 00:29:26.728 { 00:29:26.728 "name": "BaseBdev3", 00:29:26.728 "uuid": "42df8338-c0be-5b98-b85b-10dfed0e4061", 00:29:26.728 "is_configured": true, 00:29:26.728 "data_offset": 0, 00:29:26.728 "data_size": 65536 00:29:26.728 }, 00:29:26.728 { 00:29:26.728 "name": "BaseBdev4", 00:29:26.728 "uuid": "e0c89059-4e43-520f-a2ac-c32b43f0e7af", 00:29:26.728 "is_configured": true, 00:29:26.728 "data_offset": 0, 00:29:26.728 "data_size": 65536 00:29:26.728 } 00:29:26.728 ] 00:29:26.728 }' 00:29:26.728 11:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:26.728 11:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:26.728 11:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:26.728 11:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:26.728 11:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # local timeout=977 00:29:26.728 11:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:29:26.728 11:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:26.728 11:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:26.728 11:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:26.728 11:11:33 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:26.728 11:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:26.728 11:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:26.728 11:11:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:26.987 11:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:26.987 "name": "raid_bdev1", 00:29:26.987 "uuid": "2ac6be9b-b18c-4fa0-b874-496b0b489291", 00:29:26.987 "strip_size_kb": 0, 00:29:26.987 "state": "online", 00:29:26.987 "raid_level": "raid1", 00:29:26.987 "superblock": false, 00:29:26.987 "num_base_bdevs": 4, 00:29:26.987 "num_base_bdevs_discovered": 3, 00:29:26.987 "num_base_bdevs_operational": 3, 00:29:26.987 "process": { 00:29:26.987 "type": "rebuild", 00:29:26.987 "target": "spare", 00:29:26.987 "progress": { 00:29:26.987 "blocks": 43008, 00:29:26.987 "percent": 65 00:29:26.987 } 00:29:26.987 }, 00:29:26.987 "base_bdevs_list": [ 00:29:26.987 { 00:29:26.987 "name": "spare", 00:29:26.987 "uuid": "a60ad944-f4c3-502b-87da-8045ece9d2c4", 00:29:26.987 "is_configured": true, 00:29:26.987 "data_offset": 0, 00:29:26.987 "data_size": 65536 00:29:26.987 }, 00:29:26.987 { 00:29:26.987 "name": null, 00:29:26.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:26.987 "is_configured": false, 00:29:26.987 "data_offset": 0, 00:29:26.987 "data_size": 65536 00:29:26.987 }, 00:29:26.987 { 00:29:26.987 "name": "BaseBdev3", 00:29:26.987 "uuid": "42df8338-c0be-5b98-b85b-10dfed0e4061", 00:29:26.987 "is_configured": true, 00:29:26.987 "data_offset": 0, 00:29:26.987 "data_size": 65536 00:29:26.987 }, 00:29:26.987 { 00:29:26.987 "name": "BaseBdev4", 00:29:26.987 "uuid": "e0c89059-4e43-520f-a2ac-c32b43f0e7af", 00:29:26.987 "is_configured": true, 
00:29:26.987 "data_offset": 0, 00:29:26.987 "data_size": 65536 00:29:26.987 } 00:29:26.987 ] 00:29:26.987 }' 00:29:26.987 11:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:27.245 11:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:27.245 11:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:27.245 11:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:27.245 11:11:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@726 -- # sleep 1 00:29:28.182 [2024-07-25 11:11:35.108414] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:28.182 [2024-07-25 11:11:35.108487] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:28.182 [2024-07-25 11:11:35.108540] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:28.182 11:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:29:28.182 11:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:28.182 11:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:28.182 11:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:28.182 11:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:28.182 11:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:28.182 11:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:28.182 11:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:29:28.441 11:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:28.441 "name": "raid_bdev1", 00:29:28.441 "uuid": "2ac6be9b-b18c-4fa0-b874-496b0b489291", 00:29:28.441 "strip_size_kb": 0, 00:29:28.441 "state": "online", 00:29:28.441 "raid_level": "raid1", 00:29:28.441 "superblock": false, 00:29:28.441 "num_base_bdevs": 4, 00:29:28.441 "num_base_bdevs_discovered": 3, 00:29:28.441 "num_base_bdevs_operational": 3, 00:29:28.441 "base_bdevs_list": [ 00:29:28.441 { 00:29:28.441 "name": "spare", 00:29:28.441 "uuid": "a60ad944-f4c3-502b-87da-8045ece9d2c4", 00:29:28.441 "is_configured": true, 00:29:28.441 "data_offset": 0, 00:29:28.441 "data_size": 65536 00:29:28.441 }, 00:29:28.441 { 00:29:28.441 "name": null, 00:29:28.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:28.441 "is_configured": false, 00:29:28.441 "data_offset": 0, 00:29:28.441 "data_size": 65536 00:29:28.441 }, 00:29:28.441 { 00:29:28.441 "name": "BaseBdev3", 00:29:28.441 "uuid": "42df8338-c0be-5b98-b85b-10dfed0e4061", 00:29:28.441 "is_configured": true, 00:29:28.441 "data_offset": 0, 00:29:28.441 "data_size": 65536 00:29:28.441 }, 00:29:28.441 { 00:29:28.441 "name": "BaseBdev4", 00:29:28.441 "uuid": "e0c89059-4e43-520f-a2ac-c32b43f0e7af", 00:29:28.441 "is_configured": true, 00:29:28.441 "data_offset": 0, 00:29:28.441 "data_size": 65536 00:29:28.441 } 00:29:28.441 ] 00:29:28.441 }' 00:29:28.441 11:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:28.441 11:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:28.441 11:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:28.441 11:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:29:28.441 11:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@724 -- # break 00:29:28.441 11:11:35 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:28.441 11:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:28.441 11:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:28.441 11:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:28.441 11:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:28.441 11:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:28.441 11:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:28.700 11:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:28.700 "name": "raid_bdev1", 00:29:28.700 "uuid": "2ac6be9b-b18c-4fa0-b874-496b0b489291", 00:29:28.700 "strip_size_kb": 0, 00:29:28.700 "state": "online", 00:29:28.700 "raid_level": "raid1", 00:29:28.700 "superblock": false, 00:29:28.700 "num_base_bdevs": 4, 00:29:28.700 "num_base_bdevs_discovered": 3, 00:29:28.700 "num_base_bdevs_operational": 3, 00:29:28.700 "base_bdevs_list": [ 00:29:28.700 { 00:29:28.700 "name": "spare", 00:29:28.700 "uuid": "a60ad944-f4c3-502b-87da-8045ece9d2c4", 00:29:28.700 "is_configured": true, 00:29:28.700 "data_offset": 0, 00:29:28.700 "data_size": 65536 00:29:28.700 }, 00:29:28.700 { 00:29:28.700 "name": null, 00:29:28.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:28.700 "is_configured": false, 00:29:28.700 "data_offset": 0, 00:29:28.700 "data_size": 65536 00:29:28.700 }, 00:29:28.700 { 00:29:28.700 "name": "BaseBdev3", 00:29:28.700 "uuid": "42df8338-c0be-5b98-b85b-10dfed0e4061", 00:29:28.700 "is_configured": true, 00:29:28.700 "data_offset": 0, 00:29:28.700 "data_size": 65536 00:29:28.700 }, 00:29:28.700 { 00:29:28.700 "name": 
"BaseBdev4", 00:29:28.700 "uuid": "e0c89059-4e43-520f-a2ac-c32b43f0e7af", 00:29:28.700 "is_configured": true, 00:29:28.700 "data_offset": 0, 00:29:28.700 "data_size": 65536 00:29:28.700 } 00:29:28.700 ] 00:29:28.700 }' 00:29:28.700 11:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:28.700 11:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:28.700 11:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:28.700 11:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:28.700 11:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:28.700 11:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:28.700 11:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:28.700 11:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:28.700 11:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:28.700 11:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:28.700 11:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:28.700 11:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:28.700 11:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:28.700 11:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:28.700 11:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:28.700 11:11:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:29:28.959 11:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:28.959 "name": "raid_bdev1", 00:29:28.959 "uuid": "2ac6be9b-b18c-4fa0-b874-496b0b489291", 00:29:28.959 "strip_size_kb": 0, 00:29:28.959 "state": "online", 00:29:28.959 "raid_level": "raid1", 00:29:28.959 "superblock": false, 00:29:28.959 "num_base_bdevs": 4, 00:29:28.959 "num_base_bdevs_discovered": 3, 00:29:28.959 "num_base_bdevs_operational": 3, 00:29:28.959 "base_bdevs_list": [ 00:29:28.959 { 00:29:28.959 "name": "spare", 00:29:28.959 "uuid": "a60ad944-f4c3-502b-87da-8045ece9d2c4", 00:29:28.959 "is_configured": true, 00:29:28.959 "data_offset": 0, 00:29:28.959 "data_size": 65536 00:29:28.959 }, 00:29:28.959 { 00:29:28.959 "name": null, 00:29:28.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:28.959 "is_configured": false, 00:29:28.959 "data_offset": 0, 00:29:28.959 "data_size": 65536 00:29:28.959 }, 00:29:28.959 { 00:29:28.959 "name": "BaseBdev3", 00:29:28.959 "uuid": "42df8338-c0be-5b98-b85b-10dfed0e4061", 00:29:28.959 "is_configured": true, 00:29:28.959 "data_offset": 0, 00:29:28.959 "data_size": 65536 00:29:28.959 }, 00:29:28.959 { 00:29:28.959 "name": "BaseBdev4", 00:29:28.959 "uuid": "e0c89059-4e43-520f-a2ac-c32b43f0e7af", 00:29:28.959 "is_configured": true, 00:29:28.959 "data_offset": 0, 00:29:28.959 "data_size": 65536 00:29:28.959 } 00:29:28.959 ] 00:29:28.959 }' 00:29:28.959 11:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:28.959 11:11:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:29.527 11:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:29.786 [2024-07-25 11:11:36.780712] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:29.786 [2024-07-25 11:11:36.780748] 
bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:29.786 [2024-07-25 11:11:36.780831] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:29.786 [2024-07-25 11:11:36.780926] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:29.786 [2024-07-25 11:11:36.780943] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:29:29.786 11:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:29.786 11:11:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # jq length 00:29:30.045 11:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:29:30.045 11:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:29:30.045 11:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:29:30.045 11:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:29:30.045 11:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:30.045 11:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:29:30.045 11:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:30.045 11:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:30.045 11:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:30.045 11:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:29:30.045 11:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:30.045 11:11:37 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:30.045 11:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:29:30.303 /dev/nbd0 00:29:30.303 11:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:30.303 11:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:30.303 11:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:29:30.303 11:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:29:30.303 11:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:30.303 11:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:30.303 11:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:29:30.303 11:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:29:30.303 11:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:30.303 11:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:30.303 11:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:30.303 1+0 records in 00:29:30.303 1+0 records out 00:29:30.303 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024074 s, 17.0 MB/s 00:29:30.303 11:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:30.303 11:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:29:30.303 11:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # 
rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:30.303 11:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:30.303 11:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:29:30.303 11:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:30.303 11:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:30.304 11:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:29:30.561 /dev/nbd1 00:29:30.561 11:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:30.561 11:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:30.561 11:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:29:30.561 11:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:29:30.561 11:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:30.561 11:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:30.561 11:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:29:30.561 11:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:29:30.561 11:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:30.561 11:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:30.561 11:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:30.561 1+0 records in 00:29:30.561 1+0 records out 00:29:30.561 4096 bytes (4.1 kB, 4.0 
KiB) copied, 0.000344893 s, 11.9 MB/s 00:29:30.562 11:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:30.562 11:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:29:30.562 11:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:30.562 11:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:30.562 11:11:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:29:30.562 11:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:30.562 11:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:30.562 11:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@753 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:29:30.821 11:11:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:29:30.821 11:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:30.821 11:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:30.821 11:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:30.821 11:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:29:30.821 11:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:30.821 11:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:30.821 11:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:31.080 11:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd0 00:29:31.080 11:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:31.080 11:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:31.080 11:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:31.080 11:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:31.080 11:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:29:31.080 11:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:29:31.080 11:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:31.080 11:11:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:31.080 11:11:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:31.339 11:11:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:31.339 11:11:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:31.339 11:11:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:31.339 11:11:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:31.339 11:11:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:31.339 11:11:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:29:31.339 11:11:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:29:31.339 11:11:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:29:31.339 11:11:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@798 -- # killprocess 3715362 00:29:31.339 11:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 3715362 ']' 00:29:31.339 11:11:38 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 3715362 00:29:31.339 11:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:29:31.339 11:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:31.339 11:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3715362 00:29:31.339 11:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:31.339 11:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:31.339 11:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3715362' 00:29:31.339 killing process with pid 3715362 00:29:31.339 11:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 3715362 00:29:31.339 Received shutdown signal, test time was about 60.000000 seconds 00:29:31.339 00:29:31.339 Latency(us) 00:29:31.339 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:31.339 =================================================================================================================== 00:29:31.339 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:31.339 [2024-07-25 11:11:38.269689] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:31.339 11:11:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 3715362 00:29:31.906 [2024-07-25 11:11:38.842971] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:33.810 11:11:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@800 -- # return 0 00:29:33.810 00:29:33.810 real 0m25.620s 00:29:33.810 user 0m34.372s 00:29:33.810 sys 0m4.394s 00:29:33.810 11:11:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:33.810 11:11:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set 
+x 00:29:33.810 ************************************ 00:29:33.810 END TEST raid_rebuild_test 00:29:33.810 ************************************ 00:29:33.810 11:11:40 bdev_raid -- bdev/bdev_raid.sh@958 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:29:33.810 11:11:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:29:33.810 11:11:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:33.810 11:11:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:33.810 ************************************ 00:29:33.810 START TEST raid_rebuild_test_sb 00:29:33.810 ************************************ 00:29:33.810 11:11:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:29:33.810 11:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:29:33.810 11:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:29:33.810 11:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:29:33.810 11:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:29:33.810 11:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # local verify=true 00:29:33.810 11:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:29:33.810 11:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:33.810 11:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:29:33.810 11:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:29:33.810 11:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:33.810 11:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:29:33.810 11:11:40 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@589 -- # (( i++ )) 00:29:33.810 11:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:33.810 11:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:29:33.810 11:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:29:33.810 11:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:33.810 11:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4 00:29:33.810 11:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:29:33.810 11:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:29:33.810 11:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:29:33.810 11:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:29:33.810 11:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:29:33.810 11:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # local strip_size 00:29:33.810 11:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # local create_arg 00:29:33.810 11:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:29:33.810 11:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@594 -- # local data_offset 00:29:33.810 11:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:29:33.810 11:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:29:33.810 11:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:29:33.810 11:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:29:33.810 11:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- 
# raid_pid=3719771 00:29:33.810 11:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # waitforlisten 3719771 /var/tmp/spdk-raid.sock 00:29:33.810 11:11:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:33.810 11:11:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 3719771 ']' 00:29:33.810 11:11:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:33.811 11:11:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:33.811 11:11:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:33.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:33.811 11:11:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:33.811 11:11:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:33.811 [2024-07-25 11:11:40.763170] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:29:33.811 [2024-07-25 11:11:40.763291] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3719771 ] 00:29:33.811 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:33.811 Zero copy mechanism will not be used. 
00:29:33.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.811 EAL: Requested device 0000:3d:01.0 cannot be used 00:29:33.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.811 EAL: Requested device 0000:3d:01.1 cannot be used 00:29:33.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.811 EAL: Requested device 0000:3d:01.2 cannot be used 00:29:33.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.811 EAL: Requested device 0000:3d:01.3 cannot be used 00:29:33.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.811 EAL: Requested device 0000:3d:01.4 cannot be used 00:29:33.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.811 EAL: Requested device 0000:3d:01.5 cannot be used 00:29:33.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.811 EAL: Requested device 0000:3d:01.6 cannot be used 00:29:33.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.811 EAL: Requested device 0000:3d:01.7 cannot be used 00:29:33.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.811 EAL: Requested device 0000:3d:02.0 cannot be used 00:29:33.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.811 EAL: Requested device 0000:3d:02.1 cannot be used 00:29:33.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.811 EAL: Requested device 0000:3d:02.2 cannot be used 00:29:33.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.811 EAL: Requested device 0000:3d:02.3 cannot be used 00:29:33.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.811 EAL: Requested device 0000:3d:02.4 cannot be used 00:29:33.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.811 EAL: Requested device 0000:3d:02.5 cannot be used 00:29:33.811 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.811 EAL: Requested device 0000:3d:02.6 cannot be used 00:29:33.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.811 EAL: Requested device 0000:3d:02.7 cannot be used 00:29:33.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.811 EAL: Requested device 0000:3f:01.0 cannot be used 00:29:33.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.811 EAL: Requested device 0000:3f:01.1 cannot be used 00:29:33.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.811 EAL: Requested device 0000:3f:01.2 cannot be used 00:29:33.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.811 EAL: Requested device 0000:3f:01.3 cannot be used 00:29:33.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.811 EAL: Requested device 0000:3f:01.4 cannot be used 00:29:33.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.811 EAL: Requested device 0000:3f:01.5 cannot be used 00:29:33.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.811 EAL: Requested device 0000:3f:01.6 cannot be used 00:29:33.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.811 EAL: Requested device 0000:3f:01.7 cannot be used 00:29:33.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.811 EAL: Requested device 0000:3f:02.0 cannot be used 00:29:33.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.811 EAL: Requested device 0000:3f:02.1 cannot be used 00:29:33.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.811 EAL: Requested device 0000:3f:02.2 cannot be used 00:29:33.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.811 EAL: Requested device 0000:3f:02.3 cannot be used 00:29:33.811 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.811 EAL: Requested device 0000:3f:02.4 cannot be used 00:29:33.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.811 EAL: Requested device 0000:3f:02.5 cannot be used 00:29:33.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.811 EAL: Requested device 0000:3f:02.6 cannot be used 00:29:33.811 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:29:33.811 EAL: Requested device 0000:3f:02.7 cannot be used 00:29:34.070 [2024-07-25 11:11:40.990233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.329 [2024-07-25 11:11:41.249074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.588 [2024-07-25 11:11:41.558070] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:34.588 [2024-07-25 11:11:41.558110] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:34.847 11:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:34.847 11:11:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:29:34.847 11:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:29:34.847 11:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:34.847 BaseBdev1_malloc 00:29:34.847 11:11:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:35.106 [2024-07-25 11:11:42.103630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:35.106 [2024-07-25 11:11:42.103695] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:29:35.106 [2024-07-25 11:11:42.103726] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:29:35.106 [2024-07-25 11:11:42.103744] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:35.106 [2024-07-25 11:11:42.106503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:35.106 [2024-07-25 11:11:42.106543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:35.106 BaseBdev1 00:29:35.106 11:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:29:35.106 11:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:35.365 BaseBdev2_malloc 00:29:35.365 11:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:35.624 [2024-07-25 11:11:42.495871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:35.624 [2024-07-25 11:11:42.495934] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:35.624 [2024-07-25 11:11:42.495961] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:29:35.624 [2024-07-25 11:11:42.495982] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:35.624 [2024-07-25 11:11:42.498701] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:35.624 [2024-07-25 11:11:42.498737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:35.624 BaseBdev2 00:29:35.624 11:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in 
"${base_bdevs[@]}" 00:29:35.624 11:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:35.624 BaseBdev3_malloc 00:29:35.624 11:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:29:35.883 [2024-07-25 11:11:42.881205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:29:35.883 [2024-07-25 11:11:42.881270] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:35.883 [2024-07-25 11:11:42.881301] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:29:35.883 [2024-07-25 11:11:42.881319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:35.883 [2024-07-25 11:11:42.883969] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:35.883 [2024-07-25 11:11:42.884004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:35.883 BaseBdev3 00:29:35.883 11:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:29:35.883 11:11:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:29:36.142 BaseBdev4_malloc 00:29:36.142 11:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:29:36.402 [2024-07-25 11:11:43.265968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:29:36.402 [2024-07-25 
11:11:43.266027] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:36.402 [2024-07-25 11:11:43.266051] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041a80 00:29:36.402 [2024-07-25 11:11:43.266069] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:36.402 [2024-07-25 11:11:43.268781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:36.402 [2024-07-25 11:11:43.268815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:29:36.402 BaseBdev4 00:29:36.402 11:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:29:36.402 spare_malloc 00:29:36.402 11:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:36.661 spare_delay 00:29:36.661 11:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:36.920 [2024-07-25 11:11:43.823762] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:36.920 [2024-07-25 11:11:43.823817] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:36.920 [2024-07-25 11:11:43.823843] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80 00:29:36.920 [2024-07-25 11:11:43.823861] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:36.920 [2024-07-25 11:11:43.826575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:36.920 [2024-07-25 11:11:43.826611] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:36.920 spare 00:29:36.920 11:11:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:29:36.920 [2024-07-25 11:11:43.988287] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:36.920 [2024-07-25 11:11:43.990630] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:36.920 [2024-07-25 11:11:43.990703] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:36.920 [2024-07-25 11:11:43.990771] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:36.920 [2024-07-25 11:11:43.991015] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:29:36.920 [2024-07-25 11:11:43.991035] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:36.920 [2024-07-25 11:11:43.991414] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:29:36.920 [2024-07-25 11:11:43.991693] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:29:36.920 [2024-07-25 11:11:43.991708] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:29:36.920 [2024-07-25 11:11:43.991918] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:36.920 11:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:29:36.920 11:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:36.920 11:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
00:29:36.920 11:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:36.920 11:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:36.920 11:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:36.920 11:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:36.920 11:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:36.920 11:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:36.920 11:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:36.920 11:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:36.920 11:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:37.179 11:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:37.179 "name": "raid_bdev1", 00:29:37.179 "uuid": "95f61a07-0da6-4534-a7d7-6f213d8050f3", 00:29:37.179 "strip_size_kb": 0, 00:29:37.179 "state": "online", 00:29:37.179 "raid_level": "raid1", 00:29:37.179 "superblock": true, 00:29:37.179 "num_base_bdevs": 4, 00:29:37.179 "num_base_bdevs_discovered": 4, 00:29:37.179 "num_base_bdevs_operational": 4, 00:29:37.179 "base_bdevs_list": [ 00:29:37.179 { 00:29:37.179 "name": "BaseBdev1", 00:29:37.179 "uuid": "b73e5c74-fb2c-5bfa-a977-df5281d65fc2", 00:29:37.179 "is_configured": true, 00:29:37.179 "data_offset": 2048, 00:29:37.179 "data_size": 63488 00:29:37.179 }, 00:29:37.179 { 00:29:37.179 "name": "BaseBdev2", 00:29:37.179 "uuid": "223ecf8c-e9fb-5ba0-97b9-9b066c10cde9", 00:29:37.179 "is_configured": true, 00:29:37.179 "data_offset": 2048, 00:29:37.179 "data_size": 63488 
00:29:37.179 }, 00:29:37.179 { 00:29:37.179 "name": "BaseBdev3", 00:29:37.179 "uuid": "1831640f-5016-587b-8264-8ad950043a8a", 00:29:37.179 "is_configured": true, 00:29:37.179 "data_offset": 2048, 00:29:37.179 "data_size": 63488 00:29:37.179 }, 00:29:37.179 { 00:29:37.179 "name": "BaseBdev4", 00:29:37.179 "uuid": "e60bb6b1-23f7-5c5a-8013-348888bf1482", 00:29:37.179 "is_configured": true, 00:29:37.179 "data_offset": 2048, 00:29:37.179 "data_size": 63488 00:29:37.179 } 00:29:37.179 ] 00:29:37.179 }' 00:29:37.179 11:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:37.179 11:11:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:37.747 11:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:29:37.747 11:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:38.006 [2024-07-25 11:11:44.975351] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:38.006 11:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488 00:29:38.006 11:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:38.006 11:11:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:38.266 11:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:29:38.266 11:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:29:38.266 11:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:29:38.266 11:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:29:38.266 11:11:45 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:29:38.266 11:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:38.266 11:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:29:38.266 11:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:38.266 11:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:38.266 11:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:38.266 11:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:29:38.266 11:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:38.266 11:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:38.266 11:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:29:38.866 [2024-07-25 11:11:45.705012] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:29:38.866 /dev/nbd0 00:29:38.866 11:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:38.866 11:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:38.866 11:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:29:38.866 11:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:29:38.866 11:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:38.866 11:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:38.866 11:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 
-- # grep -q -w nbd0 /proc/partitions 00:29:38.866 11:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:29:38.866 11:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:38.866 11:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:38.866 11:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:38.866 1+0 records in 00:29:38.866 1+0 records out 00:29:38.866 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029079 s, 14.1 MB/s 00:29:38.866 11:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:38.866 11:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:29:38.866 11:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:38.866 11:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:38.866 11:11:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:29:38.866 11:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:38.866 11:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:38.866 11:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:29:38.866 11:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:29:38.866 11:11:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:29:45.437 63488+0 records in 00:29:45.437 63488+0 records out 00:29:45.437 32505856 bytes (33 MB, 31 MiB) copied, 6.46716 s, 5.0 
MB/s 00:29:45.437 11:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:29:45.437 11:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:45.437 11:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:45.437 11:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:45.437 11:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:29:45.437 11:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:45.437 11:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:45.696 [2024-07-25 11:11:52.755323] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:45.696 11:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:45.696 11:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:45.696 11:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:45.696 11:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:45.696 11:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:45.696 11:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:45.696 11:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:29:45.696 11:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:29:45.697 11:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:29:45.956 
[2024-07-25 11:11:52.978613] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:45.956 11:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:45.956 11:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:45.956 11:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:45.956 11:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:45.956 11:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:45.956 11:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:45.956 11:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:45.956 11:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:45.956 11:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:45.956 11:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:45.956 11:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:45.956 11:11:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:46.216 11:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:46.216 "name": "raid_bdev1", 00:29:46.216 "uuid": "95f61a07-0da6-4534-a7d7-6f213d8050f3", 00:29:46.216 "strip_size_kb": 0, 00:29:46.216 "state": "online", 00:29:46.216 "raid_level": "raid1", 00:29:46.216 "superblock": true, 00:29:46.216 "num_base_bdevs": 4, 00:29:46.216 "num_base_bdevs_discovered": 3, 00:29:46.216 "num_base_bdevs_operational": 3, 00:29:46.216 
"base_bdevs_list": [ 00:29:46.216 { 00:29:46.216 "name": null, 00:29:46.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:46.216 "is_configured": false, 00:29:46.216 "data_offset": 2048, 00:29:46.216 "data_size": 63488 00:29:46.216 }, 00:29:46.216 { 00:29:46.216 "name": "BaseBdev2", 00:29:46.216 "uuid": "223ecf8c-e9fb-5ba0-97b9-9b066c10cde9", 00:29:46.216 "is_configured": true, 00:29:46.216 "data_offset": 2048, 00:29:46.216 "data_size": 63488 00:29:46.216 }, 00:29:46.216 { 00:29:46.216 "name": "BaseBdev3", 00:29:46.216 "uuid": "1831640f-5016-587b-8264-8ad950043a8a", 00:29:46.216 "is_configured": true, 00:29:46.216 "data_offset": 2048, 00:29:46.216 "data_size": 63488 00:29:46.216 }, 00:29:46.216 { 00:29:46.216 "name": "BaseBdev4", 00:29:46.216 "uuid": "e60bb6b1-23f7-5c5a-8013-348888bf1482", 00:29:46.216 "is_configured": true, 00:29:46.216 "data_offset": 2048, 00:29:46.216 "data_size": 63488 00:29:46.216 } 00:29:46.216 ] 00:29:46.216 }' 00:29:46.216 11:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:46.216 11:11:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:46.782 11:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:47.041 [2024-07-25 11:11:53.957270] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:47.041 [2024-07-25 11:11:53.983571] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caad40 00:29:47.041 [2024-07-25 11:11:53.985961] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:47.041 11:11:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:47.977 11:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:47.977 11:11:55 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:47.977 11:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:47.977 11:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:47.977 11:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:47.977 11:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:47.977 11:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:48.237 11:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:48.237 "name": "raid_bdev1", 00:29:48.237 "uuid": "95f61a07-0da6-4534-a7d7-6f213d8050f3", 00:29:48.237 "strip_size_kb": 0, 00:29:48.237 "state": "online", 00:29:48.237 "raid_level": "raid1", 00:29:48.237 "superblock": true, 00:29:48.237 "num_base_bdevs": 4, 00:29:48.237 "num_base_bdevs_discovered": 4, 00:29:48.237 "num_base_bdevs_operational": 4, 00:29:48.237 "process": { 00:29:48.237 "type": "rebuild", 00:29:48.237 "target": "spare", 00:29:48.237 "progress": { 00:29:48.237 "blocks": 24576, 00:29:48.237 "percent": 38 00:29:48.237 } 00:29:48.237 }, 00:29:48.237 "base_bdevs_list": [ 00:29:48.237 { 00:29:48.237 "name": "spare", 00:29:48.237 "uuid": "de496aef-359e-5dd1-9f36-74b531131eff", 00:29:48.237 "is_configured": true, 00:29:48.237 "data_offset": 2048, 00:29:48.237 "data_size": 63488 00:29:48.237 }, 00:29:48.237 { 00:29:48.237 "name": "BaseBdev2", 00:29:48.237 "uuid": "223ecf8c-e9fb-5ba0-97b9-9b066c10cde9", 00:29:48.237 "is_configured": true, 00:29:48.237 "data_offset": 2048, 00:29:48.237 "data_size": 63488 00:29:48.237 }, 00:29:48.237 { 00:29:48.237 "name": "BaseBdev3", 00:29:48.237 "uuid": "1831640f-5016-587b-8264-8ad950043a8a", 
00:29:48.237 "is_configured": true, 00:29:48.237 "data_offset": 2048, 00:29:48.237 "data_size": 63488 00:29:48.237 }, 00:29:48.237 { 00:29:48.237 "name": "BaseBdev4", 00:29:48.237 "uuid": "e60bb6b1-23f7-5c5a-8013-348888bf1482", 00:29:48.237 "is_configured": true, 00:29:48.237 "data_offset": 2048, 00:29:48.237 "data_size": 63488 00:29:48.237 } 00:29:48.237 ] 00:29:48.237 }' 00:29:48.237 11:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:48.237 11:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:48.237 11:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:48.237 11:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:48.237 11:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:48.496 [2024-07-25 11:11:55.523369] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:48.497 [2024-07-25 11:11:55.598978] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:48.497 [2024-07-25 11:11:55.599046] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:48.497 [2024-07-25 11:11:55.599070] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:48.497 [2024-07-25 11:11:55.599085] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:48.756 11:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:48.756 11:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:48.756 11:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # 
local expected_state=online 00:29:48.756 11:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:48.756 11:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:48.756 11:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:48.756 11:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:48.756 11:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:48.756 11:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:48.756 11:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:48.756 11:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:48.756 11:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:48.756 11:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:48.756 "name": "raid_bdev1", 00:29:48.756 "uuid": "95f61a07-0da6-4534-a7d7-6f213d8050f3", 00:29:48.756 "strip_size_kb": 0, 00:29:48.756 "state": "online", 00:29:48.756 "raid_level": "raid1", 00:29:48.756 "superblock": true, 00:29:48.756 "num_base_bdevs": 4, 00:29:48.756 "num_base_bdevs_discovered": 3, 00:29:48.756 "num_base_bdevs_operational": 3, 00:29:48.756 "base_bdevs_list": [ 00:29:48.756 { 00:29:48.756 "name": null, 00:29:48.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:48.756 "is_configured": false, 00:29:48.756 "data_offset": 2048, 00:29:48.756 "data_size": 63488 00:29:48.756 }, 00:29:48.756 { 00:29:48.756 "name": "BaseBdev2", 00:29:48.756 "uuid": "223ecf8c-e9fb-5ba0-97b9-9b066c10cde9", 00:29:48.756 "is_configured": true, 00:29:48.756 "data_offset": 2048, 00:29:48.756 
"data_size": 63488 00:29:48.756 }, 00:29:48.756 { 00:29:48.756 "name": "BaseBdev3", 00:29:48.756 "uuid": "1831640f-5016-587b-8264-8ad950043a8a", 00:29:48.756 "is_configured": true, 00:29:48.756 "data_offset": 2048, 00:29:48.756 "data_size": 63488 00:29:48.756 }, 00:29:48.756 { 00:29:48.756 "name": "BaseBdev4", 00:29:48.756 "uuid": "e60bb6b1-23f7-5c5a-8013-348888bf1482", 00:29:48.756 "is_configured": true, 00:29:48.756 "data_offset": 2048, 00:29:48.756 "data_size": 63488 00:29:48.756 } 00:29:48.756 ] 00:29:48.756 }' 00:29:48.756 11:11:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:48.756 11:11:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:49.692 11:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:49.692 11:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:49.692 11:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:49.692 11:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:49.692 11:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:49.692 11:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:49.692 11:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:49.950 11:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:49.950 "name": "raid_bdev1", 00:29:49.950 "uuid": "95f61a07-0da6-4534-a7d7-6f213d8050f3", 00:29:49.950 "strip_size_kb": 0, 00:29:49.950 "state": "online", 00:29:49.950 "raid_level": "raid1", 00:29:49.950 "superblock": true, 00:29:49.950 "num_base_bdevs": 4, 00:29:49.950 
"num_base_bdevs_discovered": 3, 00:29:49.950 "num_base_bdevs_operational": 3, 00:29:49.950 "base_bdevs_list": [ 00:29:49.950 { 00:29:49.950 "name": null, 00:29:49.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:49.950 "is_configured": false, 00:29:49.950 "data_offset": 2048, 00:29:49.950 "data_size": 63488 00:29:49.950 }, 00:29:49.950 { 00:29:49.950 "name": "BaseBdev2", 00:29:49.950 "uuid": "223ecf8c-e9fb-5ba0-97b9-9b066c10cde9", 00:29:49.950 "is_configured": true, 00:29:49.950 "data_offset": 2048, 00:29:49.950 "data_size": 63488 00:29:49.950 }, 00:29:49.950 { 00:29:49.950 "name": "BaseBdev3", 00:29:49.950 "uuid": "1831640f-5016-587b-8264-8ad950043a8a", 00:29:49.950 "is_configured": true, 00:29:49.950 "data_offset": 2048, 00:29:49.950 "data_size": 63488 00:29:49.950 }, 00:29:49.950 { 00:29:49.950 "name": "BaseBdev4", 00:29:49.950 "uuid": "e60bb6b1-23f7-5c5a-8013-348888bf1482", 00:29:49.950 "is_configured": true, 00:29:49.950 "data_offset": 2048, 00:29:49.950 "data_size": 63488 00:29:49.950 } 00:29:49.950 ] 00:29:49.950 }' 00:29:49.950 11:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:49.950 11:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:49.950 11:11:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:49.950 11:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:49.950 11:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:50.209 [2024-07-25 11:11:57.230935] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:50.209 [2024-07-25 11:11:57.252066] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caae10 00:29:50.209 [2024-07-25 11:11:57.254455] 
bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:50.209 11:11:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@678 -- # sleep 1 00:29:51.587 11:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:51.587 11:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:51.587 11:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:51.587 11:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:51.587 11:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:51.587 11:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:51.587 11:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:51.587 11:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:51.587 "name": "raid_bdev1", 00:29:51.587 "uuid": "95f61a07-0da6-4534-a7d7-6f213d8050f3", 00:29:51.587 "strip_size_kb": 0, 00:29:51.587 "state": "online", 00:29:51.587 "raid_level": "raid1", 00:29:51.587 "superblock": true, 00:29:51.587 "num_base_bdevs": 4, 00:29:51.587 "num_base_bdevs_discovered": 4, 00:29:51.587 "num_base_bdevs_operational": 4, 00:29:51.587 "process": { 00:29:51.587 "type": "rebuild", 00:29:51.587 "target": "spare", 00:29:51.587 "progress": { 00:29:51.587 "blocks": 24576, 00:29:51.587 "percent": 38 00:29:51.587 } 00:29:51.587 }, 00:29:51.587 "base_bdevs_list": [ 00:29:51.587 { 00:29:51.587 "name": "spare", 00:29:51.587 "uuid": "de496aef-359e-5dd1-9f36-74b531131eff", 00:29:51.587 "is_configured": true, 00:29:51.587 "data_offset": 2048, 00:29:51.587 "data_size": 63488 00:29:51.587 }, 
00:29:51.587 { 00:29:51.587 "name": "BaseBdev2", 00:29:51.587 "uuid": "223ecf8c-e9fb-5ba0-97b9-9b066c10cde9", 00:29:51.587 "is_configured": true, 00:29:51.587 "data_offset": 2048, 00:29:51.587 "data_size": 63488 00:29:51.587 }, 00:29:51.587 { 00:29:51.587 "name": "BaseBdev3", 00:29:51.587 "uuid": "1831640f-5016-587b-8264-8ad950043a8a", 00:29:51.587 "is_configured": true, 00:29:51.587 "data_offset": 2048, 00:29:51.587 "data_size": 63488 00:29:51.587 }, 00:29:51.587 { 00:29:51.587 "name": "BaseBdev4", 00:29:51.587 "uuid": "e60bb6b1-23f7-5c5a-8013-348888bf1482", 00:29:51.587 "is_configured": true, 00:29:51.587 "data_offset": 2048, 00:29:51.587 "data_size": 63488 00:29:51.587 } 00:29:51.587 ] 00:29:51.587 }' 00:29:51.587 11:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:51.587 11:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:51.587 11:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:51.587 11:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:51.587 11:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:29:51.587 11:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:29:51.587 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:29:51.587 11:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:29:51.587 11:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:29:51.587 11:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # '[' 4 -gt 2 ']' 00:29:51.587 11:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_remove_base_bdev BaseBdev2 00:29:51.847 [2024-07-25 11:11:58.808551] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:52.107 [2024-07-25 11:11:58.967780] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000caae10 00:29:52.107 11:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@713 -- # base_bdevs[1]= 00:29:52.107 11:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # (( num_base_bdevs_operational-- )) 00:29:52.107 11:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@717 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:52.107 11:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:52.107 11:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:52.107 11:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:52.107 11:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:52.107 11:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:52.107 11:11:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:52.107 11:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:52.107 "name": "raid_bdev1", 00:29:52.107 "uuid": "95f61a07-0da6-4534-a7d7-6f213d8050f3", 00:29:52.107 "strip_size_kb": 0, 00:29:52.107 "state": "online", 00:29:52.107 "raid_level": "raid1", 00:29:52.107 "superblock": true, 00:29:52.107 "num_base_bdevs": 4, 00:29:52.107 "num_base_bdevs_discovered": 3, 00:29:52.107 "num_base_bdevs_operational": 3, 00:29:52.107 "process": { 00:29:52.107 "type": "rebuild", 00:29:52.107 "target": "spare", 00:29:52.107 "progress": { 00:29:52.107 "blocks": 36864, 00:29:52.107 
"percent": 58 00:29:52.107 } 00:29:52.107 }, 00:29:52.107 "base_bdevs_list": [ 00:29:52.107 { 00:29:52.107 "name": "spare", 00:29:52.107 "uuid": "de496aef-359e-5dd1-9f36-74b531131eff", 00:29:52.107 "is_configured": true, 00:29:52.107 "data_offset": 2048, 00:29:52.107 "data_size": 63488 00:29:52.107 }, 00:29:52.107 { 00:29:52.107 "name": null, 00:29:52.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:52.107 "is_configured": false, 00:29:52.107 "data_offset": 2048, 00:29:52.107 "data_size": 63488 00:29:52.107 }, 00:29:52.107 { 00:29:52.107 "name": "BaseBdev3", 00:29:52.107 "uuid": "1831640f-5016-587b-8264-8ad950043a8a", 00:29:52.107 "is_configured": true, 00:29:52.107 "data_offset": 2048, 00:29:52.107 "data_size": 63488 00:29:52.107 }, 00:29:52.107 { 00:29:52.107 "name": "BaseBdev4", 00:29:52.107 "uuid": "e60bb6b1-23f7-5c5a-8013-348888bf1482", 00:29:52.107 "is_configured": true, 00:29:52.107 "data_offset": 2048, 00:29:52.107 "data_size": 63488 00:29:52.107 } 00:29:52.107 ] 00:29:52.107 }' 00:29:52.107 11:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:52.367 11:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:52.367 11:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:52.367 11:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:52.367 11:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # local timeout=1003 00:29:52.367 11:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:29:52.367 11:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:52.367 11:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:52.367 11:11:59 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:52.367 11:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:52.367 11:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:52.367 11:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:52.367 11:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:52.626 11:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:52.626 "name": "raid_bdev1", 00:29:52.626 "uuid": "95f61a07-0da6-4534-a7d7-6f213d8050f3", 00:29:52.626 "strip_size_kb": 0, 00:29:52.626 "state": "online", 00:29:52.626 "raid_level": "raid1", 00:29:52.626 "superblock": true, 00:29:52.626 "num_base_bdevs": 4, 00:29:52.626 "num_base_bdevs_discovered": 3, 00:29:52.626 "num_base_bdevs_operational": 3, 00:29:52.626 "process": { 00:29:52.626 "type": "rebuild", 00:29:52.626 "target": "spare", 00:29:52.626 "progress": { 00:29:52.626 "blocks": 43008, 00:29:52.626 "percent": 67 00:29:52.626 } 00:29:52.626 }, 00:29:52.626 "base_bdevs_list": [ 00:29:52.626 { 00:29:52.626 "name": "spare", 00:29:52.626 "uuid": "de496aef-359e-5dd1-9f36-74b531131eff", 00:29:52.626 "is_configured": true, 00:29:52.626 "data_offset": 2048, 00:29:52.626 "data_size": 63488 00:29:52.626 }, 00:29:52.626 { 00:29:52.626 "name": null, 00:29:52.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:52.626 "is_configured": false, 00:29:52.626 "data_offset": 2048, 00:29:52.626 "data_size": 63488 00:29:52.626 }, 00:29:52.626 { 00:29:52.626 "name": "BaseBdev3", 00:29:52.626 "uuid": "1831640f-5016-587b-8264-8ad950043a8a", 00:29:52.626 "is_configured": true, 00:29:52.626 "data_offset": 2048, 00:29:52.626 "data_size": 63488 00:29:52.626 }, 00:29:52.626 { 00:29:52.626 "name": 
"BaseBdev4", 00:29:52.626 "uuid": "e60bb6b1-23f7-5c5a-8013-348888bf1482", 00:29:52.626 "is_configured": true, 00:29:52.626 "data_offset": 2048, 00:29:52.626 "data_size": 63488 00:29:52.626 } 00:29:52.627 ] 00:29:52.627 }' 00:29:52.627 11:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:52.627 11:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:52.627 11:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:52.627 11:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:52.627 11:11:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@726 -- # sleep 1 00:29:53.563 [2024-07-25 11:12:00.480031] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:53.563 [2024-07-25 11:12:00.480119] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:53.563 [2024-07-25 11:12:00.480246] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:53.563 11:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:29:53.563 11:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:53.563 11:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:53.563 11:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:53.563 11:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:53.563 11:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:53.563 11:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:29:53.563 11:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:53.823 11:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:53.823 "name": "raid_bdev1", 00:29:53.823 "uuid": "95f61a07-0da6-4534-a7d7-6f213d8050f3", 00:29:53.823 "strip_size_kb": 0, 00:29:53.823 "state": "online", 00:29:53.823 "raid_level": "raid1", 00:29:53.823 "superblock": true, 00:29:53.823 "num_base_bdevs": 4, 00:29:53.823 "num_base_bdevs_discovered": 3, 00:29:53.823 "num_base_bdevs_operational": 3, 00:29:53.823 "base_bdevs_list": [ 00:29:53.823 { 00:29:53.823 "name": "spare", 00:29:53.823 "uuid": "de496aef-359e-5dd1-9f36-74b531131eff", 00:29:53.823 "is_configured": true, 00:29:53.823 "data_offset": 2048, 00:29:53.823 "data_size": 63488 00:29:53.823 }, 00:29:53.823 { 00:29:53.823 "name": null, 00:29:53.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:53.823 "is_configured": false, 00:29:53.823 "data_offset": 2048, 00:29:53.823 "data_size": 63488 00:29:53.823 }, 00:29:53.823 { 00:29:53.823 "name": "BaseBdev3", 00:29:53.823 "uuid": "1831640f-5016-587b-8264-8ad950043a8a", 00:29:53.823 "is_configured": true, 00:29:53.823 "data_offset": 2048, 00:29:53.823 "data_size": 63488 00:29:53.823 }, 00:29:53.823 { 00:29:53.823 "name": "BaseBdev4", 00:29:53.823 "uuid": "e60bb6b1-23f7-5c5a-8013-348888bf1482", 00:29:53.823 "is_configured": true, 00:29:53.823 "data_offset": 2048, 00:29:53.823 "data_size": 63488 00:29:53.823 } 00:29:53.823 ] 00:29:53.823 }' 00:29:53.823 11:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:53.823 11:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:53.823 11:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:54.082 11:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # 
[[ none == \s\p\a\r\e ]] 00:29:54.082 11:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@724 -- # break 00:29:54.082 11:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:54.082 11:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:54.082 11:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:54.082 11:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:54.082 11:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:54.082 11:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:54.082 11:12:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:54.082 11:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:54.082 "name": "raid_bdev1", 00:29:54.082 "uuid": "95f61a07-0da6-4534-a7d7-6f213d8050f3", 00:29:54.082 "strip_size_kb": 0, 00:29:54.082 "state": "online", 00:29:54.082 "raid_level": "raid1", 00:29:54.082 "superblock": true, 00:29:54.082 "num_base_bdevs": 4, 00:29:54.082 "num_base_bdevs_discovered": 3, 00:29:54.082 "num_base_bdevs_operational": 3, 00:29:54.082 "base_bdevs_list": [ 00:29:54.082 { 00:29:54.082 "name": "spare", 00:29:54.082 "uuid": "de496aef-359e-5dd1-9f36-74b531131eff", 00:29:54.082 "is_configured": true, 00:29:54.082 "data_offset": 2048, 00:29:54.082 "data_size": 63488 00:29:54.082 }, 00:29:54.082 { 00:29:54.082 "name": null, 00:29:54.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:54.082 "is_configured": false, 00:29:54.082 "data_offset": 2048, 00:29:54.082 "data_size": 63488 00:29:54.082 }, 00:29:54.082 { 00:29:54.082 "name": "BaseBdev3", 00:29:54.082 "uuid": 
"1831640f-5016-587b-8264-8ad950043a8a", 00:29:54.082 "is_configured": true, 00:29:54.082 "data_offset": 2048, 00:29:54.082 "data_size": 63488 00:29:54.082 }, 00:29:54.082 { 00:29:54.082 "name": "BaseBdev4", 00:29:54.082 "uuid": "e60bb6b1-23f7-5c5a-8013-348888bf1482", 00:29:54.082 "is_configured": true, 00:29:54.082 "data_offset": 2048, 00:29:54.082 "data_size": 63488 00:29:54.082 } 00:29:54.082 ] 00:29:54.082 }' 00:29:54.082 11:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:54.341 11:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:54.341 11:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:54.341 11:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:54.342 11:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:54.342 11:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:54.342 11:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:54.342 11:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:54.342 11:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:54.342 11:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:54.342 11:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:54.342 11:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:54.342 11:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:54.342 11:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:54.342 11:12:01 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:54.342 11:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:54.600 11:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:54.600 "name": "raid_bdev1", 00:29:54.600 "uuid": "95f61a07-0da6-4534-a7d7-6f213d8050f3", 00:29:54.600 "strip_size_kb": 0, 00:29:54.600 "state": "online", 00:29:54.600 "raid_level": "raid1", 00:29:54.600 "superblock": true, 00:29:54.600 "num_base_bdevs": 4, 00:29:54.600 "num_base_bdevs_discovered": 3, 00:29:54.600 "num_base_bdevs_operational": 3, 00:29:54.600 "base_bdevs_list": [ 00:29:54.600 { 00:29:54.600 "name": "spare", 00:29:54.600 "uuid": "de496aef-359e-5dd1-9f36-74b531131eff", 00:29:54.600 "is_configured": true, 00:29:54.600 "data_offset": 2048, 00:29:54.600 "data_size": 63488 00:29:54.600 }, 00:29:54.600 { 00:29:54.600 "name": null, 00:29:54.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:54.600 "is_configured": false, 00:29:54.600 "data_offset": 2048, 00:29:54.600 "data_size": 63488 00:29:54.600 }, 00:29:54.600 { 00:29:54.600 "name": "BaseBdev3", 00:29:54.600 "uuid": "1831640f-5016-587b-8264-8ad950043a8a", 00:29:54.600 "is_configured": true, 00:29:54.600 "data_offset": 2048, 00:29:54.600 "data_size": 63488 00:29:54.600 }, 00:29:54.600 { 00:29:54.600 "name": "BaseBdev4", 00:29:54.600 "uuid": "e60bb6b1-23f7-5c5a-8013-348888bf1482", 00:29:54.600 "is_configured": true, 00:29:54.600 "data_offset": 2048, 00:29:54.600 "data_size": 63488 00:29:54.600 } 00:29:54.600 ] 00:29:54.600 }' 00:29:54.600 11:12:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:54.600 11:12:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:55.166 11:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@734 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:55.166 [2024-07-25 11:12:02.273222] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:55.167 [2024-07-25 11:12:02.273262] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:55.167 [2024-07-25 11:12:02.273360] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:55.167 [2024-07-25 11:12:02.273454] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:55.167 [2024-07-25 11:12:02.273472] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:29:55.425 11:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:55.425 11:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # jq length 00:29:55.425 11:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:29:55.425 11:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:29:55.425 11:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:29:55.425 11:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:29:55.425 11:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:55.425 11:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:29:55.425 11:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:55.425 11:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:29:55.425 11:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:55.425 11:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:29:55.425 11:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:55.425 11:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:55.425 11:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:29:55.687 /dev/nbd0 00:29:55.687 11:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:55.687 11:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:55.687 11:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:29:55.687 11:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:29:55.687 11:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:55.687 11:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:55.688 11:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:29:55.688 11:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:29:55.688 11:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:29:55.688 11:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:55.688 11:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:55.688 1+0 records in 00:29:55.688 1+0 records out 00:29:55.688 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000164103 s, 
25.0 MB/s 00:29:55.688 11:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:55.688 11:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:29:55.688 11:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:55.688 11:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:55.688 11:12:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:29:55.688 11:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:55.688 11:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:55.688 11:12:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:29:55.946 /dev/nbd1 00:29:55.946 11:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:55.946 11:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:55.946 11:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:29:55.946 11:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:29:55.946 11:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:29:55.946 11:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:29:55.946 11:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:29:55.946 11:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:29:55.946 11:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # 
(( i = 1 )) 00:29:55.946 11:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:29:55.946 11:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:55.946 1+0 records in 00:29:55.946 1+0 records out 00:29:55.946 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027032 s, 15.2 MB/s 00:29:55.946 11:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:56.205 11:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:29:56.205 11:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:29:56.205 11:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:29:56.205 11:12:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:29:56.205 11:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:56.205 11:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:56.205 11:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:29:56.205 11:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:29:56.205 11:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:56.205 11:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:56.205 11:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:56.205 11:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:29:56.205 11:12:03 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:56.205 11:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:56.470 11:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:56.470 11:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:56.470 11:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:56.470 11:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:56.470 11:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:56.470 11:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:56.470 11:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:29:56.470 11:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:29:56.470 11:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:56.470 11:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:56.728 11:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:56.728 11:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:56.728 11:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:56.728 11:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:56.728 11:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:56.728 11:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 
/proc/partitions 00:29:56.728 11:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:29:56.728 11:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:29:56.728 11:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:29:56.728 11:12:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:56.987 11:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:57.259 [2024-07-25 11:12:04.220049] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:57.259 [2024-07-25 11:12:04.220111] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:57.259 [2024-07-25 11:12:04.220151] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000044780 00:29:57.259 [2024-07-25 11:12:04.220169] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:57.259 [2024-07-25 11:12:04.223018] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:57.259 [2024-07-25 11:12:04.223052] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:57.259 [2024-07-25 11:12:04.223171] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:57.259 [2024-07-25 11:12:04.223235] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:57.259 [2024-07-25 11:12:04.223446] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:57.259 [2024-07-25 11:12:04.223557] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:57.259 spare 00:29:57.259 11:12:04 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:57.259 11:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:57.259 11:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:57.259 11:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:57.259 11:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:57.259 11:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:57.259 11:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:57.259 11:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:57.259 11:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:57.259 11:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:57.259 11:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:57.259 11:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:57.259 [2024-07-25 11:12:04.323896] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:29:57.259 [2024-07-25 11:12:04.323924] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:57.259 [2024-07-25 11:12:04.324290] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc94c0 00:29:57.259 [2024-07-25 11:12:04.324548] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:29:57.259 [2024-07-25 11:12:04.324568] bdev_raid.c:1752:raid_bdev_configure_cont: 
*DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:29:57.259 [2024-07-25 11:12:04.324777] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:57.567 11:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:57.567 "name": "raid_bdev1", 00:29:57.567 "uuid": "95f61a07-0da6-4534-a7d7-6f213d8050f3", 00:29:57.567 "strip_size_kb": 0, 00:29:57.567 "state": "online", 00:29:57.567 "raid_level": "raid1", 00:29:57.567 "superblock": true, 00:29:57.567 "num_base_bdevs": 4, 00:29:57.567 "num_base_bdevs_discovered": 3, 00:29:57.567 "num_base_bdevs_operational": 3, 00:29:57.567 "base_bdevs_list": [ 00:29:57.567 { 00:29:57.567 "name": "spare", 00:29:57.567 "uuid": "de496aef-359e-5dd1-9f36-74b531131eff", 00:29:57.567 "is_configured": true, 00:29:57.567 "data_offset": 2048, 00:29:57.567 "data_size": 63488 00:29:57.567 }, 00:29:57.567 { 00:29:57.567 "name": null, 00:29:57.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:57.567 "is_configured": false, 00:29:57.567 "data_offset": 2048, 00:29:57.567 "data_size": 63488 00:29:57.567 }, 00:29:57.567 { 00:29:57.567 "name": "BaseBdev3", 00:29:57.567 "uuid": "1831640f-5016-587b-8264-8ad950043a8a", 00:29:57.567 "is_configured": true, 00:29:57.567 "data_offset": 2048, 00:29:57.567 "data_size": 63488 00:29:57.567 }, 00:29:57.567 { 00:29:57.567 "name": "BaseBdev4", 00:29:57.567 "uuid": "e60bb6b1-23f7-5c5a-8013-348888bf1482", 00:29:57.567 "is_configured": true, 00:29:57.567 "data_offset": 2048, 00:29:57.567 "data_size": 63488 00:29:57.567 } 00:29:57.567 ] 00:29:57.567 }' 00:29:57.567 11:12:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:57.567 11:12:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:58.135 11:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:58.135 11:12:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:58.135 11:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:58.135 11:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:58.135 11:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:58.135 11:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:58.135 11:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:58.394 11:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:58.394 "name": "raid_bdev1", 00:29:58.394 "uuid": "95f61a07-0da6-4534-a7d7-6f213d8050f3", 00:29:58.394 "strip_size_kb": 0, 00:29:58.394 "state": "online", 00:29:58.394 "raid_level": "raid1", 00:29:58.394 "superblock": true, 00:29:58.394 "num_base_bdevs": 4, 00:29:58.394 "num_base_bdevs_discovered": 3, 00:29:58.394 "num_base_bdevs_operational": 3, 00:29:58.394 "base_bdevs_list": [ 00:29:58.394 { 00:29:58.394 "name": "spare", 00:29:58.394 "uuid": "de496aef-359e-5dd1-9f36-74b531131eff", 00:29:58.394 "is_configured": true, 00:29:58.394 "data_offset": 2048, 00:29:58.394 "data_size": 63488 00:29:58.394 }, 00:29:58.394 { 00:29:58.394 "name": null, 00:29:58.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:58.394 "is_configured": false, 00:29:58.394 "data_offset": 2048, 00:29:58.394 "data_size": 63488 00:29:58.394 }, 00:29:58.394 { 00:29:58.394 "name": "BaseBdev3", 00:29:58.394 "uuid": "1831640f-5016-587b-8264-8ad950043a8a", 00:29:58.394 "is_configured": true, 00:29:58.394 "data_offset": 2048, 00:29:58.394 "data_size": 63488 00:29:58.394 }, 00:29:58.394 { 00:29:58.394 "name": "BaseBdev4", 00:29:58.394 "uuid": "e60bb6b1-23f7-5c5a-8013-348888bf1482", 00:29:58.394 "is_configured": 
true, 00:29:58.394 "data_offset": 2048, 00:29:58.394 "data_size": 63488 00:29:58.394 } 00:29:58.394 ] 00:29:58.394 }' 00:29:58.394 11:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:58.394 11:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:58.394 11:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:58.394 11:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:58.395 11:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:58.395 11:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:29:58.654 11:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:29:58.654 11:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:58.913 [2024-07-25 11:12:05.796809] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:58.913 11:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:58.913 11:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:58.913 11:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:58.913 11:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:58.913 11:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:58.913 11:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:58.913 
11:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:58.913 11:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:58.913 11:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:58.913 11:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:58.913 11:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:58.913 11:12:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:59.173 11:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:59.173 "name": "raid_bdev1", 00:29:59.173 "uuid": "95f61a07-0da6-4534-a7d7-6f213d8050f3", 00:29:59.173 "strip_size_kb": 0, 00:29:59.173 "state": "online", 00:29:59.173 "raid_level": "raid1", 00:29:59.173 "superblock": true, 00:29:59.173 "num_base_bdevs": 4, 00:29:59.173 "num_base_bdevs_discovered": 2, 00:29:59.173 "num_base_bdevs_operational": 2, 00:29:59.173 "base_bdevs_list": [ 00:29:59.173 { 00:29:59.173 "name": null, 00:29:59.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:59.173 "is_configured": false, 00:29:59.173 "data_offset": 2048, 00:29:59.173 "data_size": 63488 00:29:59.173 }, 00:29:59.173 { 00:29:59.173 "name": null, 00:29:59.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:59.173 "is_configured": false, 00:29:59.173 "data_offset": 2048, 00:29:59.173 "data_size": 63488 00:29:59.173 }, 00:29:59.173 { 00:29:59.173 "name": "BaseBdev3", 00:29:59.173 "uuid": "1831640f-5016-587b-8264-8ad950043a8a", 00:29:59.173 "is_configured": true, 00:29:59.173 "data_offset": 2048, 00:29:59.173 "data_size": 63488 00:29:59.173 }, 00:29:59.173 { 00:29:59.173 "name": "BaseBdev4", 00:29:59.173 "uuid": "e60bb6b1-23f7-5c5a-8013-348888bf1482", 
00:29:59.173 "is_configured": true, 00:29:59.173 "data_offset": 2048, 00:29:59.173 "data_size": 63488 00:29:59.173 } 00:29:59.173 ] 00:29:59.173 }' 00:29:59.173 11:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:59.173 11:12:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:59.741 11:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:59.741 [2024-07-25 11:12:06.839847] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:59.742 [2024-07-25 11:12:06.840074] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:29:59.742 [2024-07-25 11:12:06.840101] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:29:59.742 [2024-07-25 11:12:06.840147] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:00.001 [2024-07-25 11:12:06.863379] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc9590 00:30:00.001 [2024-07-25 11:12:06.865771] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:00.001 11:12:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # sleep 1 00:30:00.938 11:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:00.938 11:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:00.938 11:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:00.938 11:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:00.938 11:12:07 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:00.938 11:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:00.938 11:12:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:01.197 11:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:01.198 "name": "raid_bdev1", 00:30:01.198 "uuid": "95f61a07-0da6-4534-a7d7-6f213d8050f3", 00:30:01.198 "strip_size_kb": 0, 00:30:01.198 "state": "online", 00:30:01.198 "raid_level": "raid1", 00:30:01.198 "superblock": true, 00:30:01.198 "num_base_bdevs": 4, 00:30:01.198 "num_base_bdevs_discovered": 3, 00:30:01.198 "num_base_bdevs_operational": 3, 00:30:01.198 "process": { 00:30:01.198 "type": "rebuild", 00:30:01.198 "target": "spare", 00:30:01.198 "progress": { 00:30:01.198 "blocks": 24576, 00:30:01.198 "percent": 38 00:30:01.198 } 00:30:01.198 }, 00:30:01.198 "base_bdevs_list": [ 00:30:01.198 { 00:30:01.198 "name": "spare", 00:30:01.198 "uuid": "de496aef-359e-5dd1-9f36-74b531131eff", 00:30:01.198 "is_configured": true, 00:30:01.198 "data_offset": 2048, 00:30:01.198 "data_size": 63488 00:30:01.198 }, 00:30:01.198 { 00:30:01.198 "name": null, 00:30:01.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:01.198 "is_configured": false, 00:30:01.198 "data_offset": 2048, 00:30:01.198 "data_size": 63488 00:30:01.198 }, 00:30:01.198 { 00:30:01.198 "name": "BaseBdev3", 00:30:01.198 "uuid": "1831640f-5016-587b-8264-8ad950043a8a", 00:30:01.198 "is_configured": true, 00:30:01.198 "data_offset": 2048, 00:30:01.198 "data_size": 63488 00:30:01.198 }, 00:30:01.198 { 00:30:01.198 "name": "BaseBdev4", 00:30:01.198 "uuid": "e60bb6b1-23f7-5c5a-8013-348888bf1482", 00:30:01.198 "is_configured": true, 00:30:01.198 "data_offset": 2048, 00:30:01.198 "data_size": 63488 00:30:01.198 } 00:30:01.198 ] 00:30:01.198 }' 
00:30:01.198 11:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:01.198 11:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:01.198 11:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:01.198 11:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:01.198 11:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:01.457 [2024-07-25 11:12:08.406756] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:01.457 [2024-07-25 11:12:08.478838] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:01.457 [2024-07-25 11:12:08.478898] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:01.457 [2024-07-25 11:12:08.478924] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:01.457 [2024-07-25 11:12:08.478936] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:01.457 11:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:01.457 11:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:01.457 11:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:01.457 11:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:01.457 11:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:01.457 11:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:01.457 11:12:08 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:01.457 11:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:01.457 11:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:01.457 11:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:01.457 11:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:01.457 11:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:01.716 11:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:01.716 "name": "raid_bdev1", 00:30:01.716 "uuid": "95f61a07-0da6-4534-a7d7-6f213d8050f3", 00:30:01.716 "strip_size_kb": 0, 00:30:01.716 "state": "online", 00:30:01.716 "raid_level": "raid1", 00:30:01.716 "superblock": true, 00:30:01.716 "num_base_bdevs": 4, 00:30:01.716 "num_base_bdevs_discovered": 2, 00:30:01.717 "num_base_bdevs_operational": 2, 00:30:01.717 "base_bdevs_list": [ 00:30:01.717 { 00:30:01.717 "name": null, 00:30:01.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:01.717 "is_configured": false, 00:30:01.717 "data_offset": 2048, 00:30:01.717 "data_size": 63488 00:30:01.717 }, 00:30:01.717 { 00:30:01.717 "name": null, 00:30:01.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:01.717 "is_configured": false, 00:30:01.717 "data_offset": 2048, 00:30:01.717 "data_size": 63488 00:30:01.717 }, 00:30:01.717 { 00:30:01.717 "name": "BaseBdev3", 00:30:01.717 "uuid": "1831640f-5016-587b-8264-8ad950043a8a", 00:30:01.717 "is_configured": true, 00:30:01.717 "data_offset": 2048, 00:30:01.717 "data_size": 63488 00:30:01.717 }, 00:30:01.717 { 00:30:01.717 "name": "BaseBdev4", 00:30:01.717 "uuid": "e60bb6b1-23f7-5c5a-8013-348888bf1482", 
00:30:01.717 "is_configured": true, 00:30:01.717 "data_offset": 2048, 00:30:01.717 "data_size": 63488 00:30:01.717 } 00:30:01.717 ] 00:30:01.717 }' 00:30:01.717 11:12:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:01.717 11:12:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:02.284 11:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:02.548 [2024-07-25 11:12:09.534255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:02.548 [2024-07-25 11:12:09.534333] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:02.548 [2024-07-25 11:12:09.534370] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000045080 00:30:02.548 [2024-07-25 11:12:09.534386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:02.548 [2024-07-25 11:12:09.535020] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:02.548 [2024-07-25 11:12:09.535047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:02.548 [2024-07-25 11:12:09.535175] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:02.548 [2024-07-25 11:12:09.535202] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:30:02.548 [2024-07-25 11:12:09.535221] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:30:02.548 [2024-07-25 11:12:09.535254] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:02.548 [2024-07-25 11:12:09.557317] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc9660 00:30:02.548 spare 00:30:02.548 [2024-07-25 11:12:09.559721] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:02.548 11:12:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # sleep 1 00:30:03.484 11:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:03.484 11:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:03.484 11:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:03.484 11:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:03.484 11:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:03.484 11:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:03.484 11:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:03.744 11:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:03.744 "name": "raid_bdev1", 00:30:03.744 "uuid": "95f61a07-0da6-4534-a7d7-6f213d8050f3", 00:30:03.744 "strip_size_kb": 0, 00:30:03.744 "state": "online", 00:30:03.744 "raid_level": "raid1", 00:30:03.744 "superblock": true, 00:30:03.744 "num_base_bdevs": 4, 00:30:03.744 "num_base_bdevs_discovered": 3, 00:30:03.744 "num_base_bdevs_operational": 3, 00:30:03.744 "process": { 00:30:03.744 "type": "rebuild", 00:30:03.744 "target": "spare", 00:30:03.744 "progress": { 00:30:03.744 "blocks": 24576, 00:30:03.744 
"percent": 38 00:30:03.744 } 00:30:03.744 }, 00:30:03.744 "base_bdevs_list": [ 00:30:03.744 { 00:30:03.744 "name": "spare", 00:30:03.744 "uuid": "de496aef-359e-5dd1-9f36-74b531131eff", 00:30:03.744 "is_configured": true, 00:30:03.744 "data_offset": 2048, 00:30:03.744 "data_size": 63488 00:30:03.744 }, 00:30:03.744 { 00:30:03.744 "name": null, 00:30:03.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:03.744 "is_configured": false, 00:30:03.744 "data_offset": 2048, 00:30:03.744 "data_size": 63488 00:30:03.744 }, 00:30:03.744 { 00:30:03.744 "name": "BaseBdev3", 00:30:03.744 "uuid": "1831640f-5016-587b-8264-8ad950043a8a", 00:30:03.744 "is_configured": true, 00:30:03.744 "data_offset": 2048, 00:30:03.744 "data_size": 63488 00:30:03.744 }, 00:30:03.744 { 00:30:03.744 "name": "BaseBdev4", 00:30:03.744 "uuid": "e60bb6b1-23f7-5c5a-8013-348888bf1482", 00:30:03.744 "is_configured": true, 00:30:03.744 "data_offset": 2048, 00:30:03.744 "data_size": 63488 00:30:03.744 } 00:30:03.744 ] 00:30:03.744 }' 00:30:03.744 11:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:03.744 11:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:04.003 11:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:04.003 11:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:04.003 11:12:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:04.003 [2024-07-25 11:12:11.113264] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:04.262 [2024-07-25 11:12:11.172751] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:04.262 [2024-07-25 11:12:11.172815] bdev_raid.c: 
343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:04.262 [2024-07-25 11:12:11.172838] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:04.262 [2024-07-25 11:12:11.172853] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:04.262 11:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:04.262 11:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:04.262 11:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:04.262 11:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:04.262 11:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:04.262 11:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:04.262 11:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:04.262 11:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:04.262 11:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:04.262 11:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:04.262 11:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:04.262 11:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:04.521 11:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:04.521 "name": "raid_bdev1", 00:30:04.521 "uuid": "95f61a07-0da6-4534-a7d7-6f213d8050f3", 00:30:04.521 "strip_size_kb": 0, 00:30:04.521 "state": 
"online", 00:30:04.521 "raid_level": "raid1", 00:30:04.521 "superblock": true, 00:30:04.521 "num_base_bdevs": 4, 00:30:04.521 "num_base_bdevs_discovered": 2, 00:30:04.521 "num_base_bdevs_operational": 2, 00:30:04.521 "base_bdevs_list": [ 00:30:04.521 { 00:30:04.521 "name": null, 00:30:04.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:04.521 "is_configured": false, 00:30:04.521 "data_offset": 2048, 00:30:04.521 "data_size": 63488 00:30:04.521 }, 00:30:04.521 { 00:30:04.521 "name": null, 00:30:04.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:04.521 "is_configured": false, 00:30:04.521 "data_offset": 2048, 00:30:04.521 "data_size": 63488 00:30:04.521 }, 00:30:04.521 { 00:30:04.521 "name": "BaseBdev3", 00:30:04.521 "uuid": "1831640f-5016-587b-8264-8ad950043a8a", 00:30:04.521 "is_configured": true, 00:30:04.521 "data_offset": 2048, 00:30:04.521 "data_size": 63488 00:30:04.521 }, 00:30:04.521 { 00:30:04.521 "name": "BaseBdev4", 00:30:04.521 "uuid": "e60bb6b1-23f7-5c5a-8013-348888bf1482", 00:30:04.521 "is_configured": true, 00:30:04.521 "data_offset": 2048, 00:30:04.521 "data_size": 63488 00:30:04.521 } 00:30:04.521 ] 00:30:04.521 }' 00:30:04.521 11:12:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:04.521 11:12:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:05.088 11:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:05.088 11:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:05.088 11:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:05.088 11:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:05.089 11:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:05.089 11:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- 
# /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:05.089 11:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:05.347 11:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:05.347 "name": "raid_bdev1", 00:30:05.347 "uuid": "95f61a07-0da6-4534-a7d7-6f213d8050f3", 00:30:05.347 "strip_size_kb": 0, 00:30:05.347 "state": "online", 00:30:05.347 "raid_level": "raid1", 00:30:05.347 "superblock": true, 00:30:05.347 "num_base_bdevs": 4, 00:30:05.347 "num_base_bdevs_discovered": 2, 00:30:05.347 "num_base_bdevs_operational": 2, 00:30:05.347 "base_bdevs_list": [ 00:30:05.347 { 00:30:05.347 "name": null, 00:30:05.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:05.347 "is_configured": false, 00:30:05.347 "data_offset": 2048, 00:30:05.347 "data_size": 63488 00:30:05.347 }, 00:30:05.347 { 00:30:05.347 "name": null, 00:30:05.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:05.347 "is_configured": false, 00:30:05.348 "data_offset": 2048, 00:30:05.348 "data_size": 63488 00:30:05.348 }, 00:30:05.348 { 00:30:05.348 "name": "BaseBdev3", 00:30:05.348 "uuid": "1831640f-5016-587b-8264-8ad950043a8a", 00:30:05.348 "is_configured": true, 00:30:05.348 "data_offset": 2048, 00:30:05.348 "data_size": 63488 00:30:05.348 }, 00:30:05.348 { 00:30:05.348 "name": "BaseBdev4", 00:30:05.348 "uuid": "e60bb6b1-23f7-5c5a-8013-348888bf1482", 00:30:05.348 "is_configured": true, 00:30:05.348 "data_offset": 2048, 00:30:05.348 "data_size": 63488 00:30:05.348 } 00:30:05.348 ] 00:30:05.348 }' 00:30:05.348 11:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:05.348 11:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:05.348 11:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // 
"none"' 00:30:05.348 11:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:05.348 11:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@787 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:30:05.607 11:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@788 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:05.865 [2024-07-25 11:12:12.777561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:05.865 [2024-07-25 11:12:12.777642] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:05.865 [2024-07-25 11:12:12.777672] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000045680 00:30:05.865 [2024-07-25 11:12:12.777691] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:05.865 [2024-07-25 11:12:12.778308] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:05.865 [2024-07-25 11:12:12.778337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:05.865 [2024-07-25 11:12:12.778441] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:30:05.865 [2024-07-25 11:12:12.778464] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:30:05.865 [2024-07-25 11:12:12.778478] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:05.865 BaseBdev1 00:30:05.865 11:12:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@789 -- # sleep 1 00:30:06.799 11:12:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 
00:30:06.799 11:12:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:06.799 11:12:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:06.799 11:12:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:06.799 11:12:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:06.799 11:12:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:06.799 11:12:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:06.799 11:12:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:06.799 11:12:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:06.799 11:12:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:06.799 11:12:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:06.799 11:12:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:07.057 11:12:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:07.057 "name": "raid_bdev1", 00:30:07.057 "uuid": "95f61a07-0da6-4534-a7d7-6f213d8050f3", 00:30:07.057 "strip_size_kb": 0, 00:30:07.057 "state": "online", 00:30:07.057 "raid_level": "raid1", 00:30:07.057 "superblock": true, 00:30:07.057 "num_base_bdevs": 4, 00:30:07.057 "num_base_bdevs_discovered": 2, 00:30:07.057 "num_base_bdevs_operational": 2, 00:30:07.057 "base_bdevs_list": [ 00:30:07.057 { 00:30:07.057 "name": null, 00:30:07.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:07.057 "is_configured": false, 00:30:07.057 "data_offset": 2048, 00:30:07.057 "data_size": 63488 
00:30:07.057 }, 00:30:07.057 { 00:30:07.057 "name": null, 00:30:07.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:07.057 "is_configured": false, 00:30:07.057 "data_offset": 2048, 00:30:07.057 "data_size": 63488 00:30:07.057 }, 00:30:07.057 { 00:30:07.057 "name": "BaseBdev3", 00:30:07.057 "uuid": "1831640f-5016-587b-8264-8ad950043a8a", 00:30:07.057 "is_configured": true, 00:30:07.057 "data_offset": 2048, 00:30:07.057 "data_size": 63488 00:30:07.057 }, 00:30:07.057 { 00:30:07.057 "name": "BaseBdev4", 00:30:07.057 "uuid": "e60bb6b1-23f7-5c5a-8013-348888bf1482", 00:30:07.057 "is_configured": true, 00:30:07.057 "data_offset": 2048, 00:30:07.057 "data_size": 63488 00:30:07.057 } 00:30:07.057 ] 00:30:07.057 }' 00:30:07.057 11:12:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:07.057 11:12:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:07.624 11:12:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:07.624 11:12:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:07.624 11:12:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:07.624 11:12:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:07.624 11:12:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:07.624 11:12:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:07.624 11:12:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:07.883 11:12:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:07.883 "name": "raid_bdev1", 00:30:07.883 "uuid": "95f61a07-0da6-4534-a7d7-6f213d8050f3", 
00:30:07.883 "strip_size_kb": 0, 00:30:07.883 "state": "online", 00:30:07.883 "raid_level": "raid1", 00:30:07.883 "superblock": true, 00:30:07.883 "num_base_bdevs": 4, 00:30:07.883 "num_base_bdevs_discovered": 2, 00:30:07.883 "num_base_bdevs_operational": 2, 00:30:07.883 "base_bdevs_list": [ 00:30:07.883 { 00:30:07.883 "name": null, 00:30:07.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:07.883 "is_configured": false, 00:30:07.883 "data_offset": 2048, 00:30:07.883 "data_size": 63488 00:30:07.883 }, 00:30:07.883 { 00:30:07.883 "name": null, 00:30:07.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:07.883 "is_configured": false, 00:30:07.883 "data_offset": 2048, 00:30:07.883 "data_size": 63488 00:30:07.883 }, 00:30:07.883 { 00:30:07.883 "name": "BaseBdev3", 00:30:07.883 "uuid": "1831640f-5016-587b-8264-8ad950043a8a", 00:30:07.883 "is_configured": true, 00:30:07.883 "data_offset": 2048, 00:30:07.883 "data_size": 63488 00:30:07.883 }, 00:30:07.883 { 00:30:07.883 "name": "BaseBdev4", 00:30:07.883 "uuid": "e60bb6b1-23f7-5c5a-8013-348888bf1482", 00:30:07.883 "is_configured": true, 00:30:07.883 "data_offset": 2048, 00:30:07.883 "data_size": 63488 00:30:07.883 } 00:30:07.883 ] 00:30:07.883 }' 00:30:07.883 11:12:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:07.883 11:12:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:07.883 11:12:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:07.883 11:12:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:07.883 11:12:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@792 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:07.883 11:12:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 
00:30:07.883 11:12:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:07.883 11:12:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:30:07.883 11:12:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:07.883 11:12:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:30:07.883 11:12:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:07.883 11:12:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:30:07.883 11:12:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:07.883 11:12:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:30:07.883 11:12:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:30:07.883 11:12:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:08.141 [2024-07-25 11:12:15.131960] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:08.141 [2024-07-25 11:12:15.132153] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:30:08.141 [2024-07-25 11:12:15.132185] 
bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:08.141 request: 00:30:08.141 { 00:30:08.141 "base_bdev": "BaseBdev1", 00:30:08.141 "raid_bdev": "raid_bdev1", 00:30:08.141 "method": "bdev_raid_add_base_bdev", 00:30:08.141 "req_id": 1 00:30:08.141 } 00:30:08.141 Got JSON-RPC error response 00:30:08.141 response: 00:30:08.141 { 00:30:08.141 "code": -22, 00:30:08.141 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:30:08.141 } 00:30:08.141 11:12:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:30:08.141 11:12:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:08.141 11:12:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:08.141 11:12:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:08.141 11:12:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@793 -- # sleep 1 00:30:09.076 11:12:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:09.076 11:12:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:09.076 11:12:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:09.076 11:12:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:09.076 11:12:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:09.076 11:12:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:09.076 11:12:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:09.076 11:12:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:09.076 11:12:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:30:09.076 11:12:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:09.076 11:12:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:09.076 11:12:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:09.335 11:12:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:09.335 "name": "raid_bdev1", 00:30:09.335 "uuid": "95f61a07-0da6-4534-a7d7-6f213d8050f3", 00:30:09.335 "strip_size_kb": 0, 00:30:09.335 "state": "online", 00:30:09.335 "raid_level": "raid1", 00:30:09.335 "superblock": true, 00:30:09.335 "num_base_bdevs": 4, 00:30:09.335 "num_base_bdevs_discovered": 2, 00:30:09.335 "num_base_bdevs_operational": 2, 00:30:09.335 "base_bdevs_list": [ 00:30:09.335 { 00:30:09.335 "name": null, 00:30:09.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:09.335 "is_configured": false, 00:30:09.335 "data_offset": 2048, 00:30:09.335 "data_size": 63488 00:30:09.335 }, 00:30:09.335 { 00:30:09.335 "name": null, 00:30:09.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:09.335 "is_configured": false, 00:30:09.335 "data_offset": 2048, 00:30:09.335 "data_size": 63488 00:30:09.335 }, 00:30:09.335 { 00:30:09.335 "name": "BaseBdev3", 00:30:09.335 "uuid": "1831640f-5016-587b-8264-8ad950043a8a", 00:30:09.335 "is_configured": true, 00:30:09.335 "data_offset": 2048, 00:30:09.335 "data_size": 63488 00:30:09.335 }, 00:30:09.335 { 00:30:09.335 "name": "BaseBdev4", 00:30:09.335 "uuid": "e60bb6b1-23f7-5c5a-8013-348888bf1482", 00:30:09.335 "is_configured": true, 00:30:09.335 "data_offset": 2048, 00:30:09.335 "data_size": 63488 00:30:09.335 } 00:30:09.335 ] 00:30:09.335 }' 00:30:09.335 11:12:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:09.335 11:12:16 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:30:09.903 11:12:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none
00:30:09.903 11:12:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:30:09.903 11:12:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none
00:30:09.903 11:12:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none
00:30:09.903 11:12:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:30:09.903 11:12:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:30:09.903 11:12:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:30:10.161 11:12:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:30:10.161 "name": "raid_bdev1",
00:30:10.161 "uuid": "95f61a07-0da6-4534-a7d7-6f213d8050f3",
00:30:10.161 "strip_size_kb": 0,
00:30:10.161 "state": "online",
00:30:10.161 "raid_level": "raid1",
00:30:10.161 "superblock": true,
00:30:10.161 "num_base_bdevs": 4,
00:30:10.161 "num_base_bdevs_discovered": 2,
00:30:10.161 "num_base_bdevs_operational": 2,
00:30:10.161 "base_bdevs_list": [
00:30:10.161 {
00:30:10.161 "name": null,
00:30:10.161 "uuid": "00000000-0000-0000-0000-000000000000",
00:30:10.161 "is_configured": false,
00:30:10.161 "data_offset": 2048,
00:30:10.161 "data_size": 63488
00:30:10.161 },
00:30:10.161 {
00:30:10.161 "name": null,
00:30:10.161 "uuid": "00000000-0000-0000-0000-000000000000",
00:30:10.161 "is_configured": false,
00:30:10.161 "data_offset": 2048,
00:30:10.161 "data_size": 63488
00:30:10.161 },
00:30:10.161 {
00:30:10.161 "name": "BaseBdev3",
00:30:10.161 "uuid": "1831640f-5016-587b-8264-8ad950043a8a",
00:30:10.161 "is_configured": true,
00:30:10.161 "data_offset": 2048,
00:30:10.161 "data_size": 63488
00:30:10.161 },
00:30:10.161 {
00:30:10.161 "name": "BaseBdev4",
00:30:10.161 "uuid": "e60bb6b1-23f7-5c5a-8013-348888bf1482",
00:30:10.161 "is_configured": true,
00:30:10.161 "data_offset": 2048,
00:30:10.161 "data_size": 63488
00:30:10.161 }
00:30:10.161 ]
00:30:10.161 }'
00:30:10.161 11:12:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:30:10.161 11:12:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]]
00:30:10.161 11:12:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:30:10.420 11:12:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:30:10.420 11:12:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@798 -- # killprocess 3719771
00:30:10.420 11:12:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 3719771 ']'
00:30:10.420 11:12:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 3719771
00:30:10.420 11:12:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname
00:30:10.420 11:12:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:30:10.420 11:12:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3719771
00:30:10.420 11:12:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:30:10.420 11:12:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:30:10.420 11:12:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3719771'
killing process with pid 3719771
00:30:10.420 11:12:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 3719771
00:30:10.420 Received shutdown signal, test time was about 60.000000 seconds
00:30:10.420
00:30:10.420 Latency(us)
00:30:10.420 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:10.420 ===================================================================================================================
00:30:10.420 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:30:10.420 [2024-07-25 11:12:17.339953] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:30:10.420 11:12:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 3719771
00:30:10.420 [2024-07-25 11:12:17.340097] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:30:10.420 [2024-07-25 11:12:17.340186] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:30:10.420 [2024-07-25 11:12:17.340206] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline
00:30:10.987 [2024-07-25 11:12:17.914746] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:30:12.892 11:12:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@800 -- # return 0
00:30:12.892
00:30:12.892 real 0m38.991s
00:30:12.892 user 0m55.247s
00:30:12.892 sys 0m6.641s
00:30:12.892 11:12:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable
00:30:12.892 11:12:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:30:12.892 ************************************
00:30:12.892 END TEST raid_rebuild_test_sb
00:30:12.892 ************************************
00:30:12.892 11:12:19 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true
00:30:12.892 11:12:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']'
00:30:12.892 11:12:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:30:12.892 11:12:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:30:12.892 ************************************
00:30:12.892 START TEST raid_rebuild_test_io
00:30:12.892 ************************************
00:30:12.892 11:12:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true
00:30:12.892 11:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1
00:30:12.892 11:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4
00:30:12.892 11:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@586 -- # local superblock=false
00:30:12.892 11:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@587 -- # local background_io=true
00:30:12.892 11:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # local verify=true
00:30:12.892 11:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i = 1 ))
00:30:12.892 11:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs ))
00:30:12.892 11:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1
00:30:12.892 11:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ ))
00:30:12.892 11:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs ))
00:30:12.892 11:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2
00:30:12.892 11:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ ))
00:30:12.892 11:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs ))
00:30:12.892 11:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3
00:30:12.892 11:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ ))
00:30:12.892 11:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs ))
00:30:12.892 11:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4
00:30:12.892 11:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i++ ))
00:30:12.892 11:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs ))
00:30:12.892 11:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:30:12.893 11:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # local base_bdevs
00:30:12.893 11:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1
00:30:12.893 11:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # local strip_size
00:30:12.893 11:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # local create_arg
00:30:12.893 11:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size
00:30:12.893 11:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@594 -- # local data_offset
00:30:12.893 11:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']'
00:30:12.893 11:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@604 -- # strip_size=0
00:30:12.893 11:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # '[' false = true ']'
00:30:12.893 11:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # raid_pid=3727199
00:30:12.893 11:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 3727199 /var/tmp/spdk-raid.sock
00:30:12.893 11:12:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:30:12.893 11:12:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 3727199 ']'
00:30:12.893 11:12:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:30:12.893 11:12:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100
00:30:12.893 11:12:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:30:12.893 11:12:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable
00:30:12.893 11:12:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:30:12.893 [2024-07-25 11:12:19.842081] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:30:12.893 [2024-07-25 11:12:19.842211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3727199 ]
00:30:12.893 I/O size of 3145728 is greater than zero copy threshold (65536).
00:30:12.893 Zero copy mechanism will not be used.
00:30:12.893 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:12.893 EAL: Requested device 0000:3d:01.0 cannot be used
00:30:12.893 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:12.893 EAL: Requested device 0000:3d:01.1 cannot be used
00:30:12.893 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:12.893 EAL: Requested device 0000:3d:01.2 cannot be used
00:30:12.893 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:12.893 EAL: Requested device 0000:3d:01.3 cannot be used
00:30:12.893 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:12.893 EAL: Requested device 0000:3d:01.4 cannot be used
00:30:12.893 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:12.893 EAL: Requested device 0000:3d:01.5 cannot be used
00:30:12.893 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:12.893 EAL: Requested device 0000:3d:01.6 cannot be used
00:30:12.893 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:12.893 EAL: Requested device 0000:3d:01.7 cannot be used
00:30:12.893 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:12.893 EAL: Requested device 0000:3d:02.0 cannot be used
00:30:12.893 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:12.893 EAL: Requested device 0000:3d:02.1 cannot be used
00:30:12.893 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:12.893 EAL: Requested device 0000:3d:02.2 cannot be used
00:30:12.893 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:12.893 EAL: Requested device 0000:3d:02.3 cannot be used
00:30:12.893 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:12.893 EAL: Requested device 0000:3d:02.4 cannot be used
00:30:12.893 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:12.893 EAL: Requested device 0000:3d:02.5 cannot be used
00:30:12.893 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:12.893 EAL: Requested device 0000:3d:02.6 cannot be used
00:30:12.893 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:12.893 EAL: Requested device 0000:3d:02.7 cannot be used
00:30:12.893 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:12.893 EAL: Requested device 0000:3f:01.0 cannot be used
00:30:12.893 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:12.893 EAL: Requested device 0000:3f:01.1 cannot be used
00:30:12.893 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:12.893 EAL: Requested device 0000:3f:01.2 cannot be used
00:30:12.893 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:12.893 EAL: Requested device 0000:3f:01.3 cannot be used
00:30:12.893 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:12.893 EAL: Requested device 0000:3f:01.4 cannot be used
00:30:12.893 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:12.893 EAL: Requested device 0000:3f:01.5 cannot be used
00:30:12.893 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:12.893 EAL: Requested device 0000:3f:01.6 cannot be used
00:30:12.893 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:12.893 EAL: Requested device 0000:3f:01.7 cannot be used
00:30:12.893 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:12.893 EAL: Requested device 0000:3f:02.0 cannot be used
00:30:12.893 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:12.893 EAL: Requested device 0000:3f:02.1 cannot be used
00:30:12.893 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:12.893 EAL: Requested device 0000:3f:02.2 cannot be used
00:30:12.893 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:12.893 EAL: Requested device 0000:3f:02.3 cannot be used
00:30:12.893 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:12.893 EAL: Requested device 0000:3f:02.4 cannot be used
00:30:12.893 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:12.893 EAL: Requested device 0000:3f:02.5 cannot be used
00:30:12.893 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:12.893 EAL: Requested device 0000:3f:02.6 cannot be used
00:30:12.893 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:30:12.893 EAL: Requested device 0000:3f:02.7 cannot be used
00:30:13.152 [2024-07-25 11:12:20.071951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:13.412 [2024-07-25 11:12:20.333517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:30:13.670 [2024-07-25 11:12:20.665041] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:30:13.670 [2024-07-25 11:12:20.665076] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:30:13.929 11:12:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:30:13.929 11:12:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0
00:30:13.929 11:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}"
00:30:13.929 11:12:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:30:14.187 BaseBdev1_malloc
00:30:14.187 11:12:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:30:14.446 [2024-07-25 11:12:21.308847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:30:14.446 [2024-07-25 11:12:21.308915] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:30:14.446 [2024-07-25 11:12:21.308944] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680
00:30:14.446 [2024-07-25 11:12:21.308963] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:30:14.446 [2024-07-25 11:12:21.311692] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:30:14.446 [2024-07-25 11:12:21.311731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
BaseBdev1
00:30:14.446 11:12:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}"
00:30:14.446 11:12:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:30:14.704 BaseBdev2_malloc
00:30:14.704 11:12:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:30:14.704 [2024-07-25 11:12:21.800948] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:30:14.704 [2024-07-25 11:12:21.801005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:30:14.704 [2024-07-25 11:12:21.801031] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280
00:30:14.704 [2024-07-25 11:12:21.801052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:30:14.704 [2024-07-25 11:12:21.803712] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:30:14.704 [2024-07-25 11:12:21.803747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
BaseBdev2
00:30:14.704 11:12:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}"
00:30:14.704 11:12:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:30:15.272 BaseBdev3_malloc
00:30:15.272 11:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:30:15.272 [2024-07-25 11:12:22.297349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:30:15.272 [2024-07-25 11:12:22.297410] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:30:15.272 [2024-07-25 11:12:22.297438] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80
00:30:15.272 [2024-07-25 11:12:22.297456] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:30:15.272 [2024-07-25 11:12:22.300133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:30:15.272 [2024-07-25 11:12:22.300177] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
BaseBdev3
00:30:15.272 11:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}"
00:30:15.272 11:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:30:15.531 BaseBdev4_malloc
00:30:15.531 11:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:30:15.788 [2024-07-25 11:12:22.806942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:30:15.788 [2024-07-25 11:12:22.807005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:30:15.788 [2024-07-25 11:12:22.807035] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041a80
00:30:15.788 [2024-07-25 11:12:22.807054] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:30:15.788 [2024-07-25 11:12:22.809755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:30:15.788 [2024-07-25 11:12:22.809790] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
BaseBdev4
00:30:15.788 11:12:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc
00:30:16.047 spare_malloc
00:30:16.047 11:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:30:16.306 spare_delay
00:30:16.306 11:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:30:16.565 [2024-07-25 11:12:23.531387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:30:16.565 [2024-07-25 11:12:23.531442] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:30:16.565 [2024-07-25 11:12:23.531466] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80
00:30:16.565 [2024-07-25 11:12:23.531483] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:30:16.565 [2024-07-25 11:12:23.534187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:30:16.565 [2024-07-25 11:12:23.534220] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
spare
00:30:16.565 11:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1
00:30:16.824 [2024-07-25 11:12:23.752047] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:30:16.824 [2024-07-25 11:12:23.754317] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:30:16.824 [2024-07-25 11:12:23.754386] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:30:16.824 [2024-07-25 11:12:23.754454] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:30:16.824 [2024-07-25 11:12:23.754552] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:30:16.824 [2024-07-25 11:12:23.754571] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:30:16.824 [2024-07-25 11:12:23.754921] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0
00:30:16.824 [2024-07-25 11:12:23.755177] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:30:16.824 [2024-07-25 11:12:23.755197] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:30:16.824 [2024-07-25 11:12:23.755408] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:30:16.824 11:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:30:16.824 11:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:30:16.824 11:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:30:16.824 11:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:30:16.824 11:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:30:16.824 11:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4
00:30:16.824 11:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:30:16.824 11:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:30:16.824 11:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:30:16.824 11:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp
00:30:16.824 11:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:30:16.824 11:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:30:17.083 11:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:30:17.083 "name": "raid_bdev1",
00:30:17.083 "uuid": "f125ecba-f612-4f6e-a69b-23cf4fc35dc4",
00:30:17.083 "strip_size_kb": 0,
00:30:17.083 "state": "online",
00:30:17.083 "raid_level": "raid1",
00:30:17.083 "superblock": false,
00:30:17.083 "num_base_bdevs": 4,
00:30:17.083 "num_base_bdevs_discovered": 4,
00:30:17.083 "num_base_bdevs_operational": 4,
00:30:17.083 "base_bdevs_list": [
00:30:17.083 {
00:30:17.083 "name": "BaseBdev1",
00:30:17.083 "uuid": "fda14fda-7b4e-5581-aad2-a21c3d9f031a",
00:30:17.083 "is_configured": true,
00:30:17.083 "data_offset": 0,
00:30:17.083 "data_size": 65536
00:30:17.083 },
00:30:17.083 {
00:30:17.083 "name": "BaseBdev2",
00:30:17.083 "uuid": "7171dbc2-14d2-58cd-818c-98236cc1619d",
00:30:17.083 "is_configured": true,
00:30:17.083 "data_offset": 0,
00:30:17.083 "data_size": 65536
00:30:17.083 },
00:30:17.083 {
00:30:17.083 "name": "BaseBdev3",
00:30:17.083 "uuid": "52237225-d475-5b76-8be4-f739555050e7",
00:30:17.083 "is_configured": true,
00:30:17.083 "data_offset": 0,
00:30:17.083 "data_size": 65536
00:30:17.083 },
00:30:17.083 {
00:30:17.083 "name": "BaseBdev4",
00:30:17.083 "uuid": "17c1019c-e5db-5024-b679-ade49e38fc93",
00:30:17.083 "is_configured": true,
00:30:17.083 "data_offset": 0,
00:30:17.083 "data_size": 65536
00:30:17.083 }
00:30:17.083 ]
00:30:17.083 }'
00:30:17.083 11:12:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:30:17.083 11:12:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:30:17.651 11:12:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:30:17.651 11:12:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks'
00:30:17.910 [2024-07-25 11:12:24.787259] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:30:17.910 11:12:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=65536
00:30:17.910 11:12:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:30:17.910 11:12:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:30:18.170 11:12:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@634 -- # data_offset=0
00:30:18.170 11:12:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']'
00:30:18.170 11:12:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
00:30:18.170 11:12:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@638 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
00:30:18.170 [2024-07-25 11:12:25.152246] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010a50
I/O size of 3145728 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
Running I/O for 60 seconds...
[2024-07-25 11:12:25.195706] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
[2024-07-25 11:12:25.203589] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000010a50
00:30:18.170 11:12:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:30:18.170 11:12:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:30:18.170 11:12:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online
00:30:18.170 11:12:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1
00:30:18.170 11:12:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0
00:30:18.170 11:12:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3
00:30:18.170 11:12:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info
00:30:18.170 11:12:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs
00:30:18.170 11:12:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered
00:30:18.170 11:12:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp
00:30:18.170 11:12:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:30:18.429 11:12:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:30:18.429 11:12:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{
00:30:18.429 "name": "raid_bdev1",
00:30:18.429 "uuid": "f125ecba-f612-4f6e-a69b-23cf4fc35dc4",
00:30:18.429 "strip_size_kb": 0,
00:30:18.429 "state": "online",
00:30:18.429 "raid_level": "raid1",
00:30:18.429 "superblock": false,
00:30:18.429 "num_base_bdevs": 4,
00:30:18.429 "num_base_bdevs_discovered": 3,
00:30:18.429 "num_base_bdevs_operational": 3,
00:30:18.429 "base_bdevs_list": [
00:30:18.429 {
00:30:18.429 "name": null,
00:30:18.429 "uuid": "00000000-0000-0000-0000-000000000000",
00:30:18.429 "is_configured": false,
00:30:18.429 "data_offset": 0,
00:30:18.429 "data_size": 65536
00:30:18.429 },
00:30:18.429 {
00:30:18.429 "name": "BaseBdev2",
00:30:18.429 "uuid": "7171dbc2-14d2-58cd-818c-98236cc1619d",
00:30:18.429 "is_configured": true,
00:30:18.429 "data_offset": 0,
00:30:18.429 "data_size": 65536
00:30:18.429 },
00:30:18.429 {
00:30:18.429 "name": "BaseBdev3",
00:30:18.429 "uuid": "52237225-d475-5b76-8be4-f739555050e7",
00:30:18.429 "is_configured": true,
00:30:18.429 "data_offset": 0,
00:30:18.429 "data_size": 65536
00:30:18.429 },
00:30:18.429 {
00:30:18.429 "name": "BaseBdev4",
00:30:18.429 "uuid": "17c1019c-e5db-5024-b679-ade49e38fc93",
00:30:18.429 "is_configured": true,
00:30:18.429 "data_offset": 0,
00:30:18.429 "data_size": 65536
00:30:18.429 }
00:30:18.429 ]
00:30:18.429 }'
00:30:18.429 11:12:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable
00:30:18.429 11:12:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x
00:30:18.997 11:12:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:30:19.256 [2024-07-25 11:12:26.302039] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:30:19.256 11:12:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1
00:30:19.515 [2024-07-25 11:12:26.379050] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010b20
00:30:19.515 [2024-07-25 11:12:26.381493] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:30:19.515 [2024-07-25 11:12:26.509588] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:30:19.515 [2024-07-25 11:12:26.510944] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:30:19.773 [2024-07-25 11:12:26.716851] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:30:19.773 [2024-07-25 11:12:26.717514] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:30:20.340 [2024-07-25 11:12:27.179182] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:30:20.340 11:12:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:30:20.340 11:12:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1
00:30:20.340 11:12:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild
00:30:20.340 11:12:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare
00:30:20.340 11:12:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info
00:30:20.340 11:12:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:30:20.340 11:12:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:30:20.599 [2024-07-25 11:12:27.544519] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:30:20.599 [2024-07-25 11:12:27.544853] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:30:20.599 11:12:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:30:20.599 "name": "raid_bdev1",
00:30:20.599 "uuid": "f125ecba-f612-4f6e-a69b-23cf4fc35dc4",
00:30:20.599 "strip_size_kb": 0,
00:30:20.599 "state": "online",
00:30:20.599 "raid_level": "raid1",
00:30:20.599 "superblock": false,
00:30:20.599 "num_base_bdevs": 4,
00:30:20.599 "num_base_bdevs_discovered": 4,
00:30:20.599 "num_base_bdevs_operational": 4,
00:30:20.599 "process": {
00:30:20.599 "type": "rebuild",
00:30:20.599 "target": "spare",
00:30:20.599 "progress": {
00:30:20.599 "blocks": 14336,
00:30:20.599 "percent": 21
00:30:20.599 }
00:30:20.599 },
00:30:20.599 "base_bdevs_list": [
00:30:20.599 {
00:30:20.599 "name": "spare",
00:30:20.599 "uuid": "5210287f-9de7-53c1-b54b-d938546e64d7",
00:30:20.599 "is_configured": true,
00:30:20.599 "data_offset": 0,
00:30:20.599 "data_size": 65536
00:30:20.599 },
00:30:20.599 {
00:30:20.599 "name": "BaseBdev2",
00:30:20.599 "uuid": "7171dbc2-14d2-58cd-818c-98236cc1619d",
00:30:20.599 "is_configured": true,
00:30:20.599 "data_offset": 0,
00:30:20.599 "data_size": 65536
00:30:20.599 },
00:30:20.599 {
00:30:20.599 "name": "BaseBdev3",
00:30:20.599 "uuid": "52237225-d475-5b76-8be4-f739555050e7",
00:30:20.599 "is_configured": true,
00:30:20.599 "data_offset": 0,
00:30:20.599 "data_size": 65536
00:30:20.599 },
00:30:20.599 {
00:30:20.599 "name": "BaseBdev4",
00:30:20.599 "uuid": "17c1019c-e5db-5024-b679-ade49e38fc93",
00:30:20.599 "is_configured": true,
00:30:20.599 "data_offset": 0,
00:30:20.599 "data_size": 65536
00:30:20.599 }
00:30:20.599 ]
00:30:20.599 }'
00:30:20.600 11:12:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"'
00:30:20.600 11:12:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:30:20.600 11:12:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"'
00:30:20.600 [2024-07-25 11:12:27.686822] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:30:20.600 11:12:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]]
00:30:20.600 11:12:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
00:30:20.858 [2024-07-25 11:12:27.896915] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:30:21.118 [2024-07-25 11:12:28.055507] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:30:21.118 [2024-07-25 11:12:28.068684] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:30:21.118 [2024-07-25 11:12:28.068741] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:30:21.118 [2024-07-25 11:12:28.068758] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:30:21.118 [2024-07-25 11:12:28.116681] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000010a50
00:30:21.118 11:12:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:30:21.118 11:12:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1
00:30:21.118 11:12:28 bdev_raid.raid_rebuild_test_io --
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:21.118 11:12:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:21.118 11:12:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:21.118 11:12:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:21.118 11:12:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:21.118 11:12:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:21.118 11:12:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:21.118 11:12:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:21.118 11:12:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:21.118 11:12:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:21.378 11:12:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:21.378 "name": "raid_bdev1", 00:30:21.378 "uuid": "f125ecba-f612-4f6e-a69b-23cf4fc35dc4", 00:30:21.378 "strip_size_kb": 0, 00:30:21.378 "state": "online", 00:30:21.378 "raid_level": "raid1", 00:30:21.378 "superblock": false, 00:30:21.378 "num_base_bdevs": 4, 00:30:21.378 "num_base_bdevs_discovered": 3, 00:30:21.378 "num_base_bdevs_operational": 3, 00:30:21.378 "base_bdevs_list": [ 00:30:21.378 { 00:30:21.378 "name": null, 00:30:21.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:21.378 "is_configured": false, 00:30:21.378 "data_offset": 0, 00:30:21.378 "data_size": 65536 00:30:21.378 }, 00:30:21.378 { 00:30:21.378 "name": "BaseBdev2", 00:30:21.378 "uuid": "7171dbc2-14d2-58cd-818c-98236cc1619d", 00:30:21.378 "is_configured": true, 00:30:21.378 
"data_offset": 0, 00:30:21.378 "data_size": 65536 00:30:21.378 }, 00:30:21.378 { 00:30:21.378 "name": "BaseBdev3", 00:30:21.378 "uuid": "52237225-d475-5b76-8be4-f739555050e7", 00:30:21.378 "is_configured": true, 00:30:21.378 "data_offset": 0, 00:30:21.378 "data_size": 65536 00:30:21.378 }, 00:30:21.378 { 00:30:21.378 "name": "BaseBdev4", 00:30:21.378 "uuid": "17c1019c-e5db-5024-b679-ade49e38fc93", 00:30:21.378 "is_configured": true, 00:30:21.378 "data_offset": 0, 00:30:21.378 "data_size": 65536 00:30:21.378 } 00:30:21.378 ] 00:30:21.378 }' 00:30:21.378 11:12:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:21.378 11:12:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:21.945 11:12:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:21.945 11:12:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:21.945 11:12:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:21.945 11:12:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:21.945 11:12:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:21.945 11:12:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:21.945 11:12:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:22.203 11:12:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:22.203 "name": "raid_bdev1", 00:30:22.203 "uuid": "f125ecba-f612-4f6e-a69b-23cf4fc35dc4", 00:30:22.203 "strip_size_kb": 0, 00:30:22.203 "state": "online", 00:30:22.203 "raid_level": "raid1", 00:30:22.203 "superblock": false, 00:30:22.203 "num_base_bdevs": 4, 00:30:22.203 
"num_base_bdevs_discovered": 3, 00:30:22.203 "num_base_bdevs_operational": 3, 00:30:22.203 "base_bdevs_list": [ 00:30:22.203 { 00:30:22.203 "name": null, 00:30:22.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:22.203 "is_configured": false, 00:30:22.203 "data_offset": 0, 00:30:22.203 "data_size": 65536 00:30:22.203 }, 00:30:22.203 { 00:30:22.203 "name": "BaseBdev2", 00:30:22.203 "uuid": "7171dbc2-14d2-58cd-818c-98236cc1619d", 00:30:22.203 "is_configured": true, 00:30:22.203 "data_offset": 0, 00:30:22.203 "data_size": 65536 00:30:22.203 }, 00:30:22.204 { 00:30:22.204 "name": "BaseBdev3", 00:30:22.204 "uuid": "52237225-d475-5b76-8be4-f739555050e7", 00:30:22.204 "is_configured": true, 00:30:22.204 "data_offset": 0, 00:30:22.204 "data_size": 65536 00:30:22.204 }, 00:30:22.204 { 00:30:22.204 "name": "BaseBdev4", 00:30:22.204 "uuid": "17c1019c-e5db-5024-b679-ade49e38fc93", 00:30:22.204 "is_configured": true, 00:30:22.204 "data_offset": 0, 00:30:22.204 "data_size": 65536 00:30:22.204 } 00:30:22.204 ] 00:30:22.204 }' 00:30:22.204 11:12:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:22.204 11:12:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:22.204 11:12:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:22.462 11:12:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:22.462 11:12:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:22.462 [2024-07-25 11:12:29.548426] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:22.721 11:12:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@678 -- # sleep 1 00:30:22.721 [2024-07-25 11:12:29.630711] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000010bf0 00:30:22.721 [2024-07-25 11:12:29.633119] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:22.721 [2024-07-25 11:12:29.734898] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:22.721 [2024-07-25 11:12:29.735250] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:22.980 [2024-07-25 11:12:29.987729] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:22.980 [2024-07-25 11:12:29.988408] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:23.546 [2024-07-25 11:12:30.368307] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:23.546 [2024-07-25 11:12:30.368639] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:23.546 [2024-07-25 11:12:30.598615] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:23.546 [2024-07-25 11:12:30.598885] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:23.546 11:12:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:23.546 11:12:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:23.546 11:12:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:23.546 11:12:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:23.546 11:12:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- 
# local raid_bdev_info 00:30:23.546 11:12:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:23.546 11:12:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:23.805 11:12:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:23.805 "name": "raid_bdev1", 00:30:23.805 "uuid": "f125ecba-f612-4f6e-a69b-23cf4fc35dc4", 00:30:23.805 "strip_size_kb": 0, 00:30:23.805 "state": "online", 00:30:23.805 "raid_level": "raid1", 00:30:23.805 "superblock": false, 00:30:23.805 "num_base_bdevs": 4, 00:30:23.805 "num_base_bdevs_discovered": 4, 00:30:23.805 "num_base_bdevs_operational": 4, 00:30:23.805 "process": { 00:30:23.805 "type": "rebuild", 00:30:23.805 "target": "spare", 00:30:23.805 "progress": { 00:30:23.805 "blocks": 12288, 00:30:23.805 "percent": 18 00:30:23.805 } 00:30:23.805 }, 00:30:23.805 "base_bdevs_list": [ 00:30:23.805 { 00:30:23.805 "name": "spare", 00:30:23.805 "uuid": "5210287f-9de7-53c1-b54b-d938546e64d7", 00:30:23.805 "is_configured": true, 00:30:23.805 "data_offset": 0, 00:30:23.805 "data_size": 65536 00:30:23.805 }, 00:30:23.805 { 00:30:23.805 "name": "BaseBdev2", 00:30:23.805 "uuid": "7171dbc2-14d2-58cd-818c-98236cc1619d", 00:30:23.805 "is_configured": true, 00:30:23.805 "data_offset": 0, 00:30:23.805 "data_size": 65536 00:30:23.805 }, 00:30:23.805 { 00:30:23.805 "name": "BaseBdev3", 00:30:23.805 "uuid": "52237225-d475-5b76-8be4-f739555050e7", 00:30:23.805 "is_configured": true, 00:30:23.805 "data_offset": 0, 00:30:23.805 "data_size": 65536 00:30:23.805 }, 00:30:23.805 { 00:30:23.805 "name": "BaseBdev4", 00:30:23.805 "uuid": "17c1019c-e5db-5024-b679-ade49e38fc93", 00:30:23.805 "is_configured": true, 00:30:23.805 "data_offset": 0, 00:30:23.805 "data_size": 65536 00:30:23.805 } 00:30:23.805 ] 00:30:23.805 }' 00:30:23.805 11:12:30 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:23.805 11:12:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:23.805 11:12:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:24.064 11:12:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:24.064 11:12:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@681 -- # '[' false = true ']' 00:30:24.064 11:12:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:30:24.064 11:12:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:30:24.064 11:12:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # '[' 4 -gt 2 ']' 00:30:24.064 11:12:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:30:24.064 [2024-07-25 11:12:30.953013] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:30:24.064 [2024-07-25 11:12:31.082155] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:24.064 [2024-07-25 11:12:31.082460] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:24.064 [2024-07-25 11:12:31.150555] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:24.323 [2024-07-25 11:12:31.334498] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000010a50 00:30:24.323 [2024-07-25 11:12:31.334536] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000010bf0 00:30:24.323 11:12:31 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@713 -- # base_bdevs[1]= 00:30:24.323 11:12:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # (( num_base_bdevs_operational-- )) 00:30:24.323 11:12:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@717 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:24.323 11:12:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:24.323 11:12:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:24.323 11:12:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:24.323 11:12:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:24.323 11:12:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:24.323 11:12:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:24.619 [2024-07-25 11:12:31.464579] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:30:24.619 11:12:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:24.619 "name": "raid_bdev1", 00:30:24.619 "uuid": "f125ecba-f612-4f6e-a69b-23cf4fc35dc4", 00:30:24.619 "strip_size_kb": 0, 00:30:24.619 "state": "online", 00:30:24.619 "raid_level": "raid1", 00:30:24.619 "superblock": false, 00:30:24.619 "num_base_bdevs": 4, 00:30:24.619 "num_base_bdevs_discovered": 3, 00:30:24.619 "num_base_bdevs_operational": 3, 00:30:24.619 "process": { 00:30:24.619 "type": "rebuild", 00:30:24.619 "target": "spare", 00:30:24.619 "progress": { 00:30:24.619 "blocks": 20480, 00:30:24.619 "percent": 31 00:30:24.619 } 00:30:24.619 }, 00:30:24.619 "base_bdevs_list": [ 00:30:24.619 { 00:30:24.619 "name": "spare", 00:30:24.619 "uuid": 
"5210287f-9de7-53c1-b54b-d938546e64d7", 00:30:24.619 "is_configured": true, 00:30:24.619 "data_offset": 0, 00:30:24.619 "data_size": 65536 00:30:24.619 }, 00:30:24.619 { 00:30:24.619 "name": null, 00:30:24.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:24.619 "is_configured": false, 00:30:24.619 "data_offset": 0, 00:30:24.619 "data_size": 65536 00:30:24.619 }, 00:30:24.620 { 00:30:24.620 "name": "BaseBdev3", 00:30:24.620 "uuid": "52237225-d475-5b76-8be4-f739555050e7", 00:30:24.620 "is_configured": true, 00:30:24.620 "data_offset": 0, 00:30:24.620 "data_size": 65536 00:30:24.620 }, 00:30:24.620 { 00:30:24.620 "name": "BaseBdev4", 00:30:24.620 "uuid": "17c1019c-e5db-5024-b679-ade49e38fc93", 00:30:24.620 "is_configured": true, 00:30:24.620 "data_offset": 0, 00:30:24.620 "data_size": 65536 00:30:24.620 } 00:30:24.620 ] 00:30:24.620 }' 00:30:24.620 11:12:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:24.620 11:12:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:24.620 11:12:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:24.620 [2024-07-25 11:12:31.683091] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:30:24.620 11:12:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:24.620 11:12:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # local timeout=1035 00:30:24.620 11:12:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:30:24.620 11:12:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:24.620 11:12:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:24.620 11:12:31 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:24.620 11:12:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:24.620 11:12:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:24.620 11:12:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:24.620 11:12:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:24.892 11:12:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:24.892 "name": "raid_bdev1", 00:30:24.892 "uuid": "f125ecba-f612-4f6e-a69b-23cf4fc35dc4", 00:30:24.892 "strip_size_kb": 0, 00:30:24.892 "state": "online", 00:30:24.892 "raid_level": "raid1", 00:30:24.892 "superblock": false, 00:30:24.892 "num_base_bdevs": 4, 00:30:24.892 "num_base_bdevs_discovered": 3, 00:30:24.892 "num_base_bdevs_operational": 3, 00:30:24.892 "process": { 00:30:24.892 "type": "rebuild", 00:30:24.892 "target": "spare", 00:30:24.892 "progress": { 00:30:24.892 "blocks": 24576, 00:30:24.892 "percent": 37 00:30:24.892 } 00:30:24.892 }, 00:30:24.892 "base_bdevs_list": [ 00:30:24.892 { 00:30:24.892 "name": "spare", 00:30:24.892 "uuid": "5210287f-9de7-53c1-b54b-d938546e64d7", 00:30:24.892 "is_configured": true, 00:30:24.892 "data_offset": 0, 00:30:24.892 "data_size": 65536 00:30:24.892 }, 00:30:24.892 { 00:30:24.892 "name": null, 00:30:24.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:24.892 "is_configured": false, 00:30:24.892 "data_offset": 0, 00:30:24.892 "data_size": 65536 00:30:24.892 }, 00:30:24.892 { 00:30:24.892 "name": "BaseBdev3", 00:30:24.892 "uuid": "52237225-d475-5b76-8be4-f739555050e7", 00:30:24.892 "is_configured": true, 00:30:24.892 "data_offset": 0, 00:30:24.892 "data_size": 65536 00:30:24.892 }, 00:30:24.892 { 00:30:24.892 "name": "BaseBdev4", 
00:30:24.892 "uuid": "17c1019c-e5db-5024-b679-ade49e38fc93", 00:30:24.892 "is_configured": true, 00:30:24.893 "data_offset": 0, 00:30:24.893 "data_size": 65536 00:30:24.893 } 00:30:24.893 ] 00:30:24.893 }' 00:30:24.893 11:12:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:24.893 11:12:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:24.893 11:12:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:25.151 11:12:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:25.151 11:12:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:30:25.410 [2024-07-25 11:12:32.386048] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:30:25.410 [2024-07-25 11:12:32.505755] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:30:25.669 [2024-07-25 11:12:32.728151] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:30:25.928 11:12:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:30:25.928 11:12:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:25.928 11:12:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:25.928 11:12:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:25.928 11:12:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:25.928 11:12:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:25.928 11:12:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:25.929 11:12:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:26.188 [2024-07-25 11:12:33.066694] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:30:26.188 11:12:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:26.188 "name": "raid_bdev1", 00:30:26.188 "uuid": "f125ecba-f612-4f6e-a69b-23cf4fc35dc4", 00:30:26.188 "strip_size_kb": 0, 00:30:26.188 "state": "online", 00:30:26.188 "raid_level": "raid1", 00:30:26.188 "superblock": false, 00:30:26.188 "num_base_bdevs": 4, 00:30:26.188 "num_base_bdevs_discovered": 3, 00:30:26.188 "num_base_bdevs_operational": 3, 00:30:26.188 "process": { 00:30:26.188 "type": "rebuild", 00:30:26.188 "target": "spare", 00:30:26.188 "progress": { 00:30:26.188 "blocks": 47104, 00:30:26.188 "percent": 71 00:30:26.188 } 00:30:26.188 }, 00:30:26.188 "base_bdevs_list": [ 00:30:26.188 { 00:30:26.188 "name": "spare", 00:30:26.188 "uuid": "5210287f-9de7-53c1-b54b-d938546e64d7", 00:30:26.188 "is_configured": true, 00:30:26.188 "data_offset": 0, 00:30:26.188 "data_size": 65536 00:30:26.188 }, 00:30:26.188 { 00:30:26.188 "name": null, 00:30:26.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:26.188 "is_configured": false, 00:30:26.188 "data_offset": 0, 00:30:26.188 "data_size": 65536 00:30:26.188 }, 00:30:26.188 { 00:30:26.188 "name": "BaseBdev3", 00:30:26.188 "uuid": "52237225-d475-5b76-8be4-f739555050e7", 00:30:26.188 "is_configured": true, 00:30:26.188 "data_offset": 0, 00:30:26.188 "data_size": 65536 00:30:26.188 }, 00:30:26.188 { 00:30:26.188 "name": "BaseBdev4", 00:30:26.188 "uuid": "17c1019c-e5db-5024-b679-ade49e38fc93", 00:30:26.188 "is_configured": true, 00:30:26.188 "data_offset": 0, 00:30:26.188 "data_size": 65536 00:30:26.188 } 
00:30:26.188 ] 00:30:26.188 }' 00:30:26.188 11:12:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:26.446 11:12:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:26.446 11:12:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:26.446 11:12:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:26.446 11:12:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:30:26.446 [2024-07-25 11:12:33.398929] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:30:26.704 [2024-07-25 11:12:33.724853] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:30:26.964 [2024-07-25 11:12:33.944871] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:30:26.964 [2024-07-25 11:12:33.945288] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:30:27.534 11:12:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:30:27.534 11:12:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:27.534 11:12:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:27.534 11:12:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:27.534 11:12:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:27.534 11:12:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:27.534 11:12:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:27.534 11:12:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:27.534 [2024-07-25 11:12:34.387486] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:30:27.534 [2024-07-25 11:12:34.495426] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:30:27.534 [2024-07-25 11:12:34.499283] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:27.534 11:12:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:27.534 "name": "raid_bdev1", 00:30:27.534 "uuid": "f125ecba-f612-4f6e-a69b-23cf4fc35dc4", 00:30:27.534 "strip_size_kb": 0, 00:30:27.534 "state": "online", 00:30:27.534 "raid_level": "raid1", 00:30:27.534 "superblock": false, 00:30:27.534 "num_base_bdevs": 4, 00:30:27.534 "num_base_bdevs_discovered": 3, 00:30:27.534 "num_base_bdevs_operational": 3, 00:30:27.534 "base_bdevs_list": [ 00:30:27.534 { 00:30:27.534 "name": "spare", 00:30:27.534 "uuid": "5210287f-9de7-53c1-b54b-d938546e64d7", 00:30:27.534 "is_configured": true, 00:30:27.534 "data_offset": 0, 00:30:27.534 "data_size": 65536 00:30:27.534 }, 00:30:27.534 { 00:30:27.534 "name": null, 00:30:27.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:27.534 "is_configured": false, 00:30:27.534 "data_offset": 0, 00:30:27.534 "data_size": 65536 00:30:27.534 }, 00:30:27.534 { 00:30:27.534 "name": "BaseBdev3", 00:30:27.534 "uuid": "52237225-d475-5b76-8be4-f739555050e7", 00:30:27.534 "is_configured": true, 00:30:27.534 "data_offset": 0, 00:30:27.534 "data_size": 65536 00:30:27.534 }, 00:30:27.534 { 00:30:27.534 "name": "BaseBdev4", 00:30:27.534 "uuid": "17c1019c-e5db-5024-b679-ade49e38fc93", 00:30:27.534 "is_configured": true, 00:30:27.534 "data_offset": 0, 00:30:27.534 "data_size": 65536 
00:30:27.534 } 00:30:27.534 ] 00:30:27.534 }' 00:30:27.534 11:12:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:27.793 11:12:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:30:27.793 11:12:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:27.793 11:12:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:30:27.793 11:12:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # break 00:30:27.793 11:12:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:27.793 11:12:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:27.793 11:12:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:27.793 11:12:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:27.793 11:12:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:27.793 11:12:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:27.793 11:12:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:28.053 11:12:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:28.053 "name": "raid_bdev1", 00:30:28.053 "uuid": "f125ecba-f612-4f6e-a69b-23cf4fc35dc4", 00:30:28.053 "strip_size_kb": 0, 00:30:28.053 "state": "online", 00:30:28.053 "raid_level": "raid1", 00:30:28.053 "superblock": false, 00:30:28.053 "num_base_bdevs": 4, 00:30:28.053 "num_base_bdevs_discovered": 3, 00:30:28.053 "num_base_bdevs_operational": 3, 00:30:28.053 "base_bdevs_list": [ 00:30:28.053 { 00:30:28.053 "name": 
"spare", 00:30:28.053 "uuid": "5210287f-9de7-53c1-b54b-d938546e64d7", 00:30:28.053 "is_configured": true, 00:30:28.053 "data_offset": 0, 00:30:28.053 "data_size": 65536 00:30:28.053 }, 00:30:28.053 { 00:30:28.053 "name": null, 00:30:28.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:28.053 "is_configured": false, 00:30:28.053 "data_offset": 0, 00:30:28.053 "data_size": 65536 00:30:28.053 }, 00:30:28.053 { 00:30:28.053 "name": "BaseBdev3", 00:30:28.053 "uuid": "52237225-d475-5b76-8be4-f739555050e7", 00:30:28.053 "is_configured": true, 00:30:28.053 "data_offset": 0, 00:30:28.053 "data_size": 65536 00:30:28.053 }, 00:30:28.053 { 00:30:28.053 "name": "BaseBdev4", 00:30:28.053 "uuid": "17c1019c-e5db-5024-b679-ade49e38fc93", 00:30:28.053 "is_configured": true, 00:30:28.053 "data_offset": 0, 00:30:28.053 "data_size": 65536 00:30:28.053 } 00:30:28.053 ] 00:30:28.053 }' 00:30:28.053 11:12:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:28.053 11:12:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:28.053 11:12:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:28.053 11:12:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:28.053 11:12:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:28.053 11:12:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:28.053 11:12:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:28.053 11:12:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:28.053 11:12:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:28.053 11:12:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:30:28.053 11:12:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:28.053 11:12:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:28.053 11:12:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:28.053 11:12:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:28.053 11:12:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:28.053 11:12:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:28.312 11:12:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:28.312 "name": "raid_bdev1", 00:30:28.313 "uuid": "f125ecba-f612-4f6e-a69b-23cf4fc35dc4", 00:30:28.313 "strip_size_kb": 0, 00:30:28.313 "state": "online", 00:30:28.313 "raid_level": "raid1", 00:30:28.313 "superblock": false, 00:30:28.313 "num_base_bdevs": 4, 00:30:28.313 "num_base_bdevs_discovered": 3, 00:30:28.313 "num_base_bdevs_operational": 3, 00:30:28.313 "base_bdevs_list": [ 00:30:28.313 { 00:30:28.313 "name": "spare", 00:30:28.313 "uuid": "5210287f-9de7-53c1-b54b-d938546e64d7", 00:30:28.313 "is_configured": true, 00:30:28.313 "data_offset": 0, 00:30:28.313 "data_size": 65536 00:30:28.313 }, 00:30:28.313 { 00:30:28.313 "name": null, 00:30:28.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:28.313 "is_configured": false, 00:30:28.313 "data_offset": 0, 00:30:28.313 "data_size": 65536 00:30:28.313 }, 00:30:28.313 { 00:30:28.313 "name": "BaseBdev3", 00:30:28.313 "uuid": "52237225-d475-5b76-8be4-f739555050e7", 00:30:28.313 "is_configured": true, 00:30:28.313 "data_offset": 0, 00:30:28.313 "data_size": 65536 00:30:28.313 }, 00:30:28.313 { 00:30:28.313 "name": "BaseBdev4", 00:30:28.313 "uuid": 
"17c1019c-e5db-5024-b679-ade49e38fc93", 00:30:28.313 "is_configured": true, 00:30:28.313 "data_offset": 0, 00:30:28.313 "data_size": 65536 00:30:28.313 } 00:30:28.313 ] 00:30:28.313 }' 00:30:28.313 11:12:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:28.313 11:12:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:28.881 11:12:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:29.141 [2024-07-25 11:12:36.040599] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:29.141 [2024-07-25 11:12:36.040639] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:29.141 00:30:29.141 Latency(us) 00:30:29.141 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:29.141 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:30:29.141 raid_bdev1 : 10.96 92.43 277.30 0.00 0.00 14456.97 332.60 122473.68 00:30:29.141 =================================================================================================================== 00:30:29.141 Total : 92.43 277.30 0.00 0.00 14456.97 332.60 122473.68 00:30:29.141 [2024-07-25 11:12:36.171427] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:29.141 [2024-07-25 11:12:36.171476] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:29.141 [2024-07-25 11:12:36.171591] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:29.141 [2024-07-25 11:12:36.171612] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:30:29.141 0 00:30:29.141 11:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:29.141 11:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # jq length 00:30:29.400 11:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:30:29.400 11:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:30:29.400 11:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@738 -- # '[' true = true ']' 00:30:29.400 11:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:30:29.400 11:12:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:29.400 11:12:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:30:29.400 11:12:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:29.400 11:12:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:30:29.400 11:12:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:29.400 11:12:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:30:29.400 11:12:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:29.400 11:12:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:29.400 11:12:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:30:29.660 /dev/nbd0 00:30:29.660 11:12:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:29.660 11:12:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:29.660 11:12:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 
00:30:29.660 11:12:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:30:29.660 11:12:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:29.660 11:12:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:29.660 11:12:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:30:29.660 11:12:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:30:29.660 11:12:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:30:29.660 11:12:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:29.660 11:12:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:29.660 1+0 records in 00:30:29.660 1+0 records out 00:30:29.660 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000305032 s, 13.4 MB/s 00:30:29.660 11:12:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:29.660 11:12:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:30:29.660 11:12:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:29.660 11:12:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:29.660 11:12:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:30:29.660 11:12:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:29.660 11:12:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:29.660 11:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@741 -- # for bdev in 
"${base_bdevs[@]:1}" 00:30:29.660 11:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' -z '' ']' 00:30:29.660 11:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # continue 00:30:29.660 11:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:30:29.660 11:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev3 ']' 00:30:29.660 11:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:30:29.660 11:12:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:29.660 11:12:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:30:29.660 11:12:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:29.660 11:12:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:30:29.660 11:12:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:29.660 11:12:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:30:29.660 11:12:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:29.660 11:12:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:29.660 11:12:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:30:29.919 /dev/nbd1 00:30:29.919 11:12:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:29.919 11:12:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:29.919 11:12:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:30:29.919 11:12:36 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@869 -- # local i 00:30:29.919 11:12:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:29.919 11:12:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:29.919 11:12:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:30:29.919 11:12:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:30:29.919 11:12:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:30:29.919 11:12:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:29.919 11:12:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:29.919 1+0 records in 00:30:29.919 1+0 records out 00:30:29.919 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000170615 s, 24.0 MB/s 00:30:29.919 11:12:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:29.919 11:12:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:30:29.919 11:12:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:29.919 11:12:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:29.919 11:12:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:30:29.919 11:12:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:29.919 11:12:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:29.919 11:12:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@746 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:30:30.178 11:12:37 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:30:30.178 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:30.178 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:30:30.178 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:30.178 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:30:30.178 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:30.178 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:30:30.437 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:30.437 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:30.437 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:30.437 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:30.437 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:30.437 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:30.437 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:30:30.437 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:30.437 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:30:30.437 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev4 ']' 00:30:30.437 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 
00:30:30.437 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:30.437 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:30:30.437 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:30.437 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:30:30.437 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:30.437 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:30:30.437 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:30.437 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:30.437 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:30:30.697 /dev/nbd1 00:30:30.697 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:30.697 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:30.697 11:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:30:30.697 11:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:30:30.697 11:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:30.697 11:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:30.697 11:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:30:30.697 11:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:30:30.697 11:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 
00:30:30.697 11:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:30.697 11:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:30.697 1+0 records in 00:30:30.697 1+0 records out 00:30:30.697 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291869 s, 14.0 MB/s 00:30:30.697 11:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:30.697 11:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:30:30.697 11:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:30.697 11:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:30.697 11:12:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:30:30.697 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:30.697 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:30.697 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@746 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:30:30.697 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:30:30.697 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:30.697 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:30:30.697 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:30.697 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:30:30.697 11:12:37 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:30.697 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:30:30.956 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:30.956 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:30.956 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:30.956 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:30.956 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:30.956 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:30.956 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:30:30.956 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:30.956 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:30:30.956 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:30.956 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:30:30.956 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:30.956 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:30:30.956 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:30.956 11:12:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:30:31.215 11:12:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:30:31.215 11:12:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:31.215 11:12:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:31.215 11:12:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:31.215 11:12:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:31.215 11:12:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:31.215 11:12:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:30:31.215 11:12:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:31.215 11:12:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@758 -- # '[' false = true ']' 00:30:31.215 11:12:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@798 -- # killprocess 3727199 00:30:31.215 11:12:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 3727199 ']' 00:30:31.215 11:12:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 3727199 00:30:31.215 11:12:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:30:31.215 11:12:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:31.215 11:12:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3727199 00:30:31.215 11:12:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:31.215 11:12:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:31.215 11:12:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3727199' 00:30:31.215 killing process with pid 3727199 00:30:31.215 11:12:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 3727199 00:30:31.215 Received 
shutdown signal, test time was about 13.109920 seconds 00:30:31.215 00:30:31.215 Latency(us) 00:30:31.215 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:31.215 =================================================================================================================== 00:30:31.215 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:31.215 [2024-07-25 11:12:38.296620] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:31.215 11:12:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 3727199 00:30:31.784 [2024-07-25 11:12:38.789518] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@800 -- # return 0 00:30:33.689 00:30:33.689 real 0m20.876s 00:30:33.689 user 0m30.397s 00:30:33.689 sys 0m3.418s 00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:33.689 ************************************ 00:30:33.689 END TEST raid_rebuild_test_io 00:30:33.689 ************************************ 00:30:33.689 11:12:40 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:30:33.689 11:12:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:30:33.689 11:12:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:33.689 11:12:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:33.689 ************************************ 00:30:33.689 START TEST raid_rebuild_test_sb_io 00:30:33.689 ************************************ 00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 
00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=4 00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@587 -- # local background_io=true 00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # local verify=true 00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev3 00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # echo BaseBdev4 00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 
'BaseBdev3' 'BaseBdev4') 00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # local strip_size 00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # local create_arg 00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@594 -- # local data_offset 00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # raid_pid=3730880 00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # waitforlisten 3730880 /var/tmp/spdk-raid.sock 00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 3730880 ']' 00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:33.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:30:33.689 11:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:33.690 11:12:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:33.690 [2024-07-25 11:12:40.784672] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:30:33.690 [2024-07-25 11:12:40.784769] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3730880 ] 00:30:33.690 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:33.690 Zero copy mechanism will not be used. 00:30:33.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:33.948 EAL: Requested device 0000:3d:01.0 cannot be used 00:30:33.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:33.948 EAL: Requested device 0000:3d:01.1 cannot be used 00:30:33.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:33.948 EAL: Requested device 0000:3d:01.2 cannot be used 00:30:33.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:33.948 EAL: Requested device 0000:3d:01.3 cannot be used 00:30:33.948 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:33.948 EAL: Requested device 0000:3d:01.4 cannot be used 00:30:33.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:33.949 EAL: Requested device 0000:3d:01.5 cannot be used 00:30:33.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:33.949 EAL: Requested device 0000:3d:01.6 cannot be used 00:30:33.949 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:33.949 EAL: Requested device 0000:3d:01.7 cannot be used 00:30:33.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:33.949 EAL: Requested device 0000:3d:02.0 cannot be used 00:30:33.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:33.949 EAL: Requested device 0000:3d:02.1 cannot be used 00:30:33.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:33.949 EAL: Requested device 0000:3d:02.2 cannot be used 00:30:33.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:33.949 EAL: Requested device 0000:3d:02.3 cannot be used 00:30:33.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:33.949 EAL: Requested device 0000:3d:02.4 cannot be used 00:30:33.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:33.949 EAL: Requested device 0000:3d:02.5 cannot be used 00:30:33.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:33.949 EAL: Requested device 0000:3d:02.6 cannot be used 00:30:33.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:33.949 EAL: Requested device 0000:3d:02.7 cannot be used 00:30:33.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:33.949 EAL: Requested device 0000:3f:01.0 cannot be used 00:30:33.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:33.949 EAL: Requested device 0000:3f:01.1 cannot be used 00:30:33.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:33.949 EAL: Requested device 0000:3f:01.2 cannot be used 00:30:33.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:33.949 EAL: Requested device 0000:3f:01.3 cannot be used 00:30:33.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:33.949 EAL: Requested device 0000:3f:01.4 cannot be used 00:30:33.949 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:33.949 EAL: Requested device 0000:3f:01.5 cannot be used 00:30:33.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:33.949 EAL: Requested device 0000:3f:01.6 cannot be used 00:30:33.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:33.949 EAL: Requested device 0000:3f:01.7 cannot be used 00:30:33.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:33.949 EAL: Requested device 0000:3f:02.0 cannot be used 00:30:33.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:33.949 EAL: Requested device 0000:3f:02.1 cannot be used 00:30:33.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:33.949 EAL: Requested device 0000:3f:02.2 cannot be used 00:30:33.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:33.949 EAL: Requested device 0000:3f:02.3 cannot be used 00:30:33.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:33.949 EAL: Requested device 0000:3f:02.4 cannot be used 00:30:33.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:33.949 EAL: Requested device 0000:3f:02.5 cannot be used 00:30:33.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:33.949 EAL: Requested device 0000:3f:02.6 cannot be used 00:30:33.949 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:30:33.949 EAL: Requested device 0000:3f:02.7 cannot be used 00:30:33.949 [2024-07-25 11:12:40.981899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:34.207 [2024-07-25 11:12:41.269917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:34.775 [2024-07-25 11:12:41.595683] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:34.775 [2024-07-25 11:12:41.595719] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:34.775 11:12:41 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:34.775 11:12:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:30:34.775 11:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:30:34.775 11:12:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:30:35.034 BaseBdev1_malloc 00:30:35.034 11:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:35.294 [2024-07-25 11:12:42.247551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:35.294 [2024-07-25 11:12:42.247619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:35.294 [2024-07-25 11:12:42.247648] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:30:35.294 [2024-07-25 11:12:42.247666] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:35.294 [2024-07-25 11:12:42.250256] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:35.294 [2024-07-25 11:12:42.250291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:35.294 BaseBdev1 00:30:35.294 11:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:30:35.294 11:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:30:35.553 BaseBdev2_malloc 00:30:35.553 11:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:30:35.813 [2024-07-25 11:12:42.733584] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:30:35.813 [2024-07-25 11:12:42.733641] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:35.813 [2024-07-25 11:12:42.733667] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:30:35.813 [2024-07-25 11:12:42.733689] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:35.813 [2024-07-25 11:12:42.736409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:35.813 [2024-07-25 11:12:42.736444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:35.813 BaseBdev2 00:30:35.813 11:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:30:35.813 11:12:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:30:36.072 BaseBdev3_malloc 00:30:36.072 11:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:30:36.330 [2024-07-25 11:12:43.223261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:30:36.330 [2024-07-25 11:12:43.223329] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:36.330 [2024-07-25 11:12:43.223363] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:30:36.330 [2024-07-25 11:12:43.223381] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:30:36.330 [2024-07-25 11:12:43.226076] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:36.330 [2024-07-25 11:12:43.226111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:30:36.330 BaseBdev3 00:30:36.330 11:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:30:36.330 11:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:30:36.590 BaseBdev4_malloc 00:30:36.590 11:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:30:36.590 [2024-07-25 11:12:43.684066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:30:36.590 [2024-07-25 11:12:43.684133] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:36.590 [2024-07-25 11:12:43.684167] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041a80 00:30:36.590 [2024-07-25 11:12:43.684186] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:36.590 [2024-07-25 11:12:43.686904] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:36.590 [2024-07-25 11:12:43.686939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:30:36.590 BaseBdev4 00:30:36.590 11:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:30:36.848 spare_malloc 00:30:37.107 11:12:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:30:37.107 spare_delay 00:30:37.107 11:12:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:37.365 [2024-07-25 11:12:44.394552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:37.365 [2024-07-25 11:12:44.394609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:37.365 [2024-07-25 11:12:44.394634] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80 00:30:37.365 [2024-07-25 11:12:44.394652] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:37.365 [2024-07-25 11:12:44.397166] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:37.365 [2024-07-25 11:12:44.397200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:37.365 spare 00:30:37.366 11:12:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:30:37.624 [2024-07-25 11:12:44.627225] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:37.624 [2024-07-25 11:12:44.629417] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:37.624 [2024-07-25 11:12:44.629487] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:37.624 [2024-07-25 11:12:44.629552] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:37.624 [2024-07-25 11:12:44.629772] 
bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:30:37.624 [2024-07-25 11:12:44.629790] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:37.624 [2024-07-25 11:12:44.630113] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:30:37.624 [2024-07-25 11:12:44.630380] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:30:37.624 [2024-07-25 11:12:44.630397] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:30:37.624 [2024-07-25 11:12:44.630598] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:37.624 11:12:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:30:37.624 11:12:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:37.624 11:12:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:37.624 11:12:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:37.624 11:12:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:37.624 11:12:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:37.624 11:12:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:37.624 11:12:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:37.624 11:12:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:37.624 11:12:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:37.624 11:12:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:37.624 11:12:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:37.883 11:12:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:37.883 "name": "raid_bdev1", 00:30:37.883 "uuid": "6ee14795-7f9a-4804-8b19-54aa920d7a67", 00:30:37.883 "strip_size_kb": 0, 00:30:37.883 "state": "online", 00:30:37.883 "raid_level": "raid1", 00:30:37.883 "superblock": true, 00:30:37.883 "num_base_bdevs": 4, 00:30:37.883 "num_base_bdevs_discovered": 4, 00:30:37.883 "num_base_bdevs_operational": 4, 00:30:37.883 "base_bdevs_list": [ 00:30:37.883 { 00:30:37.883 "name": "BaseBdev1", 00:30:37.883 "uuid": "3c8c68f0-a411-56b4-9b58-15b6d91e7001", 00:30:37.883 "is_configured": true, 00:30:37.883 "data_offset": 2048, 00:30:37.883 "data_size": 63488 00:30:37.883 }, 00:30:37.883 { 00:30:37.883 "name": "BaseBdev2", 00:30:37.883 "uuid": "72f98486-e53d-5648-855f-77b88918de01", 00:30:37.883 "is_configured": true, 00:30:37.883 "data_offset": 2048, 00:30:37.883 "data_size": 63488 00:30:37.883 }, 00:30:37.883 { 00:30:37.883 "name": "BaseBdev3", 00:30:37.883 "uuid": "c67cb6e7-66fa-557d-9a11-00a5536cad0b", 00:30:37.883 "is_configured": true, 00:30:37.883 "data_offset": 2048, 00:30:37.883 "data_size": 63488 00:30:37.883 }, 00:30:37.883 { 00:30:37.883 "name": "BaseBdev4", 00:30:37.883 "uuid": "122e03dd-e7ec-55a6-a387-b142759c9a91", 00:30:37.883 "is_configured": true, 00:30:37.883 "data_offset": 2048, 00:30:37.883 "data_size": 63488 00:30:37.883 } 00:30:37.883 ] 00:30:37.883 }' 00:30:37.883 11:12:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:37.883 11:12:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:38.521 11:12:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:38.521 11:12:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:30:38.779 [2024-07-25 11:12:45.670507] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:38.779 11:12:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=63488 00:30:38.779 11:12:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:38.779 11:12:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:30:39.037 11:12:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@634 -- # data_offset=2048 00:30:39.037 11:12:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@636 -- # '[' true = true ']' 00:30:39.037 11:12:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@638 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:30:39.038 11:12:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:30:39.038 [2024-07-25 11:12:46.130721] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:39.038 [2024-07-25 11:12:46.130812] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010a50 00:30:39.038 [2024-07-25 11:12:46.133364] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000010a50 00:30:39.038 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:39.038 Zero copy mechanism will not be used. 00:30:39.038 Running I/O for 60 seconds... 
00:30:39.296 11:12:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:39.296 11:12:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:39.296 11:12:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:39.296 11:12:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:39.296 11:12:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:39.296 11:12:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:39.296 11:12:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:39.296 11:12:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:39.296 11:12:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:39.296 11:12:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:39.296 11:12:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:39.296 11:12:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:39.555 11:12:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:39.555 "name": "raid_bdev1", 00:30:39.555 "uuid": "6ee14795-7f9a-4804-8b19-54aa920d7a67", 00:30:39.555 "strip_size_kb": 0, 00:30:39.555 "state": "online", 00:30:39.555 "raid_level": "raid1", 00:30:39.555 "superblock": true, 00:30:39.555 "num_base_bdevs": 4, 00:30:39.555 "num_base_bdevs_discovered": 3, 00:30:39.555 "num_base_bdevs_operational": 3, 00:30:39.555 "base_bdevs_list": [ 00:30:39.555 { 00:30:39.555 "name": null, 
00:30:39.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:39.555 "is_configured": false, 00:30:39.555 "data_offset": 2048, 00:30:39.555 "data_size": 63488 00:30:39.555 }, 00:30:39.555 { 00:30:39.555 "name": "BaseBdev2", 00:30:39.555 "uuid": "72f98486-e53d-5648-855f-77b88918de01", 00:30:39.555 "is_configured": true, 00:30:39.555 "data_offset": 2048, 00:30:39.555 "data_size": 63488 00:30:39.555 }, 00:30:39.555 { 00:30:39.555 "name": "BaseBdev3", 00:30:39.555 "uuid": "c67cb6e7-66fa-557d-9a11-00a5536cad0b", 00:30:39.555 "is_configured": true, 00:30:39.555 "data_offset": 2048, 00:30:39.555 "data_size": 63488 00:30:39.555 }, 00:30:39.555 { 00:30:39.555 "name": "BaseBdev4", 00:30:39.555 "uuid": "122e03dd-e7ec-55a6-a387-b142759c9a91", 00:30:39.555 "is_configured": true, 00:30:39.555 "data_offset": 2048, 00:30:39.555 "data_size": 63488 00:30:39.555 } 00:30:39.555 ] 00:30:39.555 }' 00:30:39.555 11:12:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:39.555 11:12:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:40.124 11:12:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:40.124 [2024-07-25 11:12:47.223914] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:40.383 11:12:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:30:40.383 [2024-07-25 11:12:47.305996] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010b20 00:30:40.383 [2024-07-25 11:12:47.308478] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:40.383 [2024-07-25 11:12:47.420662] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:40.383 [2024-07-25 11:12:47.421947] 
bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:40.641 [2024-07-25 11:12:47.669609] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:41.207 [2024-07-25 11:12:48.186951] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:41.207 [2024-07-25 11:12:48.187649] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:41.207 11:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:41.207 11:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:41.207 11:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:41.207 11:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:41.207 11:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:41.207 11:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:41.207 11:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:41.469 11:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:41.469 "name": "raid_bdev1", 00:30:41.469 "uuid": "6ee14795-7f9a-4804-8b19-54aa920d7a67", 00:30:41.469 "strip_size_kb": 0, 00:30:41.469 "state": "online", 00:30:41.469 "raid_level": "raid1", 00:30:41.469 "superblock": true, 00:30:41.469 "num_base_bdevs": 4, 00:30:41.469 "num_base_bdevs_discovered": 4, 00:30:41.469 "num_base_bdevs_operational": 4, 00:30:41.469 "process": { 
00:30:41.469 "type": "rebuild", 00:30:41.469 "target": "spare", 00:30:41.469 "progress": { 00:30:41.469 "blocks": 12288, 00:30:41.469 "percent": 19 00:30:41.469 } 00:30:41.469 }, 00:30:41.469 "base_bdevs_list": [ 00:30:41.469 { 00:30:41.469 "name": "spare", 00:30:41.469 "uuid": "14bde621-3025-55c2-9f67-2ab149c25ae2", 00:30:41.469 "is_configured": true, 00:30:41.469 "data_offset": 2048, 00:30:41.469 "data_size": 63488 00:30:41.469 }, 00:30:41.469 { 00:30:41.469 "name": "BaseBdev2", 00:30:41.469 "uuid": "72f98486-e53d-5648-855f-77b88918de01", 00:30:41.469 "is_configured": true, 00:30:41.469 "data_offset": 2048, 00:30:41.469 "data_size": 63488 00:30:41.469 }, 00:30:41.469 { 00:30:41.469 "name": "BaseBdev3", 00:30:41.469 "uuid": "c67cb6e7-66fa-557d-9a11-00a5536cad0b", 00:30:41.469 "is_configured": true, 00:30:41.469 "data_offset": 2048, 00:30:41.469 "data_size": 63488 00:30:41.469 }, 00:30:41.469 { 00:30:41.469 "name": "BaseBdev4", 00:30:41.469 "uuid": "122e03dd-e7ec-55a6-a387-b142759c9a91", 00:30:41.469 "is_configured": true, 00:30:41.469 "data_offset": 2048, 00:30:41.469 "data_size": 63488 00:30:41.469 } 00:30:41.469 ] 00:30:41.469 }' 00:30:41.469 11:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:41.469 11:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:41.469 11:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:41.728 11:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:41.728 11:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:41.728 [2024-07-25 11:12:48.739455] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 
00:30:41.728 [2024-07-25 11:12:48.824307] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:41.986 [2024-07-25 11:12:48.880449] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:41.986 [2024-07-25 11:12:48.889284] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:41.986 [2024-07-25 11:12:48.901416] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:41.986 [2024-07-25 11:12:48.901455] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:41.986 [2024-07-25 11:12:48.901474] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:41.986 [2024-07-25 11:12:48.939487] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000010a50 00:30:41.986 11:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:41.986 11:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:41.986 11:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:41.986 11:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:41.986 11:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:41.986 11:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:41.986 11:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:41.986 11:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:41.986 11:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:41.986 11:12:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:41.986 11:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:41.986 11:12:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:42.245 11:12:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:42.245 "name": "raid_bdev1", 00:30:42.245 "uuid": "6ee14795-7f9a-4804-8b19-54aa920d7a67", 00:30:42.245 "strip_size_kb": 0, 00:30:42.245 "state": "online", 00:30:42.245 "raid_level": "raid1", 00:30:42.245 "superblock": true, 00:30:42.245 "num_base_bdevs": 4, 00:30:42.245 "num_base_bdevs_discovered": 3, 00:30:42.245 "num_base_bdevs_operational": 3, 00:30:42.245 "base_bdevs_list": [ 00:30:42.245 { 00:30:42.245 "name": null, 00:30:42.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:42.245 "is_configured": false, 00:30:42.245 "data_offset": 2048, 00:30:42.245 "data_size": 63488 00:30:42.245 }, 00:30:42.245 { 00:30:42.245 "name": "BaseBdev2", 00:30:42.245 "uuid": "72f98486-e53d-5648-855f-77b88918de01", 00:30:42.245 "is_configured": true, 00:30:42.245 "data_offset": 2048, 00:30:42.245 "data_size": 63488 00:30:42.245 }, 00:30:42.245 { 00:30:42.245 "name": "BaseBdev3", 00:30:42.245 "uuid": "c67cb6e7-66fa-557d-9a11-00a5536cad0b", 00:30:42.245 "is_configured": true, 00:30:42.245 "data_offset": 2048, 00:30:42.245 "data_size": 63488 00:30:42.245 }, 00:30:42.245 { 00:30:42.246 "name": "BaseBdev4", 00:30:42.246 "uuid": "122e03dd-e7ec-55a6-a387-b142759c9a91", 00:30:42.246 "is_configured": true, 00:30:42.246 "data_offset": 2048, 00:30:42.246 "data_size": 63488 00:30:42.246 } 00:30:42.246 ] 00:30:42.246 }' 00:30:42.246 11:12:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:42.246 11:12:49 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:30:42.813 11:12:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:42.813 11:12:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:42.813 11:12:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:42.813 11:12:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:42.813 11:12:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:42.813 11:12:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:42.813 11:12:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:43.071 11:12:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:43.071 "name": "raid_bdev1", 00:30:43.071 "uuid": "6ee14795-7f9a-4804-8b19-54aa920d7a67", 00:30:43.071 "strip_size_kb": 0, 00:30:43.071 "state": "online", 00:30:43.071 "raid_level": "raid1", 00:30:43.071 "superblock": true, 00:30:43.071 "num_base_bdevs": 4, 00:30:43.071 "num_base_bdevs_discovered": 3, 00:30:43.071 "num_base_bdevs_operational": 3, 00:30:43.071 "base_bdevs_list": [ 00:30:43.071 { 00:30:43.071 "name": null, 00:30:43.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:43.071 "is_configured": false, 00:30:43.071 "data_offset": 2048, 00:30:43.071 "data_size": 63488 00:30:43.071 }, 00:30:43.071 { 00:30:43.071 "name": "BaseBdev2", 00:30:43.071 "uuid": "72f98486-e53d-5648-855f-77b88918de01", 00:30:43.071 "is_configured": true, 00:30:43.071 "data_offset": 2048, 00:30:43.071 "data_size": 63488 00:30:43.071 }, 00:30:43.071 { 00:30:43.071 "name": "BaseBdev3", 00:30:43.071 "uuid": "c67cb6e7-66fa-557d-9a11-00a5536cad0b", 
00:30:43.071 "is_configured": true, 00:30:43.071 "data_offset": 2048, 00:30:43.071 "data_size": 63488 00:30:43.071 }, 00:30:43.071 { 00:30:43.071 "name": "BaseBdev4", 00:30:43.071 "uuid": "122e03dd-e7ec-55a6-a387-b142759c9a91", 00:30:43.071 "is_configured": true, 00:30:43.071 "data_offset": 2048, 00:30:43.071 "data_size": 63488 00:30:43.071 } 00:30:43.071 ] 00:30:43.071 }' 00:30:43.071 11:12:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:43.071 11:12:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:43.071 11:12:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:43.071 11:12:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:43.071 11:12:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:43.329 [2024-07-25 11:12:50.362852] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:43.329 11:12:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@678 -- # sleep 1 00:30:43.329 [2024-07-25 11:12:50.438049] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010bf0 00:30:43.329 [2024-07-25 11:12:50.440486] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:43.586 [2024-07-25 11:12:50.561439] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:43.586 [2024-07-25 11:12:50.562724] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:43.844 [2024-07-25 11:12:50.792319] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:30:43.844 [2024-07-25 11:12:50.792508] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:44.410 [2024-07-25 11:12:51.282159] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:44.410 11:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:44.410 11:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:44.410 11:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:44.410 11:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:44.410 11:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:44.410 11:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:44.410 11:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:44.668 11:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:44.668 "name": "raid_bdev1", 00:30:44.668 "uuid": "6ee14795-7f9a-4804-8b19-54aa920d7a67", 00:30:44.668 "strip_size_kb": 0, 00:30:44.668 "state": "online", 00:30:44.668 "raid_level": "raid1", 00:30:44.668 "superblock": true, 00:30:44.668 "num_base_bdevs": 4, 00:30:44.668 "num_base_bdevs_discovered": 4, 00:30:44.668 "num_base_bdevs_operational": 4, 00:30:44.668 "process": { 00:30:44.668 "type": "rebuild", 00:30:44.668 "target": "spare", 00:30:44.668 "progress": { 00:30:44.668 "blocks": 12288, 00:30:44.668 "percent": 19 00:30:44.668 } 00:30:44.668 }, 00:30:44.668 "base_bdevs_list": [ 00:30:44.668 { 00:30:44.668 "name": "spare", 00:30:44.668 "uuid": 
"14bde621-3025-55c2-9f67-2ab149c25ae2", 00:30:44.668 "is_configured": true, 00:30:44.668 "data_offset": 2048, 00:30:44.668 "data_size": 63488 00:30:44.668 }, 00:30:44.668 { 00:30:44.668 "name": "BaseBdev2", 00:30:44.668 "uuid": "72f98486-e53d-5648-855f-77b88918de01", 00:30:44.668 "is_configured": true, 00:30:44.668 "data_offset": 2048, 00:30:44.668 "data_size": 63488 00:30:44.668 }, 00:30:44.668 { 00:30:44.668 "name": "BaseBdev3", 00:30:44.668 "uuid": "c67cb6e7-66fa-557d-9a11-00a5536cad0b", 00:30:44.668 "is_configured": true, 00:30:44.668 "data_offset": 2048, 00:30:44.668 "data_size": 63488 00:30:44.668 }, 00:30:44.668 { 00:30:44.668 "name": "BaseBdev4", 00:30:44.668 "uuid": "122e03dd-e7ec-55a6-a387-b142759c9a91", 00:30:44.668 "is_configured": true, 00:30:44.668 "data_offset": 2048, 00:30:44.668 "data_size": 63488 00:30:44.668 } 00:30:44.668 ] 00:30:44.668 }' 00:30:44.668 11:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:44.668 11:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:44.668 11:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:44.668 11:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:44.668 11:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:30:44.668 11:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:30:44.668 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:30:44.668 11:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=4 00:30:44.668 11:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:30:44.668 11:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # '[' 4 -gt 2 
']' 00:30:44.668 11:12:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:30:44.668 [2024-07-25 11:12:51.770906] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:44.926 [2024-07-25 11:12:51.959957] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:45.189 [2024-07-25 11:12:52.095243] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000010a50 00:30:45.189 [2024-07-25 11:12:52.095281] bdev_raid.c:1961:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000010bf0 00:30:45.189 11:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@713 -- # base_bdevs[1]= 00:30:45.189 11:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # (( num_base_bdevs_operational-- )) 00:30:45.189 11:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@717 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:45.189 11:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:45.190 11:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:45.190 11:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:45.190 11:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:45.190 11:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:45.190 11:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:45.190 [2024-07-25 11:12:52.242561] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:30:45.190 [2024-07-25 11:12:52.243044] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:30:45.449 11:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:45.449 "name": "raid_bdev1", 00:30:45.449 "uuid": "6ee14795-7f9a-4804-8b19-54aa920d7a67", 00:30:45.449 "strip_size_kb": 0, 00:30:45.449 "state": "online", 00:30:45.449 "raid_level": "raid1", 00:30:45.449 "superblock": true, 00:30:45.449 "num_base_bdevs": 4, 00:30:45.449 "num_base_bdevs_discovered": 3, 00:30:45.449 "num_base_bdevs_operational": 3, 00:30:45.449 "process": { 00:30:45.449 "type": "rebuild", 00:30:45.449 "target": "spare", 00:30:45.449 "progress": { 00:30:45.449 "blocks": 20480, 00:30:45.449 "percent": 32 00:30:45.449 } 00:30:45.449 }, 00:30:45.449 "base_bdevs_list": [ 00:30:45.449 { 00:30:45.449 "name": "spare", 00:30:45.449 "uuid": "14bde621-3025-55c2-9f67-2ab149c25ae2", 00:30:45.449 "is_configured": true, 00:30:45.449 "data_offset": 2048, 00:30:45.449 "data_size": 63488 00:30:45.449 }, 00:30:45.449 { 00:30:45.449 "name": null, 00:30:45.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:45.449 "is_configured": false, 00:30:45.449 "data_offset": 2048, 00:30:45.449 "data_size": 63488 00:30:45.449 }, 00:30:45.449 { 00:30:45.449 "name": "BaseBdev3", 00:30:45.449 "uuid": "c67cb6e7-66fa-557d-9a11-00a5536cad0b", 00:30:45.449 "is_configured": true, 00:30:45.449 "data_offset": 2048, 00:30:45.449 "data_size": 63488 00:30:45.449 }, 00:30:45.449 { 00:30:45.449 "name": "BaseBdev4", 00:30:45.449 "uuid": "122e03dd-e7ec-55a6-a387-b142759c9a91", 00:30:45.449 "is_configured": true, 00:30:45.449 "data_offset": 2048, 00:30:45.449 "data_size": 63488 00:30:45.449 } 00:30:45.449 ] 00:30:45.449 }' 00:30:45.449 11:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:45.449 11:12:52 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:45.449 11:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:45.449 [2024-07-25 11:12:52.456167] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:30:45.449 11:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:45.449 11:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # local timeout=1056 00:30:45.449 11:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:30:45.449 11:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:45.449 11:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:45.449 11:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:45.449 11:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:45.449 11:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:45.449 11:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:45.449 11:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:45.709 11:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:45.709 "name": "raid_bdev1", 00:30:45.709 "uuid": "6ee14795-7f9a-4804-8b19-54aa920d7a67", 00:30:45.709 "strip_size_kb": 0, 00:30:45.709 "state": "online", 00:30:45.709 "raid_level": "raid1", 00:30:45.709 "superblock": true, 00:30:45.709 "num_base_bdevs": 4, 00:30:45.709 
"num_base_bdevs_discovered": 3, 00:30:45.709 "num_base_bdevs_operational": 3, 00:30:45.709 "process": { 00:30:45.709 "type": "rebuild", 00:30:45.709 "target": "spare", 00:30:45.709 "progress": { 00:30:45.709 "blocks": 24576, 00:30:45.709 "percent": 38 00:30:45.709 } 00:30:45.709 }, 00:30:45.709 "base_bdevs_list": [ 00:30:45.709 { 00:30:45.709 "name": "spare", 00:30:45.709 "uuid": "14bde621-3025-55c2-9f67-2ab149c25ae2", 00:30:45.709 "is_configured": true, 00:30:45.709 "data_offset": 2048, 00:30:45.709 "data_size": 63488 00:30:45.709 }, 00:30:45.709 { 00:30:45.709 "name": null, 00:30:45.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:45.709 "is_configured": false, 00:30:45.709 "data_offset": 2048, 00:30:45.709 "data_size": 63488 00:30:45.709 }, 00:30:45.709 { 00:30:45.709 "name": "BaseBdev3", 00:30:45.709 "uuid": "c67cb6e7-66fa-557d-9a11-00a5536cad0b", 00:30:45.709 "is_configured": true, 00:30:45.709 "data_offset": 2048, 00:30:45.709 "data_size": 63488 00:30:45.709 }, 00:30:45.709 { 00:30:45.709 "name": "BaseBdev4", 00:30:45.709 "uuid": "122e03dd-e7ec-55a6-a387-b142759c9a91", 00:30:45.709 "is_configured": true, 00:30:45.709 "data_offset": 2048, 00:30:45.709 "data_size": 63488 00:30:45.709 } 00:30:45.709 ] 00:30:45.709 }' 00:30:45.709 11:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:45.709 11:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:45.709 11:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:45.709 11:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:45.709 11:12:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:30:45.709 [2024-07-25 11:12:52.825254] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:30:45.968 [2024-07-25 
11:12:52.936604] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:30:46.229 [2024-07-25 11:12:53.170884] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:30:46.799 [2024-07-25 11:12:53.639183] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:30:46.799 11:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:30:46.799 11:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:46.799 11:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:46.799 11:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:46.799 11:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:46.799 11:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:46.799 11:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:46.799 11:12:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:46.799 [2024-07-25 11:12:53.842993] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:30:47.058 11:12:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:47.058 "name": "raid_bdev1", 00:30:47.058 "uuid": "6ee14795-7f9a-4804-8b19-54aa920d7a67", 00:30:47.058 "strip_size_kb": 0, 00:30:47.058 "state": "online", 00:30:47.058 "raid_level": "raid1", 00:30:47.058 "superblock": true, 00:30:47.058 
"num_base_bdevs": 4, 00:30:47.058 "num_base_bdevs_discovered": 3, 00:30:47.058 "num_base_bdevs_operational": 3, 00:30:47.058 "process": { 00:30:47.058 "type": "rebuild", 00:30:47.058 "target": "spare", 00:30:47.058 "progress": { 00:30:47.058 "blocks": 43008, 00:30:47.058 "percent": 67 00:30:47.058 } 00:30:47.058 }, 00:30:47.058 "base_bdevs_list": [ 00:30:47.058 { 00:30:47.058 "name": "spare", 00:30:47.058 "uuid": "14bde621-3025-55c2-9f67-2ab149c25ae2", 00:30:47.058 "is_configured": true, 00:30:47.058 "data_offset": 2048, 00:30:47.058 "data_size": 63488 00:30:47.058 }, 00:30:47.058 { 00:30:47.058 "name": null, 00:30:47.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:47.058 "is_configured": false, 00:30:47.058 "data_offset": 2048, 00:30:47.058 "data_size": 63488 00:30:47.059 }, 00:30:47.059 { 00:30:47.059 "name": "BaseBdev3", 00:30:47.059 "uuid": "c67cb6e7-66fa-557d-9a11-00a5536cad0b", 00:30:47.059 "is_configured": true, 00:30:47.059 "data_offset": 2048, 00:30:47.059 "data_size": 63488 00:30:47.059 }, 00:30:47.059 { 00:30:47.059 "name": "BaseBdev4", 00:30:47.059 "uuid": "122e03dd-e7ec-55a6-a387-b142759c9a91", 00:30:47.059 "is_configured": true, 00:30:47.059 "data_offset": 2048, 00:30:47.059 "data_size": 63488 00:30:47.059 } 00:30:47.059 ] 00:30:47.059 }' 00:30:47.059 11:12:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:47.059 11:12:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:47.059 11:12:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:47.059 11:12:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:47.059 11:12:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # sleep 1 00:30:47.317 [2024-07-25 11:12:54.404678] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 
offset_end: 55296 00:30:47.576 [2024-07-25 11:12:54.524802] bdev_raid.c: 852:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:30:48.143 [2024-07-25 11:12:55.094726] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:30:48.143 11:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:30:48.143 11:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:48.143 11:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:48.143 11:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:48.143 11:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:48.143 11:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:48.143 11:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:48.143 11:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:48.143 [2024-07-25 11:12:55.194964] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:30:48.143 [2024-07-25 11:12:55.197440] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:48.403 11:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:48.403 "name": "raid_bdev1", 00:30:48.403 "uuid": "6ee14795-7f9a-4804-8b19-54aa920d7a67", 00:30:48.403 "strip_size_kb": 0, 00:30:48.403 "state": "online", 00:30:48.403 "raid_level": "raid1", 00:30:48.403 "superblock": true, 00:30:48.403 "num_base_bdevs": 4, 00:30:48.403 "num_base_bdevs_discovered": 3, 
00:30:48.403 "num_base_bdevs_operational": 3, 00:30:48.403 "base_bdevs_list": [ 00:30:48.403 { 00:30:48.403 "name": "spare", 00:30:48.403 "uuid": "14bde621-3025-55c2-9f67-2ab149c25ae2", 00:30:48.403 "is_configured": true, 00:30:48.403 "data_offset": 2048, 00:30:48.403 "data_size": 63488 00:30:48.403 }, 00:30:48.403 { 00:30:48.403 "name": null, 00:30:48.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:48.403 "is_configured": false, 00:30:48.403 "data_offset": 2048, 00:30:48.403 "data_size": 63488 00:30:48.403 }, 00:30:48.403 { 00:30:48.403 "name": "BaseBdev3", 00:30:48.403 "uuid": "c67cb6e7-66fa-557d-9a11-00a5536cad0b", 00:30:48.403 "is_configured": true, 00:30:48.403 "data_offset": 2048, 00:30:48.403 "data_size": 63488 00:30:48.403 }, 00:30:48.403 { 00:30:48.403 "name": "BaseBdev4", 00:30:48.403 "uuid": "122e03dd-e7ec-55a6-a387-b142759c9a91", 00:30:48.403 "is_configured": true, 00:30:48.403 "data_offset": 2048, 00:30:48.403 "data_size": 63488 00:30:48.403 } 00:30:48.403 ] 00:30:48.403 }' 00:30:48.403 11:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:48.403 11:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:30:48.403 11:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:48.403 11:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:30:48.403 11:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # break 00:30:48.403 11:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:48.403 11:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:48.403 11:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:48.403 11:12:55 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@184 -- # local target=none 00:30:48.403 11:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:48.403 11:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:48.403 11:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:48.662 11:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:48.662 "name": "raid_bdev1", 00:30:48.662 "uuid": "6ee14795-7f9a-4804-8b19-54aa920d7a67", 00:30:48.662 "strip_size_kb": 0, 00:30:48.662 "state": "online", 00:30:48.662 "raid_level": "raid1", 00:30:48.662 "superblock": true, 00:30:48.662 "num_base_bdevs": 4, 00:30:48.662 "num_base_bdevs_discovered": 3, 00:30:48.662 "num_base_bdevs_operational": 3, 00:30:48.662 "base_bdevs_list": [ 00:30:48.662 { 00:30:48.662 "name": "spare", 00:30:48.662 "uuid": "14bde621-3025-55c2-9f67-2ab149c25ae2", 00:30:48.662 "is_configured": true, 00:30:48.662 "data_offset": 2048, 00:30:48.662 "data_size": 63488 00:30:48.662 }, 00:30:48.662 { 00:30:48.662 "name": null, 00:30:48.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:48.662 "is_configured": false, 00:30:48.662 "data_offset": 2048, 00:30:48.662 "data_size": 63488 00:30:48.662 }, 00:30:48.662 { 00:30:48.662 "name": "BaseBdev3", 00:30:48.662 "uuid": "c67cb6e7-66fa-557d-9a11-00a5536cad0b", 00:30:48.662 "is_configured": true, 00:30:48.662 "data_offset": 2048, 00:30:48.662 "data_size": 63488 00:30:48.662 }, 00:30:48.662 { 00:30:48.662 "name": "BaseBdev4", 00:30:48.662 "uuid": "122e03dd-e7ec-55a6-a387-b142759c9a91", 00:30:48.662 "is_configured": true, 00:30:48.662 "data_offset": 2048, 00:30:48.662 "data_size": 63488 00:30:48.662 } 00:30:48.662 ] 00:30:48.662 }' 00:30:48.662 11:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r 
'.process.type // "none"' 00:30:48.662 11:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:48.662 11:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:48.662 11:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:48.662 11:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:48.662 11:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:48.662 11:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:48.662 11:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:48.662 11:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:48.662 11:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:48.662 11:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:48.662 11:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:48.662 11:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:48.662 11:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:48.663 11:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:48.663 11:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:48.921 11:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:48.921 "name": "raid_bdev1", 00:30:48.921 "uuid": 
"6ee14795-7f9a-4804-8b19-54aa920d7a67", 00:30:48.921 "strip_size_kb": 0, 00:30:48.921 "state": "online", 00:30:48.921 "raid_level": "raid1", 00:30:48.922 "superblock": true, 00:30:48.922 "num_base_bdevs": 4, 00:30:48.922 "num_base_bdevs_discovered": 3, 00:30:48.922 "num_base_bdevs_operational": 3, 00:30:48.922 "base_bdevs_list": [ 00:30:48.922 { 00:30:48.922 "name": "spare", 00:30:48.922 "uuid": "14bde621-3025-55c2-9f67-2ab149c25ae2", 00:30:48.922 "is_configured": true, 00:30:48.922 "data_offset": 2048, 00:30:48.922 "data_size": 63488 00:30:48.922 }, 00:30:48.922 { 00:30:48.922 "name": null, 00:30:48.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:48.922 "is_configured": false, 00:30:48.922 "data_offset": 2048, 00:30:48.922 "data_size": 63488 00:30:48.922 }, 00:30:48.922 { 00:30:48.922 "name": "BaseBdev3", 00:30:48.922 "uuid": "c67cb6e7-66fa-557d-9a11-00a5536cad0b", 00:30:48.922 "is_configured": true, 00:30:48.922 "data_offset": 2048, 00:30:48.922 "data_size": 63488 00:30:48.922 }, 00:30:48.922 { 00:30:48.922 "name": "BaseBdev4", 00:30:48.922 "uuid": "122e03dd-e7ec-55a6-a387-b142759c9a91", 00:30:48.922 "is_configured": true, 00:30:48.922 "data_offset": 2048, 00:30:48.922 "data_size": 63488 00:30:48.922 } 00:30:48.922 ] 00:30:48.922 }' 00:30:48.922 11:12:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:48.922 11:12:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:49.490 11:12:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:49.748 [2024-07-25 11:12:56.777848] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:49.748 [2024-07-25 11:12:56.777896] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:50.007 00:30:50.007 Latency(us) 00:30:50.007 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:50.007 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:30:50.007 raid_bdev1 : 10.71 92.99 278.98 0.00 0.00 14542.18 335.87 122473.68 00:30:50.007 =================================================================================================================== 00:30:50.007 Total : 92.99 278.98 0.00 0.00 14542.18 335.87 122473.68 00:30:50.007 [2024-07-25 11:12:56.899636] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:50.007 [2024-07-25 11:12:56.899696] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:50.007 [2024-07-25 11:12:56.899812] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:50.007 [2024-07-25 11:12:56.899837] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:30:50.007 0 00:30:50.008 11:12:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:50.008 11:12:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # jq length 00:30:50.267 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:30:50.267 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:30:50.267 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@738 -- # '[' true = true ']' 00:30:50.267 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@740 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:30:50.267 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:50.267 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:30:50.267 11:12:57 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:50.267 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:30:50.267 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:50.267 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:30:50.267 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:50.267 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:50.267 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:30:50.526 /dev/nbd0 00:30:50.785 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:50.785 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:50.785 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:30:50.785 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:30:50.785 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:50.785 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:50.785 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:30:50.785 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:30:50.785 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:30:50.785 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:50.785 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:50.785 1+0 records in 00:30:50.785 1+0 records out 00:30:50.785 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293497 s, 14.0 MB/s 00:30:50.785 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:50.785 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:30:50.785 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:50.785 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:50.785 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:30:50.785 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:50.785 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:50.785 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:30:50.785 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' -z '' ']' 00:30:50.785 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # continue 00:30:50.785 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:30:50.785 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev3 ']' 00:30:50.785 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:30:50.785 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:50.785 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev3') 00:30:50.785 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:50.785 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:30:50.785 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:50.785 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:30:50.785 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:50.785 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:50.785 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:30:51.046 /dev/nbd1 00:30:51.046 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:51.046 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:51.046 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:30:51.046 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:30:51.046 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:51.046 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:51.046 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:30:51.046 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:30:51.046 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:30:51.046 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:51.046 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:51.046 1+0 records in 00:30:51.046 1+0 records out 00:30:51.046 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280562 s, 14.6 MB/s 00:30:51.046 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:51.046 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:30:51.046 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:51.046 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:51.046 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:30:51.046 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:51.046 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:51.046 11:12:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:30:51.046 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:30:51.046 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:51.046 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:30:51.046 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:51.046 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:30:51.046 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:51.046 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:30:51.332 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:51.332 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:51.332 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:51.332 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:51.332 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:51.332 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:51.332 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:30:51.332 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:51.332 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@741 -- # for bdev in "${base_bdevs[@]:1}" 00:30:51.332 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' -z BaseBdev4 ']' 00:30:51.332 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:30:51.332 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:51.332 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:30:51.332 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:51.332 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:30:51.332 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:51.332 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:30:51.332 11:12:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:51.332 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:51.332 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:30:51.602 /dev/nbd1 00:30:51.602 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:51.602 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:51.602 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:30:51.602 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:30:51.602 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:30:51.602 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:30:51.602 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:30:51.602 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:30:51.602 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:30:51.602 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:30:51.602 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:51.602 1+0 records in 00:30:51.602 1+0 records out 00:30:51.602 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301044 s, 13.6 MB/s 00:30:51.602 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 
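The `waitfornbd` helper traced above polls `/proc/partitions` until the nbd device appears, then reads a single 4 KiB block (`dd ... bs=4096 count=1 iflag=direct`) and checks its size with `stat -c %s` to confirm the device answers I/O. A minimal sketch of that pattern, with a temp file standing in for `/proc/partitions` and a regular file for `/dev/nbd1` (both paths here are simulated, not the real helper):

```shell
# Sketch of the waitfornbd pattern from the trace: poll for the device
# name, then read one block to confirm it is readable. Temp files
# simulate /proc/partitions and the nbd device (hypothetical stand-ins).
partitions=$(mktemp)
fake_nbd=$(mktemp)
echo "nbd1" > "$partitions"
dd if=/dev/zero of="$fake_nbd" bs=4096 count=1 2>/dev/null

waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" "$partitions" && break
        sleep 0.1
    done
    ((i <= 20)) || return 1
    # read a single 4 KiB block, as the real helper does with iflag=direct
    out=$(mktemp)
    dd if="$fake_nbd" of="$out" bs=4096 count=1 2>/dev/null
    size=$(stat -c %s "$out")
    rm -f "$out"
    [ "$size" != 0 ]
}

waitfornbd nbd1 && status=ok || status=fail
rm -f "$partitions" "$fake_nbd"
echo "$status"
```

The companion `waitfornbd_exit` helper in the trace is the inverse loop: it `break`s once `grep -q -w` no longer finds the name in `/proc/partitions`.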
00:30:51.602 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:30:51.602 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:30:51.602 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:30:51.602 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:30:51.602 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:51.602 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:51.602 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:30:51.861 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:30:51.861 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:51.861 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:30:51.861 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:51.861 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:30:51.861 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:51.861 11:12:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:30:52.120 11:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:52.120 11:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:52.120 11:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
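The `cmp -i 1048576 /dev/nbd0 /dev/nbd1` step above compares the two exported devices while skipping the first 1 MiB of each (which matches the `data_offset` of 2048 blocks × 512 B blocklen reported later in the trace, i.e. the superblock region). A sketch of that offset comparison using regular files in place of the nbd devices, with a 16-byte header instead of 1 MiB:

```shell
# Sketch of the `cmp -i` offset comparison: skip a fixed-size header
# region in both inputs and require the payloads beyond it to match.
# Regular files stand in for /dev/nbd0 and /dev/nbd1 (hypothetical),
# with a 16-byte header instead of the 1 MiB offset used in the test.
a=$(mktemp); b=$(mktemp)
printf 'header-A........payload' > "$a"   # 16-byte header + payload
printf 'header-B........payload' > "$b"   # headers differ, payloads match
if cmp -s -i 16 "$a" "$b"; then           # -i N skips N bytes of BOTH files
    result=match
else
    result=differ
fi
rm -f "$a" "$b"
echo "$result"
```

`cmp` exits non-zero on the first differing byte after the skipped region, which is what makes the rebuilt-device check in the trace fail fast if the raid rebuild produced wrong data.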
00:30:52.120 11:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:52.120 11:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:52.120 11:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:52.120 11:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:30:52.120 11:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:52.120 11:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:30:52.120 11:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:52.120 11:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:30:52.120 11:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:52.120 11:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:30:52.120 11:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:52.120 11:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:30:52.379 11:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:52.379 11:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:52.379 11:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:52.379 11:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:52.379 11:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:52.379 11:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:30:52.379 11:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:30:52.379 11:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:52.379 11:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:30:52.379 11:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:52.946 11:12:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:52.946 [2024-07-25 11:13:00.041262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:52.947 [2024-07-25 11:13:00.041338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:52.947 [2024-07-25 11:13:00.041367] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000044a80 00:30:52.947 [2024-07-25 11:13:00.041386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:52.947 [2024-07-25 11:13:00.044289] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:52.947 [2024-07-25 11:13:00.044330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:52.947 [2024-07-25 11:13:00.044455] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:52.947 [2024-07-25 11:13:00.044533] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:52.947 [2024-07-25 11:13:00.044757] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:52.947 [2024-07-25 11:13:00.044871] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:52.947 spare 00:30:52.947 
11:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:52.947 11:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:52.947 11:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:52.947 11:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:52.947 11:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:52.947 11:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:52.947 11:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:52.947 11:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:52.947 11:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:52.947 11:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:53.205 11:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:53.205 11:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:53.205 [2024-07-25 11:13:00.145210] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:30:53.205 [2024-07-25 11:13:00.145250] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:53.205 [2024-07-25 11:13:00.145622] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000041990 00:30:53.205 [2024-07-25 11:13:00.145910] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:30:53.205 [2024-07-25 11:13:00.145929] 
bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:30:53.205 [2024-07-25 11:13:00.146156] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:53.205 11:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:53.205 "name": "raid_bdev1", 00:30:53.205 "uuid": "6ee14795-7f9a-4804-8b19-54aa920d7a67", 00:30:53.205 "strip_size_kb": 0, 00:30:53.205 "state": "online", 00:30:53.205 "raid_level": "raid1", 00:30:53.205 "superblock": true, 00:30:53.205 "num_base_bdevs": 4, 00:30:53.205 "num_base_bdevs_discovered": 3, 00:30:53.205 "num_base_bdevs_operational": 3, 00:30:53.205 "base_bdevs_list": [ 00:30:53.205 { 00:30:53.205 "name": "spare", 00:30:53.205 "uuid": "14bde621-3025-55c2-9f67-2ab149c25ae2", 00:30:53.205 "is_configured": true, 00:30:53.205 "data_offset": 2048, 00:30:53.205 "data_size": 63488 00:30:53.205 }, 00:30:53.205 { 00:30:53.205 "name": null, 00:30:53.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:53.205 "is_configured": false, 00:30:53.205 "data_offset": 2048, 00:30:53.205 "data_size": 63488 00:30:53.205 }, 00:30:53.206 { 00:30:53.206 "name": "BaseBdev3", 00:30:53.206 "uuid": "c67cb6e7-66fa-557d-9a11-00a5536cad0b", 00:30:53.206 "is_configured": true, 00:30:53.206 "data_offset": 2048, 00:30:53.206 "data_size": 63488 00:30:53.206 }, 00:30:53.206 { 00:30:53.206 "name": "BaseBdev4", 00:30:53.206 "uuid": "122e03dd-e7ec-55a6-a387-b142759c9a91", 00:30:53.206 "is_configured": true, 00:30:53.206 "data_offset": 2048, 00:30:53.206 "data_size": 63488 00:30:53.206 } 00:30:53.206 ] 00:30:53.206 }' 00:30:53.206 11:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:53.206 11:13:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:53.773 11:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 
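The `verify_raid_bdev_state` and `verify_raid_bdev_process` checks traced above both start from the same two-step `jq` pattern: pick one record out of the `bdev_raid_get_bdevs all` array by name, then read individual fields with a `// "none"` fallback for bdevs that have no background process. A sketch of those filters against a trimmed version of the JSON shown in the log (assumes `jq` is installed):

```shell
# Sketch of the jq filters used by verify_raid_bdev_state /
# verify_raid_bdev_process (the JSON is a trimmed, hypothetical version
# of the bdev_raid_get_bdevs output dumped in the trace above).
json='[
  {"name": "raid_bdev1", "state": "online", "raid_level": "raid1",
   "num_base_bdevs_discovered": 3,
   "process": {"type": "rebuild", "target": "spare"}},
  {"name": "other_bdev", "state": "offline"}
]'
# pick out the record for raid_bdev1, as bdev_raid.sh@126 does
raid_bdev_info=$(echo "$json" | jq -r '.[] | select(.name == "raid_bdev1")')
# read the background-process fields, defaulting to "none" when absent
ptype=$(echo "$raid_bdev_info" | jq -r '.process.type // "none"')
ptarget=$(echo "$raid_bdev_info" | jq -r '.process.target // "none"')
echo "$ptype $ptarget"
```

The `//` alternative operator is what lets the same check cover both phases seen in the trace: it yields `none` when the raid bdev is idle (no `process` object in the record) and `rebuild`/`spare` while the rebuild is running.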
00:30:53.773 11:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:53.773 11:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:53.773 11:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:53.773 11:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:53.773 11:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:53.773 11:13:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:54.032 11:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:54.032 "name": "raid_bdev1", 00:30:54.032 "uuid": "6ee14795-7f9a-4804-8b19-54aa920d7a67", 00:30:54.032 "strip_size_kb": 0, 00:30:54.032 "state": "online", 00:30:54.032 "raid_level": "raid1", 00:30:54.032 "superblock": true, 00:30:54.032 "num_base_bdevs": 4, 00:30:54.032 "num_base_bdevs_discovered": 3, 00:30:54.032 "num_base_bdevs_operational": 3, 00:30:54.032 "base_bdevs_list": [ 00:30:54.032 { 00:30:54.032 "name": "spare", 00:30:54.032 "uuid": "14bde621-3025-55c2-9f67-2ab149c25ae2", 00:30:54.032 "is_configured": true, 00:30:54.032 "data_offset": 2048, 00:30:54.032 "data_size": 63488 00:30:54.032 }, 00:30:54.032 { 00:30:54.032 "name": null, 00:30:54.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:54.032 "is_configured": false, 00:30:54.032 "data_offset": 2048, 00:30:54.032 "data_size": 63488 00:30:54.032 }, 00:30:54.032 { 00:30:54.032 "name": "BaseBdev3", 00:30:54.032 "uuid": "c67cb6e7-66fa-557d-9a11-00a5536cad0b", 00:30:54.032 "is_configured": true, 00:30:54.032 "data_offset": 2048, 00:30:54.032 "data_size": 63488 00:30:54.032 }, 00:30:54.032 { 00:30:54.032 "name": "BaseBdev4", 00:30:54.032 
"uuid": "122e03dd-e7ec-55a6-a387-b142759c9a91", 00:30:54.032 "is_configured": true, 00:30:54.032 "data_offset": 2048, 00:30:54.032 "data_size": 63488 00:30:54.032 } 00:30:54.032 ] 00:30:54.032 }' 00:30:54.032 11:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:54.291 11:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:54.291 11:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:54.291 11:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:54.291 11:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:54.291 11:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:30:54.550 11:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:30:54.550 11:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:54.550 [2024-07-25 11:13:01.634444] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:54.550 11:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:54.550 11:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:54.550 11:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:54.550 11:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:54.550 11:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:54.550 
11:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:54.550 11:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:54.550 11:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:54.550 11:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:54.550 11:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:54.550 11:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:54.550 11:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:54.809 11:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:54.809 "name": "raid_bdev1", 00:30:54.809 "uuid": "6ee14795-7f9a-4804-8b19-54aa920d7a67", 00:30:54.809 "strip_size_kb": 0, 00:30:54.809 "state": "online", 00:30:54.809 "raid_level": "raid1", 00:30:54.809 "superblock": true, 00:30:54.809 "num_base_bdevs": 4, 00:30:54.809 "num_base_bdevs_discovered": 2, 00:30:54.809 "num_base_bdevs_operational": 2, 00:30:54.809 "base_bdevs_list": [ 00:30:54.809 { 00:30:54.809 "name": null, 00:30:54.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:54.809 "is_configured": false, 00:30:54.809 "data_offset": 2048, 00:30:54.809 "data_size": 63488 00:30:54.809 }, 00:30:54.809 { 00:30:54.809 "name": null, 00:30:54.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:54.809 "is_configured": false, 00:30:54.809 "data_offset": 2048, 00:30:54.809 "data_size": 63488 00:30:54.809 }, 00:30:54.809 { 00:30:54.809 "name": "BaseBdev3", 00:30:54.809 "uuid": "c67cb6e7-66fa-557d-9a11-00a5536cad0b", 00:30:54.809 "is_configured": true, 00:30:54.809 "data_offset": 2048, 00:30:54.809 
"data_size": 63488 00:30:54.809 }, 00:30:54.809 { 00:30:54.809 "name": "BaseBdev4", 00:30:54.809 "uuid": "122e03dd-e7ec-55a6-a387-b142759c9a91", 00:30:54.809 "is_configured": true, 00:30:54.809 "data_offset": 2048, 00:30:54.809 "data_size": 63488 00:30:54.809 } 00:30:54.809 ] 00:30:54.809 }' 00:30:54.809 11:13:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:54.809 11:13:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:55.377 11:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:55.947 [2024-07-25 11:13:02.906112] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:55.947 [2024-07-25 11:13:02.906371] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:30:55.947 [2024-07-25 11:13:02.906402] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:30:55.947 [2024-07-25 11:13:02.906445] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:55.947 [2024-07-25 11:13:02.927917] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000041a60 00:30:55.947 [2024-07-25 11:13:02.930278] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:55.947 11:13:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # sleep 1 00:30:56.884 11:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:56.884 11:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:56.884 11:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:56.884 11:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:56.884 11:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:56.884 11:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:56.884 11:13:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:57.143 11:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:57.143 "name": "raid_bdev1", 00:30:57.143 "uuid": "6ee14795-7f9a-4804-8b19-54aa920d7a67", 00:30:57.143 "strip_size_kb": 0, 00:30:57.143 "state": "online", 00:30:57.143 "raid_level": "raid1", 00:30:57.143 "superblock": true, 00:30:57.143 "num_base_bdevs": 4, 00:30:57.143 "num_base_bdevs_discovered": 3, 00:30:57.143 "num_base_bdevs_operational": 3, 00:30:57.143 "process": { 00:30:57.143 "type": "rebuild", 00:30:57.143 "target": "spare", 00:30:57.143 "progress": { 00:30:57.143 "blocks": 24576, 
00:30:57.143 "percent": 38 00:30:57.143 } 00:30:57.143 }, 00:30:57.143 "base_bdevs_list": [ 00:30:57.143 { 00:30:57.143 "name": "spare", 00:30:57.143 "uuid": "14bde621-3025-55c2-9f67-2ab149c25ae2", 00:30:57.143 "is_configured": true, 00:30:57.143 "data_offset": 2048, 00:30:57.143 "data_size": 63488 00:30:57.143 }, 00:30:57.143 { 00:30:57.143 "name": null, 00:30:57.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:57.143 "is_configured": false, 00:30:57.143 "data_offset": 2048, 00:30:57.143 "data_size": 63488 00:30:57.143 }, 00:30:57.143 { 00:30:57.143 "name": "BaseBdev3", 00:30:57.143 "uuid": "c67cb6e7-66fa-557d-9a11-00a5536cad0b", 00:30:57.143 "is_configured": true, 00:30:57.143 "data_offset": 2048, 00:30:57.143 "data_size": 63488 00:30:57.143 }, 00:30:57.143 { 00:30:57.143 "name": "BaseBdev4", 00:30:57.143 "uuid": "122e03dd-e7ec-55a6-a387-b142759c9a91", 00:30:57.143 "is_configured": true, 00:30:57.143 "data_offset": 2048, 00:30:57.143 "data_size": 63488 00:30:57.143 } 00:30:57.143 ] 00:30:57.143 }' 00:30:57.143 11:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:57.143 11:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:57.143 11:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:57.403 11:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:57.403 11:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:57.668 [2024-07-25 11:13:04.757688] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:57.930 [2024-07-25 11:13:04.845865] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:57.930 [2024-07-25 11:13:04.845934] 
bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:57.930 [2024-07-25 11:13:04.845957] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:57.930 [2024-07-25 11:13:04.845971] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:57.930 11:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:57.930 11:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:57.930 11:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:57.930 11:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:57.930 11:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:57.930 11:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:57.930 11:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:57.930 11:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:57.930 11:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:57.930 11:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:57.930 11:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:57.930 11:13:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:58.189 11:13:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:58.189 "name": "raid_bdev1", 00:30:58.189 "uuid": "6ee14795-7f9a-4804-8b19-54aa920d7a67", 
00:30:58.189 "strip_size_kb": 0, 00:30:58.189 "state": "online", 00:30:58.189 "raid_level": "raid1", 00:30:58.189 "superblock": true, 00:30:58.189 "num_base_bdevs": 4, 00:30:58.189 "num_base_bdevs_discovered": 2, 00:30:58.189 "num_base_bdevs_operational": 2, 00:30:58.189 "base_bdevs_list": [ 00:30:58.189 { 00:30:58.189 "name": null, 00:30:58.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:58.189 "is_configured": false, 00:30:58.189 "data_offset": 2048, 00:30:58.189 "data_size": 63488 00:30:58.189 }, 00:30:58.189 { 00:30:58.189 "name": null, 00:30:58.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:58.189 "is_configured": false, 00:30:58.189 "data_offset": 2048, 00:30:58.189 "data_size": 63488 00:30:58.189 }, 00:30:58.189 { 00:30:58.189 "name": "BaseBdev3", 00:30:58.189 "uuid": "c67cb6e7-66fa-557d-9a11-00a5536cad0b", 00:30:58.189 "is_configured": true, 00:30:58.189 "data_offset": 2048, 00:30:58.189 "data_size": 63488 00:30:58.189 }, 00:30:58.189 { 00:30:58.189 "name": "BaseBdev4", 00:30:58.189 "uuid": "122e03dd-e7ec-55a6-a387-b142759c9a91", 00:30:58.189 "is_configured": true, 00:30:58.189 "data_offset": 2048, 00:30:58.189 "data_size": 63488 00:30:58.189 } 00:30:58.189 ] 00:30:58.189 }' 00:30:58.189 11:13:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:58.189 11:13:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:58.758 11:13:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:59.018 [2024-07-25 11:13:05.897582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:59.018 [2024-07-25 11:13:05.897661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:59.018 [2024-07-25 11:13:05.897693] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000045380 00:30:59.018 [2024-07-25 11:13:05.897712] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:59.018 [2024-07-25 11:13:05.898361] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:59.018 [2024-07-25 11:13:05.898394] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:59.018 [2024-07-25 11:13:05.898521] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:59.018 [2024-07-25 11:13:05.898543] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:30:59.018 [2024-07-25 11:13:05.898560] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:30:59.018 [2024-07-25 11:13:05.898591] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:59.018 [2024-07-25 11:13:05.920410] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000041b30 00:30:59.018 spare 00:30:59.018 [2024-07-25 11:13:05.922766] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:59.018 11:13:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # sleep 1 00:30:59.956 11:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:59.956 11:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:59.956 11:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:59.956 11:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:59.956 11:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:59.956 11:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:59.956 11:13:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:00.215 11:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:00.215 "name": "raid_bdev1", 00:31:00.215 "uuid": "6ee14795-7f9a-4804-8b19-54aa920d7a67", 00:31:00.215 "strip_size_kb": 0, 00:31:00.215 "state": "online", 00:31:00.215 "raid_level": "raid1", 00:31:00.215 "superblock": true, 00:31:00.215 "num_base_bdevs": 4, 00:31:00.215 "num_base_bdevs_discovered": 3, 00:31:00.215 "num_base_bdevs_operational": 3, 00:31:00.215 "process": { 00:31:00.215 "type": "rebuild", 00:31:00.215 "target": "spare", 00:31:00.215 "progress": { 00:31:00.215 "blocks": 22528, 00:31:00.215 "percent": 35 00:31:00.215 } 00:31:00.215 }, 00:31:00.215 "base_bdevs_list": [ 00:31:00.215 { 00:31:00.215 "name": "spare", 00:31:00.215 "uuid": "14bde621-3025-55c2-9f67-2ab149c25ae2", 00:31:00.215 "is_configured": true, 00:31:00.215 "data_offset": 2048, 00:31:00.215 "data_size": 63488 00:31:00.215 }, 00:31:00.215 { 00:31:00.215 "name": null, 00:31:00.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:00.215 "is_configured": false, 00:31:00.215 "data_offset": 2048, 00:31:00.215 "data_size": 63488 00:31:00.215 }, 00:31:00.215 { 00:31:00.215 "name": "BaseBdev3", 00:31:00.215 "uuid": "c67cb6e7-66fa-557d-9a11-00a5536cad0b", 00:31:00.215 "is_configured": true, 00:31:00.215 "data_offset": 2048, 00:31:00.216 "data_size": 63488 00:31:00.216 }, 00:31:00.216 { 00:31:00.216 "name": "BaseBdev4", 00:31:00.216 "uuid": "122e03dd-e7ec-55a6-a387-b142759c9a91", 00:31:00.216 "is_configured": true, 00:31:00.216 "data_offset": 2048, 00:31:00.216 "data_size": 63488 00:31:00.216 } 00:31:00.216 ] 00:31:00.216 }' 00:31:00.216 11:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 
00:31:00.216 11:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:00.216 11:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:00.216 11:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:00.216 11:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:00.475 [2024-07-25 11:13:07.404255] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:00.475 [2024-07-25 11:13:07.435108] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:00.475 [2024-07-25 11:13:07.435182] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:00.475 [2024-07-25 11:13:07.435208] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:00.475 [2024-07-25 11:13:07.435220] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:00.475 11:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:00.475 11:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:00.475 11:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:00.475 11:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:00.475 11:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:00.475 11:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:00.475 11:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:31:00.475 11:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:00.475 11:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:00.475 11:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:00.475 11:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:00.475 11:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:00.734 11:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:00.734 "name": "raid_bdev1", 00:31:00.734 "uuid": "6ee14795-7f9a-4804-8b19-54aa920d7a67", 00:31:00.734 "strip_size_kb": 0, 00:31:00.734 "state": "online", 00:31:00.734 "raid_level": "raid1", 00:31:00.734 "superblock": true, 00:31:00.734 "num_base_bdevs": 4, 00:31:00.734 "num_base_bdevs_discovered": 2, 00:31:00.734 "num_base_bdevs_operational": 2, 00:31:00.734 "base_bdevs_list": [ 00:31:00.734 { 00:31:00.734 "name": null, 00:31:00.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:00.734 "is_configured": false, 00:31:00.734 "data_offset": 2048, 00:31:00.734 "data_size": 63488 00:31:00.734 }, 00:31:00.734 { 00:31:00.734 "name": null, 00:31:00.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:00.734 "is_configured": false, 00:31:00.734 "data_offset": 2048, 00:31:00.734 "data_size": 63488 00:31:00.734 }, 00:31:00.734 { 00:31:00.734 "name": "BaseBdev3", 00:31:00.734 "uuid": "c67cb6e7-66fa-557d-9a11-00a5536cad0b", 00:31:00.734 "is_configured": true, 00:31:00.734 "data_offset": 2048, 00:31:00.734 "data_size": 63488 00:31:00.734 }, 00:31:00.734 { 00:31:00.734 "name": "BaseBdev4", 00:31:00.734 "uuid": "122e03dd-e7ec-55a6-a387-b142759c9a91", 00:31:00.734 "is_configured": true, 00:31:00.734 "data_offset": 2048, 
00:31:00.734 "data_size": 63488 00:31:00.734 } 00:31:00.734 ] 00:31:00.734 }' 00:31:00.734 11:13:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:00.734 11:13:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:01.302 11:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:01.302 11:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:01.302 11:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:01.302 11:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:01.302 11:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:01.302 11:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:01.302 11:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:01.562 11:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:01.562 "name": "raid_bdev1", 00:31:01.562 "uuid": "6ee14795-7f9a-4804-8b19-54aa920d7a67", 00:31:01.562 "strip_size_kb": 0, 00:31:01.562 "state": "online", 00:31:01.562 "raid_level": "raid1", 00:31:01.562 "superblock": true, 00:31:01.562 "num_base_bdevs": 4, 00:31:01.562 "num_base_bdevs_discovered": 2, 00:31:01.562 "num_base_bdevs_operational": 2, 00:31:01.562 "base_bdevs_list": [ 00:31:01.562 { 00:31:01.562 "name": null, 00:31:01.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:01.562 "is_configured": false, 00:31:01.562 "data_offset": 2048, 00:31:01.562 "data_size": 63488 00:31:01.562 }, 00:31:01.562 { 00:31:01.562 "name": null, 00:31:01.562 "uuid": "00000000-0000-0000-0000-000000000000", 
00:31:01.562 "is_configured": false, 00:31:01.562 "data_offset": 2048, 00:31:01.562 "data_size": 63488 00:31:01.562 }, 00:31:01.562 { 00:31:01.562 "name": "BaseBdev3", 00:31:01.562 "uuid": "c67cb6e7-66fa-557d-9a11-00a5536cad0b", 00:31:01.562 "is_configured": true, 00:31:01.562 "data_offset": 2048, 00:31:01.562 "data_size": 63488 00:31:01.562 }, 00:31:01.562 { 00:31:01.562 "name": "BaseBdev4", 00:31:01.562 "uuid": "122e03dd-e7ec-55a6-a387-b142759c9a91", 00:31:01.562 "is_configured": true, 00:31:01.562 "data_offset": 2048, 00:31:01.562 "data_size": 63488 00:31:01.562 } 00:31:01.562 ] 00:31:01.562 }' 00:31:01.562 11:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:01.562 11:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:01.562 11:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:01.562 11:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:01.562 11:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@787 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:31:01.821 11:13:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@788 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:02.080 [2024-07-25 11:13:09.012582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:02.080 [2024-07-25 11:13:09.012649] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:02.080 [2024-07-25 11:13:09.012679] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000045980 00:31:02.080 [2024-07-25 11:13:09.012694] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:31:02.080 [2024-07-25 11:13:09.013280] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:02.080 [2024-07-25 11:13:09.013307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:02.080 [2024-07-25 11:13:09.013407] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:31:02.080 [2024-07-25 11:13:09.013427] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:31:02.080 [2024-07-25 11:13:09.013446] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:31:02.080 BaseBdev1 00:31:02.080 11:13:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@789 -- # sleep 1 00:31:03.018 11:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:03.018 11:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:03.018 11:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:03.018 11:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:03.018 11:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:03.018 11:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:03.018 11:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:03.018 11:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:03.018 11:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:03.018 11:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:03.018 11:13:10 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:03.018 11:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:03.277 11:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:03.277 "name": "raid_bdev1", 00:31:03.277 "uuid": "6ee14795-7f9a-4804-8b19-54aa920d7a67", 00:31:03.277 "strip_size_kb": 0, 00:31:03.277 "state": "online", 00:31:03.277 "raid_level": "raid1", 00:31:03.277 "superblock": true, 00:31:03.277 "num_base_bdevs": 4, 00:31:03.277 "num_base_bdevs_discovered": 2, 00:31:03.277 "num_base_bdevs_operational": 2, 00:31:03.277 "base_bdevs_list": [ 00:31:03.277 { 00:31:03.277 "name": null, 00:31:03.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:03.277 "is_configured": false, 00:31:03.277 "data_offset": 2048, 00:31:03.277 "data_size": 63488 00:31:03.277 }, 00:31:03.277 { 00:31:03.277 "name": null, 00:31:03.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:03.277 "is_configured": false, 00:31:03.277 "data_offset": 2048, 00:31:03.277 "data_size": 63488 00:31:03.277 }, 00:31:03.277 { 00:31:03.277 "name": "BaseBdev3", 00:31:03.277 "uuid": "c67cb6e7-66fa-557d-9a11-00a5536cad0b", 00:31:03.277 "is_configured": true, 00:31:03.277 "data_offset": 2048, 00:31:03.277 "data_size": 63488 00:31:03.277 }, 00:31:03.277 { 00:31:03.277 "name": "BaseBdev4", 00:31:03.277 "uuid": "122e03dd-e7ec-55a6-a387-b142759c9a91", 00:31:03.277 "is_configured": true, 00:31:03.277 "data_offset": 2048, 00:31:03.277 "data_size": 63488 00:31:03.277 } 00:31:03.277 ] 00:31:03.277 }' 00:31:03.277 11:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:03.277 11:13:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:03.846 11:13:10 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:03.846 11:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:03.846 11:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:03.846 11:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:03.846 11:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:03.846 11:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:03.846 11:13:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:04.106 11:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:04.106 "name": "raid_bdev1", 00:31:04.106 "uuid": "6ee14795-7f9a-4804-8b19-54aa920d7a67", 00:31:04.106 "strip_size_kb": 0, 00:31:04.106 "state": "online", 00:31:04.106 "raid_level": "raid1", 00:31:04.106 "superblock": true, 00:31:04.106 "num_base_bdevs": 4, 00:31:04.106 "num_base_bdevs_discovered": 2, 00:31:04.106 "num_base_bdevs_operational": 2, 00:31:04.106 "base_bdevs_list": [ 00:31:04.106 { 00:31:04.106 "name": null, 00:31:04.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:04.106 "is_configured": false, 00:31:04.106 "data_offset": 2048, 00:31:04.106 "data_size": 63488 00:31:04.106 }, 00:31:04.106 { 00:31:04.106 "name": null, 00:31:04.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:04.106 "is_configured": false, 00:31:04.106 "data_offset": 2048, 00:31:04.106 "data_size": 63488 00:31:04.106 }, 00:31:04.106 { 00:31:04.106 "name": "BaseBdev3", 00:31:04.106 "uuid": "c67cb6e7-66fa-557d-9a11-00a5536cad0b", 00:31:04.106 "is_configured": true, 00:31:04.106 "data_offset": 2048, 00:31:04.106 "data_size": 63488 
00:31:04.106 }, 00:31:04.106 { 00:31:04.106 "name": "BaseBdev4", 00:31:04.106 "uuid": "122e03dd-e7ec-55a6-a387-b142759c9a91", 00:31:04.106 "is_configured": true, 00:31:04.106 "data_offset": 2048, 00:31:04.106 "data_size": 63488 00:31:04.106 } 00:31:04.106 ] 00:31:04.106 }' 00:31:04.106 11:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:04.106 11:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:04.106 11:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:04.106 11:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:04.106 11:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@792 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:04.106 11:13:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:31:04.106 11:13:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:04.106 11:13:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:31:04.106 11:13:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:04.106 11:13:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:31:04.106 11:13:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:04.106 11:13:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -P 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:31:04.106 11:13:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:04.106 11:13:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:31:04.106 11:13:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:31:04.106 11:13:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:04.403 [2024-07-25 11:13:11.363554] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:04.403 [2024-07-25 11:13:11.363738] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:31:04.403 [2024-07-25 11:13:11.363760] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:31:04.403 request: 00:31:04.403 { 00:31:04.403 "base_bdev": "BaseBdev1", 00:31:04.403 "raid_bdev": "raid_bdev1", 00:31:04.403 "method": "bdev_raid_add_base_bdev", 00:31:04.403 "req_id": 1 00:31:04.403 } 00:31:04.403 Got JSON-RPC error response 00:31:04.403 response: 00:31:04.403 { 00:31:04.403 "code": -22, 00:31:04.403 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:31:04.403 } 00:31:04.403 11:13:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:31:04.403 11:13:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:04.403 11:13:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:04.403 11:13:11 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:04.403 11:13:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@793 -- # sleep 1 00:31:05.347 11:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:05.347 11:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:05.347 11:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:05.347 11:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:05.347 11:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:05.347 11:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:05.347 11:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:05.347 11:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:05.347 11:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:05.347 11:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:05.347 11:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:05.347 11:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:05.606 11:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:05.606 "name": "raid_bdev1", 00:31:05.606 "uuid": "6ee14795-7f9a-4804-8b19-54aa920d7a67", 00:31:05.606 "strip_size_kb": 0, 00:31:05.606 "state": "online", 00:31:05.606 "raid_level": "raid1", 00:31:05.606 "superblock": true, 00:31:05.606 "num_base_bdevs": 4, 00:31:05.606 
"num_base_bdevs_discovered": 2, 00:31:05.606 "num_base_bdevs_operational": 2, 00:31:05.606 "base_bdevs_list": [ 00:31:05.606 { 00:31:05.606 "name": null, 00:31:05.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:05.606 "is_configured": false, 00:31:05.606 "data_offset": 2048, 00:31:05.606 "data_size": 63488 00:31:05.606 }, 00:31:05.606 { 00:31:05.606 "name": null, 00:31:05.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:05.606 "is_configured": false, 00:31:05.606 "data_offset": 2048, 00:31:05.606 "data_size": 63488 00:31:05.606 }, 00:31:05.606 { 00:31:05.606 "name": "BaseBdev3", 00:31:05.606 "uuid": "c67cb6e7-66fa-557d-9a11-00a5536cad0b", 00:31:05.606 "is_configured": true, 00:31:05.606 "data_offset": 2048, 00:31:05.606 "data_size": 63488 00:31:05.606 }, 00:31:05.606 { 00:31:05.606 "name": "BaseBdev4", 00:31:05.606 "uuid": "122e03dd-e7ec-55a6-a387-b142759c9a91", 00:31:05.606 "is_configured": true, 00:31:05.606 "data_offset": 2048, 00:31:05.606 "data_size": 63488 00:31:05.606 } 00:31:05.606 ] 00:31:05.606 }' 00:31:05.606 11:13:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:05.606 11:13:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:06.175 11:13:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:06.175 11:13:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:06.175 11:13:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:06.175 11:13:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:06.175 11:13:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:06.175 11:13:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:31:06.175 11:13:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:06.434 11:13:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:06.434 "name": "raid_bdev1", 00:31:06.434 "uuid": "6ee14795-7f9a-4804-8b19-54aa920d7a67", 00:31:06.434 "strip_size_kb": 0, 00:31:06.434 "state": "online", 00:31:06.434 "raid_level": "raid1", 00:31:06.434 "superblock": true, 00:31:06.434 "num_base_bdevs": 4, 00:31:06.434 "num_base_bdevs_discovered": 2, 00:31:06.434 "num_base_bdevs_operational": 2, 00:31:06.434 "base_bdevs_list": [ 00:31:06.434 { 00:31:06.434 "name": null, 00:31:06.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:06.434 "is_configured": false, 00:31:06.434 "data_offset": 2048, 00:31:06.434 "data_size": 63488 00:31:06.434 }, 00:31:06.434 { 00:31:06.434 "name": null, 00:31:06.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:06.434 "is_configured": false, 00:31:06.434 "data_offset": 2048, 00:31:06.434 "data_size": 63488 00:31:06.434 }, 00:31:06.434 { 00:31:06.434 "name": "BaseBdev3", 00:31:06.434 "uuid": "c67cb6e7-66fa-557d-9a11-00a5536cad0b", 00:31:06.434 "is_configured": true, 00:31:06.435 "data_offset": 2048, 00:31:06.435 "data_size": 63488 00:31:06.435 }, 00:31:06.435 { 00:31:06.435 "name": "BaseBdev4", 00:31:06.435 "uuid": "122e03dd-e7ec-55a6-a387-b142759c9a91", 00:31:06.435 "is_configured": true, 00:31:06.435 "data_offset": 2048, 00:31:06.435 "data_size": 63488 00:31:06.435 } 00:31:06.435 ] 00:31:06.435 }' 00:31:06.435 11:13:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:06.435 11:13:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:06.435 11:13:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:06.435 11:13:13 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:06.435 11:13:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@798 -- # killprocess 3730880 00:31:06.435 11:13:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 3730880 ']' 00:31:06.435 11:13:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 3730880 00:31:06.435 11:13:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:31:06.435 11:13:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:06.435 11:13:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3730880 00:31:06.694 11:13:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:06.694 11:13:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:06.694 11:13:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3730880' 00:31:06.694 killing process with pid 3730880 00:31:06.694 11:13:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 3730880 00:31:06.694 Received shutdown signal, test time was about 27.375081 seconds 00:31:06.694 00:31:06.694 Latency(us) 00:31:06.694 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:06.694 =================================================================================================================== 00:31:06.694 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:06.694 [2024-07-25 11:13:13.575268] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:06.694 11:13:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 3730880 00:31:06.694 [2024-07-25 11:13:13.575415] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:06.694 [2024-07-25 11:13:13.575502] 
bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:06.694 [2024-07-25 11:13:13.575519] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:31:07.262 [2024-07-25 11:13:14.081924] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:09.169 11:13:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@800 -- # return 0 00:31:09.169 00:31:09.169 real 0m35.235s 00:31:09.169 user 0m53.936s 00:31:09.169 sys 0m5.222s 00:31:09.169 11:13:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:09.169 11:13:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:09.169 ************************************ 00:31:09.169 END TEST raid_rebuild_test_sb_io 00:31:09.169 ************************************ 00:31:09.169 11:13:15 bdev_raid -- bdev/bdev_raid.sh@964 -- # '[' n == y ']' 00:31:09.169 11:13:15 bdev_raid -- bdev/bdev_raid.sh@976 -- # base_blocklen=4096 00:31:09.169 11:13:15 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:31:09.169 11:13:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:31:09.169 11:13:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:09.169 11:13:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:09.169 ************************************ 00:31:09.169 START TEST raid_state_function_test_sb_4k 00:31:09.169 ************************************ 00:31:09.169 11:13:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:31:09.169 11:13:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:31:09.169 11:13:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 
00:31:09.169 11:13:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:31:09.169 11:13:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:31:09.169 11:13:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:31:09.169 11:13:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:31:09.169 11:13:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:31:09.169 11:13:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:31:09.169 11:13:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:31:09.169 11:13:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:31:09.169 11:13:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:31:09.169 11:13:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:31:09.169 11:13:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:31:09.169 11:13:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:31:09.169 11:13:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:31:09.169 11:13:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local strip_size 00:31:09.169 11:13:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:31:09.169 11:13:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:31:09.169 11:13:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:31:09.169 11:13:16 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:31:09.169 11:13:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:31:09.169 11:13:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:31:09.169 11:13:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # raid_pid=3737164 00:31:09.169 11:13:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3737164' 00:31:09.169 Process raid pid: 3737164 00:31:09.169 11:13:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:31:09.169 11:13:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # waitforlisten 3737164 /var/tmp/spdk-raid.sock 00:31:09.169 11:13:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 3737164 ']' 00:31:09.169 11:13:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:09.169 11:13:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:09.169 11:13:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:09.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:09.169 11:13:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:09.169 11:13:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:09.169 [2024-07-25 11:13:16.136207] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:31:09.169 [2024-07-25 11:13:16.136322] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:09.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.169 EAL: Requested device 0000:3d:01.0 cannot be used 00:31:09.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.169 EAL: Requested device 0000:3d:01.1 cannot be used 00:31:09.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.169 EAL: Requested device 0000:3d:01.2 cannot be used 00:31:09.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.169 EAL: Requested device 0000:3d:01.3 cannot be used 00:31:09.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.169 EAL: Requested device 0000:3d:01.4 cannot be used 00:31:09.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.169 EAL: Requested device 0000:3d:01.5 cannot be used 00:31:09.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.169 EAL: Requested device 0000:3d:01.6 cannot be used 00:31:09.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.169 EAL: Requested device 0000:3d:01.7 cannot be used 00:31:09.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.169 EAL: Requested device 0000:3d:02.0 cannot be used 00:31:09.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.169 EAL: Requested device 0000:3d:02.1 cannot be used 00:31:09.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.169 EAL: Requested device 0000:3d:02.2 cannot be used 00:31:09.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.169 EAL: Requested device 0000:3d:02.3 cannot be used 00:31:09.169 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.169 EAL: Requested device 0000:3d:02.4 cannot be used 00:31:09.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.169 EAL: Requested device 0000:3d:02.5 cannot be used 00:31:09.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.169 EAL: Requested device 0000:3d:02.6 cannot be used 00:31:09.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.169 EAL: Requested device 0000:3d:02.7 cannot be used 00:31:09.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.169 EAL: Requested device 0000:3f:01.0 cannot be used 00:31:09.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.169 EAL: Requested device 0000:3f:01.1 cannot be used 00:31:09.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.169 EAL: Requested device 0000:3f:01.2 cannot be used 00:31:09.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.169 EAL: Requested device 0000:3f:01.3 cannot be used 00:31:09.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.169 EAL: Requested device 0000:3f:01.4 cannot be used 00:31:09.169 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.169 EAL: Requested device 0000:3f:01.5 cannot be used 00:31:09.170 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.170 EAL: Requested device 0000:3f:01.6 cannot be used 00:31:09.170 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.170 EAL: Requested device 0000:3f:01.7 cannot be used 00:31:09.170 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.170 EAL: Requested device 0000:3f:02.0 cannot be used 00:31:09.170 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.170 EAL: Requested device 0000:3f:02.1 cannot be used 00:31:09.170 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.170 EAL: Requested device 0000:3f:02.2 cannot be used 00:31:09.170 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.170 EAL: Requested device 0000:3f:02.3 cannot be used 00:31:09.170 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.170 EAL: Requested device 0000:3f:02.4 cannot be used 00:31:09.170 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.170 EAL: Requested device 0000:3f:02.5 cannot be used 00:31:09.170 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.170 EAL: Requested device 0000:3f:02.6 cannot be used 00:31:09.170 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:09.170 EAL: Requested device 0000:3f:02.7 cannot be used 00:31:09.429 [2024-07-25 11:13:16.364744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:09.688 [2024-07-25 11:13:16.632470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:09.947 [2024-07-25 11:13:16.969398] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:09.947 [2024-07-25 11:13:16.969443] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:10.205 11:13:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:10.205 11:13:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:31:10.205 11:13:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:31:10.465 [2024-07-25 11:13:17.359630] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:10.465 [2024-07-25 11:13:17.359683] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 
doesn't exist now 00:31:10.465 [2024-07-25 11:13:17.359698] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:10.465 [2024-07-25 11:13:17.359714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:10.465 11:13:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:31:10.465 11:13:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:10.465 11:13:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:10.465 11:13:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:10.465 11:13:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:10.465 11:13:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:10.465 11:13:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:10.465 11:13:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:10.465 11:13:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:10.465 11:13:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:10.465 11:13:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:10.465 11:13:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:10.724 11:13:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:10.724 "name": "Existed_Raid", 
00:31:10.724 "uuid": "1a98095f-a0ff-4f9f-b91f-84ec4a659950", 00:31:10.724 "strip_size_kb": 0, 00:31:10.724 "state": "configuring", 00:31:10.724 "raid_level": "raid1", 00:31:10.724 "superblock": true, 00:31:10.724 "num_base_bdevs": 2, 00:31:10.724 "num_base_bdevs_discovered": 0, 00:31:10.724 "num_base_bdevs_operational": 2, 00:31:10.724 "base_bdevs_list": [ 00:31:10.724 { 00:31:10.724 "name": "BaseBdev1", 00:31:10.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:10.724 "is_configured": false, 00:31:10.724 "data_offset": 0, 00:31:10.724 "data_size": 0 00:31:10.724 }, 00:31:10.724 { 00:31:10.724 "name": "BaseBdev2", 00:31:10.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:10.724 "is_configured": false, 00:31:10.724 "data_offset": 0, 00:31:10.724 "data_size": 0 00:31:10.724 } 00:31:10.724 ] 00:31:10.724 }' 00:31:10.724 11:13:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:10.724 11:13:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:11.291 11:13:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:31:11.291 [2024-07-25 11:13:18.382231] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:11.291 [2024-07-25 11:13:18.382271] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:31:11.291 11:13:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:31:11.549 [2024-07-25 11:13:18.610884] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:11.549 [2024-07-25 11:13:18.610927] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:11.549 [2024-07-25 11:13:18.610940] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:11.549 [2024-07-25 11:13:18.610956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:11.549 11:13:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:31:11.808 [2024-07-25 11:13:18.897369] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:11.808 BaseBdev1 00:31:11.808 11:13:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:31:11.808 11:13:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:31:11.808 11:13:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:11.808 11:13:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:31:11.808 11:13:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:11.808 11:13:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:11.808 11:13:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:12.067 11:13:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:12.326 [ 00:31:12.326 { 00:31:12.326 "name": "BaseBdev1", 00:31:12.326 "aliases": [ 00:31:12.326 "c1d48545-3b50-4ea2-bd35-6742d1905ad8" 
00:31:12.326 ], 00:31:12.326 "product_name": "Malloc disk", 00:31:12.326 "block_size": 4096, 00:31:12.326 "num_blocks": 8192, 00:31:12.326 "uuid": "c1d48545-3b50-4ea2-bd35-6742d1905ad8", 00:31:12.326 "assigned_rate_limits": { 00:31:12.326 "rw_ios_per_sec": 0, 00:31:12.326 "rw_mbytes_per_sec": 0, 00:31:12.326 "r_mbytes_per_sec": 0, 00:31:12.326 "w_mbytes_per_sec": 0 00:31:12.326 }, 00:31:12.326 "claimed": true, 00:31:12.326 "claim_type": "exclusive_write", 00:31:12.326 "zoned": false, 00:31:12.326 "supported_io_types": { 00:31:12.326 "read": true, 00:31:12.326 "write": true, 00:31:12.326 "unmap": true, 00:31:12.326 "flush": true, 00:31:12.326 "reset": true, 00:31:12.326 "nvme_admin": false, 00:31:12.326 "nvme_io": false, 00:31:12.326 "nvme_io_md": false, 00:31:12.326 "write_zeroes": true, 00:31:12.326 "zcopy": true, 00:31:12.326 "get_zone_info": false, 00:31:12.326 "zone_management": false, 00:31:12.326 "zone_append": false, 00:31:12.326 "compare": false, 00:31:12.326 "compare_and_write": false, 00:31:12.326 "abort": true, 00:31:12.326 "seek_hole": false, 00:31:12.326 "seek_data": false, 00:31:12.326 "copy": true, 00:31:12.326 "nvme_iov_md": false 00:31:12.326 }, 00:31:12.326 "memory_domains": [ 00:31:12.326 { 00:31:12.326 "dma_device_id": "system", 00:31:12.326 "dma_device_type": 1 00:31:12.326 }, 00:31:12.326 { 00:31:12.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:12.326 "dma_device_type": 2 00:31:12.326 } 00:31:12.326 ], 00:31:12.326 "driver_specific": {} 00:31:12.326 } 00:31:12.326 ] 00:31:12.326 11:13:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:31:12.326 11:13:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:31:12.326 11:13:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:12.326 11:13:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # 
local expected_state=configuring 00:31:12.326 11:13:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:12.326 11:13:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:12.326 11:13:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:12.326 11:13:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:12.326 11:13:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:12.326 11:13:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:12.326 11:13:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:12.326 11:13:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:12.326 11:13:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:12.586 11:13:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:12.586 "name": "Existed_Raid", 00:31:12.586 "uuid": "7f07c94f-f019-4a29-a019-85fd295945e4", 00:31:12.586 "strip_size_kb": 0, 00:31:12.586 "state": "configuring", 00:31:12.586 "raid_level": "raid1", 00:31:12.586 "superblock": true, 00:31:12.586 "num_base_bdevs": 2, 00:31:12.586 "num_base_bdevs_discovered": 1, 00:31:12.586 "num_base_bdevs_operational": 2, 00:31:12.586 "base_bdevs_list": [ 00:31:12.586 { 00:31:12.586 "name": "BaseBdev1", 00:31:12.586 "uuid": "c1d48545-3b50-4ea2-bd35-6742d1905ad8", 00:31:12.586 "is_configured": true, 00:31:12.586 "data_offset": 256, 00:31:12.586 "data_size": 7936 00:31:12.586 }, 00:31:12.586 { 00:31:12.586 "name": "BaseBdev2", 00:31:12.586 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:31:12.586 "is_configured": false, 00:31:12.586 "data_offset": 0, 00:31:12.586 "data_size": 0 00:31:12.586 } 00:31:12.586 ] 00:31:12.586 }' 00:31:12.586 11:13:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:12.586 11:13:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:13.154 11:13:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:31:13.413 [2024-07-25 11:13:20.385476] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:13.413 [2024-07-25 11:13:20.385529] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:31:13.413 11:13:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:31:13.672 [2024-07-25 11:13:20.614177] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:13.672 [2024-07-25 11:13:20.616478] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:13.672 [2024-07-25 11:13:20.616522] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:13.672 11:13:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:31:13.672 11:13:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:31:13.672 11:13:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:31:13.672 11:13:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:31:13.672 11:13:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:13.672 11:13:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:13.672 11:13:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:13.672 11:13:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:13.672 11:13:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:13.672 11:13:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:13.672 11:13:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:13.672 11:13:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:13.672 11:13:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:13.672 11:13:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:13.932 11:13:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:13.932 "name": "Existed_Raid", 00:31:13.932 "uuid": "7e9f548e-5a7b-442b-b533-445521bd860a", 00:31:13.932 "strip_size_kb": 0, 00:31:13.932 "state": "configuring", 00:31:13.932 "raid_level": "raid1", 00:31:13.932 "superblock": true, 00:31:13.932 "num_base_bdevs": 2, 00:31:13.932 "num_base_bdevs_discovered": 1, 00:31:13.932 "num_base_bdevs_operational": 2, 00:31:13.932 "base_bdevs_list": [ 00:31:13.932 { 00:31:13.932 "name": "BaseBdev1", 00:31:13.932 "uuid": "c1d48545-3b50-4ea2-bd35-6742d1905ad8", 00:31:13.932 "is_configured": true, 00:31:13.932 "data_offset": 256, 
00:31:13.932 "data_size": 7936 00:31:13.932 }, 00:31:13.932 { 00:31:13.932 "name": "BaseBdev2", 00:31:13.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:13.932 "is_configured": false, 00:31:13.932 "data_offset": 0, 00:31:13.932 "data_size": 0 00:31:13.932 } 00:31:13.932 ] 00:31:13.932 }' 00:31:13.932 11:13:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:13.932 11:13:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:14.499 11:13:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:31:14.758 [2024-07-25 11:13:21.649742] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:14.758 [2024-07-25 11:13:21.650013] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:31:14.758 [2024-07-25 11:13:21.650037] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:14.758 [2024-07-25 11:13:21.650375] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:31:14.758 [2024-07-25 11:13:21.650596] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:31:14.758 [2024-07-25 11:13:21.650615] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:31:14.758 [2024-07-25 11:13:21.650797] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:14.758 BaseBdev2 00:31:14.758 11:13:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:31:14.758 11:13:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:31:14.758 11:13:21 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:31:14.758 11:13:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:31:14.758 11:13:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:31:14.758 11:13:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:31:14.758 11:13:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:15.017 11:13:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:15.017 [ 00:31:15.017 { 00:31:15.017 "name": "BaseBdev2", 00:31:15.017 "aliases": [ 00:31:15.017 "120ae783-d288-49c9-adf5-e091946ef009" 00:31:15.017 ], 00:31:15.017 "product_name": "Malloc disk", 00:31:15.017 "block_size": 4096, 00:31:15.017 "num_blocks": 8192, 00:31:15.017 "uuid": "120ae783-d288-49c9-adf5-e091946ef009", 00:31:15.017 "assigned_rate_limits": { 00:31:15.017 "rw_ios_per_sec": 0, 00:31:15.017 "rw_mbytes_per_sec": 0, 00:31:15.017 "r_mbytes_per_sec": 0, 00:31:15.017 "w_mbytes_per_sec": 0 00:31:15.017 }, 00:31:15.017 "claimed": true, 00:31:15.017 "claim_type": "exclusive_write", 00:31:15.017 "zoned": false, 00:31:15.017 "supported_io_types": { 00:31:15.017 "read": true, 00:31:15.017 "write": true, 00:31:15.017 "unmap": true, 00:31:15.017 "flush": true, 00:31:15.017 "reset": true, 00:31:15.017 "nvme_admin": false, 00:31:15.017 "nvme_io": false, 00:31:15.017 "nvme_io_md": false, 00:31:15.017 "write_zeroes": true, 00:31:15.017 "zcopy": true, 00:31:15.017 "get_zone_info": false, 00:31:15.017 "zone_management": false, 00:31:15.017 "zone_append": false, 00:31:15.017 "compare": false, 00:31:15.017 "compare_and_write": false, 
00:31:15.017 "abort": true, 00:31:15.017 "seek_hole": false, 00:31:15.017 "seek_data": false, 00:31:15.017 "copy": true, 00:31:15.017 "nvme_iov_md": false 00:31:15.017 }, 00:31:15.017 "memory_domains": [ 00:31:15.017 { 00:31:15.017 "dma_device_id": "system", 00:31:15.017 "dma_device_type": 1 00:31:15.017 }, 00:31:15.017 { 00:31:15.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:15.017 "dma_device_type": 2 00:31:15.017 } 00:31:15.017 ], 00:31:15.017 "driver_specific": {} 00:31:15.017 } 00:31:15.017 ] 00:31:15.017 11:13:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:31:15.017 11:13:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:31:15.017 11:13:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:31:15.017 11:13:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:31:15.017 11:13:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:15.017 11:13:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:15.017 11:13:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:15.017 11:13:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:15.017 11:13:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:15.017 11:13:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:15.017 11:13:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:15.017 11:13:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:15.017 11:13:22 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:15.017 11:13:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:15.017 11:13:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:15.275 11:13:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:15.275 "name": "Existed_Raid", 00:31:15.275 "uuid": "7e9f548e-5a7b-442b-b533-445521bd860a", 00:31:15.275 "strip_size_kb": 0, 00:31:15.275 "state": "online", 00:31:15.275 "raid_level": "raid1", 00:31:15.275 "superblock": true, 00:31:15.275 "num_base_bdevs": 2, 00:31:15.275 "num_base_bdevs_discovered": 2, 00:31:15.275 "num_base_bdevs_operational": 2, 00:31:15.275 "base_bdevs_list": [ 00:31:15.275 { 00:31:15.275 "name": "BaseBdev1", 00:31:15.275 "uuid": "c1d48545-3b50-4ea2-bd35-6742d1905ad8", 00:31:15.275 "is_configured": true, 00:31:15.275 "data_offset": 256, 00:31:15.275 "data_size": 7936 00:31:15.275 }, 00:31:15.275 { 00:31:15.275 "name": "BaseBdev2", 00:31:15.275 "uuid": "120ae783-d288-49c9-adf5-e091946ef009", 00:31:15.275 "is_configured": true, 00:31:15.275 "data_offset": 256, 00:31:15.275 "data_size": 7936 00:31:15.275 } 00:31:15.275 ] 00:31:15.275 }' 00:31:15.275 11:13:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:15.275 11:13:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:15.842 11:13:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:31:15.842 11:13:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:31:15.842 11:13:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local 
raid_bdev_info 00:31:15.842 11:13:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:31:15.842 11:13:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:31:15.842 11:13:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local name 00:31:15.842 11:13:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:31:15.842 11:13:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:31:16.101 [2024-07-25 11:13:23.146177] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:16.101 11:13:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:31:16.101 "name": "Existed_Raid", 00:31:16.101 "aliases": [ 00:31:16.101 "7e9f548e-5a7b-442b-b533-445521bd860a" 00:31:16.101 ], 00:31:16.101 "product_name": "Raid Volume", 00:31:16.101 "block_size": 4096, 00:31:16.101 "num_blocks": 7936, 00:31:16.101 "uuid": "7e9f548e-5a7b-442b-b533-445521bd860a", 00:31:16.101 "assigned_rate_limits": { 00:31:16.101 "rw_ios_per_sec": 0, 00:31:16.101 "rw_mbytes_per_sec": 0, 00:31:16.101 "r_mbytes_per_sec": 0, 00:31:16.101 "w_mbytes_per_sec": 0 00:31:16.101 }, 00:31:16.101 "claimed": false, 00:31:16.101 "zoned": false, 00:31:16.101 "supported_io_types": { 00:31:16.101 "read": true, 00:31:16.101 "write": true, 00:31:16.101 "unmap": false, 00:31:16.101 "flush": false, 00:31:16.101 "reset": true, 00:31:16.101 "nvme_admin": false, 00:31:16.101 "nvme_io": false, 00:31:16.101 "nvme_io_md": false, 00:31:16.101 "write_zeroes": true, 00:31:16.101 "zcopy": false, 00:31:16.101 "get_zone_info": false, 00:31:16.101 "zone_management": false, 00:31:16.101 "zone_append": false, 00:31:16.101 "compare": false, 00:31:16.101 "compare_and_write": false, 00:31:16.101 
"abort": false, 00:31:16.101 "seek_hole": false, 00:31:16.101 "seek_data": false, 00:31:16.101 "copy": false, 00:31:16.101 "nvme_iov_md": false 00:31:16.101 }, 00:31:16.101 "memory_domains": [ 00:31:16.101 { 00:31:16.101 "dma_device_id": "system", 00:31:16.101 "dma_device_type": 1 00:31:16.101 }, 00:31:16.101 { 00:31:16.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:16.101 "dma_device_type": 2 00:31:16.101 }, 00:31:16.101 { 00:31:16.101 "dma_device_id": "system", 00:31:16.101 "dma_device_type": 1 00:31:16.101 }, 00:31:16.101 { 00:31:16.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:16.101 "dma_device_type": 2 00:31:16.101 } 00:31:16.101 ], 00:31:16.101 "driver_specific": { 00:31:16.101 "raid": { 00:31:16.101 "uuid": "7e9f548e-5a7b-442b-b533-445521bd860a", 00:31:16.101 "strip_size_kb": 0, 00:31:16.101 "state": "online", 00:31:16.101 "raid_level": "raid1", 00:31:16.101 "superblock": true, 00:31:16.101 "num_base_bdevs": 2, 00:31:16.101 "num_base_bdevs_discovered": 2, 00:31:16.101 "num_base_bdevs_operational": 2, 00:31:16.101 "base_bdevs_list": [ 00:31:16.101 { 00:31:16.101 "name": "BaseBdev1", 00:31:16.101 "uuid": "c1d48545-3b50-4ea2-bd35-6742d1905ad8", 00:31:16.101 "is_configured": true, 00:31:16.101 "data_offset": 256, 00:31:16.101 "data_size": 7936 00:31:16.101 }, 00:31:16.101 { 00:31:16.101 "name": "BaseBdev2", 00:31:16.101 "uuid": "120ae783-d288-49c9-adf5-e091946ef009", 00:31:16.101 "is_configured": true, 00:31:16.101 "data_offset": 256, 00:31:16.101 "data_size": 7936 00:31:16.101 } 00:31:16.101 ] 00:31:16.101 } 00:31:16.101 } 00:31:16.101 }' 00:31:16.101 11:13:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:16.101 11:13:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:31:16.101 BaseBdev2' 00:31:16.101 11:13:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # 
for name in $base_bdev_names 00:31:16.101 11:13:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:31:16.101 11:13:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:16.360 11:13:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:16.360 "name": "BaseBdev1", 00:31:16.360 "aliases": [ 00:31:16.360 "c1d48545-3b50-4ea2-bd35-6742d1905ad8" 00:31:16.360 ], 00:31:16.360 "product_name": "Malloc disk", 00:31:16.360 "block_size": 4096, 00:31:16.360 "num_blocks": 8192, 00:31:16.360 "uuid": "c1d48545-3b50-4ea2-bd35-6742d1905ad8", 00:31:16.360 "assigned_rate_limits": { 00:31:16.360 "rw_ios_per_sec": 0, 00:31:16.360 "rw_mbytes_per_sec": 0, 00:31:16.360 "r_mbytes_per_sec": 0, 00:31:16.360 "w_mbytes_per_sec": 0 00:31:16.360 }, 00:31:16.360 "claimed": true, 00:31:16.360 "claim_type": "exclusive_write", 00:31:16.360 "zoned": false, 00:31:16.360 "supported_io_types": { 00:31:16.360 "read": true, 00:31:16.360 "write": true, 00:31:16.360 "unmap": true, 00:31:16.360 "flush": true, 00:31:16.360 "reset": true, 00:31:16.360 "nvme_admin": false, 00:31:16.360 "nvme_io": false, 00:31:16.360 "nvme_io_md": false, 00:31:16.360 "write_zeroes": true, 00:31:16.360 "zcopy": true, 00:31:16.360 "get_zone_info": false, 00:31:16.360 "zone_management": false, 00:31:16.360 "zone_append": false, 00:31:16.360 "compare": false, 00:31:16.360 "compare_and_write": false, 00:31:16.360 "abort": true, 00:31:16.360 "seek_hole": false, 00:31:16.360 "seek_data": false, 00:31:16.360 "copy": true, 00:31:16.360 "nvme_iov_md": false 00:31:16.360 }, 00:31:16.360 "memory_domains": [ 00:31:16.360 { 00:31:16.360 "dma_device_id": "system", 00:31:16.360 "dma_device_type": 1 00:31:16.360 }, 00:31:16.360 { 00:31:16.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:16.360 "dma_device_type": 2 
00:31:16.360 } 00:31:16.360 ], 00:31:16.360 "driver_specific": {} 00:31:16.360 }' 00:31:16.360 11:13:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:16.618 11:13:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:16.618 11:13:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:31:16.618 11:13:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:16.618 11:13:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:16.618 11:13:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:16.618 11:13:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:16.618 11:13:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:16.618 11:13:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:16.618 11:13:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:16.876 11:13:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:16.876 11:13:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:16.876 11:13:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:16.876 11:13:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:31:16.876 11:13:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:17.134 11:13:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:17.134 "name": "BaseBdev2", 00:31:17.134 "aliases": [ 
00:31:17.134 "120ae783-d288-49c9-adf5-e091946ef009" 00:31:17.134 ], 00:31:17.134 "product_name": "Malloc disk", 00:31:17.134 "block_size": 4096, 00:31:17.134 "num_blocks": 8192, 00:31:17.134 "uuid": "120ae783-d288-49c9-adf5-e091946ef009", 00:31:17.134 "assigned_rate_limits": { 00:31:17.134 "rw_ios_per_sec": 0, 00:31:17.134 "rw_mbytes_per_sec": 0, 00:31:17.134 "r_mbytes_per_sec": 0, 00:31:17.134 "w_mbytes_per_sec": 0 00:31:17.134 }, 00:31:17.134 "claimed": true, 00:31:17.134 "claim_type": "exclusive_write", 00:31:17.135 "zoned": false, 00:31:17.135 "supported_io_types": { 00:31:17.135 "read": true, 00:31:17.135 "write": true, 00:31:17.135 "unmap": true, 00:31:17.135 "flush": true, 00:31:17.135 "reset": true, 00:31:17.135 "nvme_admin": false, 00:31:17.135 "nvme_io": false, 00:31:17.135 "nvme_io_md": false, 00:31:17.135 "write_zeroes": true, 00:31:17.135 "zcopy": true, 00:31:17.135 "get_zone_info": false, 00:31:17.135 "zone_management": false, 00:31:17.135 "zone_append": false, 00:31:17.135 "compare": false, 00:31:17.135 "compare_and_write": false, 00:31:17.135 "abort": true, 00:31:17.135 "seek_hole": false, 00:31:17.135 "seek_data": false, 00:31:17.135 "copy": true, 00:31:17.135 "nvme_iov_md": false 00:31:17.135 }, 00:31:17.135 "memory_domains": [ 00:31:17.135 { 00:31:17.135 "dma_device_id": "system", 00:31:17.135 "dma_device_type": 1 00:31:17.135 }, 00:31:17.135 { 00:31:17.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:17.135 "dma_device_type": 2 00:31:17.135 } 00:31:17.135 ], 00:31:17.135 "driver_specific": {} 00:31:17.135 }' 00:31:17.135 11:13:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:17.135 11:13:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:17.135 11:13:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:31:17.135 11:13:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:31:17.135 11:13:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:17.135 11:13:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:17.135 11:13:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:17.135 11:13:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:17.393 11:13:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:17.393 11:13:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:17.393 11:13:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:17.393 11:13:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:17.393 11:13:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:31:17.652 [2024-07-25 11:13:24.569796] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:17.652 11:13:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # local expected_state 00:31:17.652 11:13:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:31:17.652 11:13:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:31:17.652 11:13:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:31:17.652 11:13:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:31:17.652 11:13:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:31:17.652 11:13:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:31:17.652 11:13:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:17.652 11:13:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:17.652 11:13:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:17.652 11:13:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:17.652 11:13:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:17.652 11:13:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:17.652 11:13:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:17.652 11:13:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:17.652 11:13:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:17.652 11:13:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:17.911 11:13:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:17.911 "name": "Existed_Raid", 00:31:17.911 "uuid": "7e9f548e-5a7b-442b-b533-445521bd860a", 00:31:17.911 "strip_size_kb": 0, 00:31:17.911 "state": "online", 00:31:17.911 "raid_level": "raid1", 00:31:17.911 "superblock": true, 00:31:17.911 "num_base_bdevs": 2, 00:31:17.911 "num_base_bdevs_discovered": 1, 00:31:17.911 "num_base_bdevs_operational": 1, 00:31:17.911 "base_bdevs_list": [ 00:31:17.911 { 00:31:17.911 "name": null, 00:31:17.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:17.911 "is_configured": false, 00:31:17.911 "data_offset": 256, 00:31:17.911 
"data_size": 7936 00:31:17.911 }, 00:31:17.911 { 00:31:17.911 "name": "BaseBdev2", 00:31:17.911 "uuid": "120ae783-d288-49c9-adf5-e091946ef009", 00:31:17.911 "is_configured": true, 00:31:17.911 "data_offset": 256, 00:31:17.911 "data_size": 7936 00:31:17.911 } 00:31:17.911 ] 00:31:17.911 }' 00:31:17.911 11:13:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:17.911 11:13:24 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:18.512 11:13:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:31:18.512 11:13:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:31:18.512 11:13:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:18.512 11:13:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:31:18.771 11:13:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:31:18.771 11:13:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:18.771 11:13:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:31:18.771 [2024-07-25 11:13:25.875528] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:18.771 [2024-07-25 11:13:25.875644] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:19.030 [2024-07-25 11:13:26.005558] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:19.030 [2024-07-25 11:13:26.005613] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going 
to free all in destruct 00:31:19.030 [2024-07-25 11:13:26.005632] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:31:19.030 11:13:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:31:19.030 11:13:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:31:19.030 11:13:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:19.030 11:13:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:31:19.290 11:13:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:31:19.290 11:13:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:31:19.290 11:13:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:31:19.290 11:13:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@341 -- # killprocess 3737164 00:31:19.290 11:13:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 3737164 ']' 00:31:19.290 11:13:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 3737164 00:31:19.290 11:13:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:31:19.290 11:13:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:19.290 11:13:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3737164 00:31:19.290 11:13:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:19.290 11:13:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:31:19.290 11:13:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3737164' 00:31:19.290 killing process with pid 3737164 00:31:19.290 11:13:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 3737164 00:31:19.290 [2024-07-25 11:13:26.311003] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:19.290 11:13:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 3737164 00:31:19.290 [2024-07-25 11:13:26.336295] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:21.197 11:13:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@343 -- # return 0 00:31:21.197 00:31:21.197 real 0m12.070s 00:31:21.197 user 0m19.695s 00:31:21.197 sys 0m2.080s 00:31:21.197 11:13:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:21.197 11:13:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:21.197 ************************************ 00:31:21.197 END TEST raid_state_function_test_sb_4k 00:31:21.197 ************************************ 00:31:21.197 11:13:28 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:31:21.197 11:13:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:21.197 11:13:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:21.197 11:13:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:21.197 ************************************ 00:31:21.197 START TEST raid_superblock_test_4k 00:31:21.197 ************************************ 00:31:21.197 11:13:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:31:21.197 11:13:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:31:21.197 11:13:28 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:31:21.197 11:13:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:31:21.197 11:13:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:31:21.197 11:13:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:31:21.197 11:13:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:31:21.197 11:13:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:31:21.197 11:13:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:31:21.197 11:13:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:31:21.197 11:13:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@414 -- # local strip_size 00:31:21.197 11:13:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:31:21.197 11:13:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:31:21.197 11:13:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:31:21.197 11:13:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:31:21.197 11:13:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:31:21.197 11:13:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@427 -- # raid_pid=3739250 00:31:21.197 11:13:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@428 -- # waitforlisten 3739250 /var/tmp/spdk-raid.sock 00:31:21.197 11:13:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:31:21.197 11:13:28 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@831 -- # '[' -z 3739250 ']' 00:31:21.197 11:13:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:21.197 11:13:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:21.197 11:13:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:21.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:21.197 11:13:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:21.197 11:13:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:21.197 [2024-07-25 11:13:28.260265] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:31:21.197 [2024-07-25 11:13:28.260414] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3739250 ] 00:31:21.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:21.455 EAL: Requested device 0000:3d:01.0 cannot be used 00:31:21.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:21.455 EAL: Requested device 0000:3d:01.1 cannot be used 00:31:21.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:21.455 EAL: Requested device 0000:3d:01.2 cannot be used 00:31:21.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:21.455 EAL: Requested device 0000:3d:01.3 cannot be used 00:31:21.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:21.455 EAL: Requested device 0000:3d:01.4 cannot be used 00:31:21.455 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:31:21.455 EAL: Requested device 0000:3d:01.5 cannot be used 00:31:21.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:21.455 EAL: Requested device 0000:3d:01.6 cannot be used 00:31:21.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:21.455 EAL: Requested device 0000:3d:01.7 cannot be used 00:31:21.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:21.455 EAL: Requested device 0000:3d:02.0 cannot be used 00:31:21.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:21.455 EAL: Requested device 0000:3d:02.1 cannot be used 00:31:21.455 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:21.455 EAL: Requested device 0000:3d:02.2 cannot be used 00:31:21.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:21.456 EAL: Requested device 0000:3d:02.3 cannot be used 00:31:21.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:21.456 EAL: Requested device 0000:3d:02.4 cannot be used 00:31:21.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:21.456 EAL: Requested device 0000:3d:02.5 cannot be used 00:31:21.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:21.456 EAL: Requested device 0000:3d:02.6 cannot be used 00:31:21.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:21.456 EAL: Requested device 0000:3d:02.7 cannot be used 00:31:21.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:21.456 EAL: Requested device 0000:3f:01.0 cannot be used 00:31:21.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:21.456 EAL: Requested device 0000:3f:01.1 cannot be used 00:31:21.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:21.456 EAL: Requested device 0000:3f:01.2 cannot be used 00:31:21.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:21.456 
EAL: Requested device 0000:3f:01.3 cannot be used 00:31:21.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:21.456 EAL: Requested device 0000:3f:01.4 cannot be used 00:31:21.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:21.456 EAL: Requested device 0000:3f:01.5 cannot be used 00:31:21.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:21.456 EAL: Requested device 0000:3f:01.6 cannot be used 00:31:21.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:21.456 EAL: Requested device 0000:3f:01.7 cannot be used 00:31:21.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:21.456 EAL: Requested device 0000:3f:02.0 cannot be used 00:31:21.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:21.456 EAL: Requested device 0000:3f:02.1 cannot be used 00:31:21.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:21.456 EAL: Requested device 0000:3f:02.2 cannot be used 00:31:21.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:21.456 EAL: Requested device 0000:3f:02.3 cannot be used 00:31:21.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:21.456 EAL: Requested device 0000:3f:02.4 cannot be used 00:31:21.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:21.456 EAL: Requested device 0000:3f:02.5 cannot be used 00:31:21.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:21.456 EAL: Requested device 0000:3f:02.6 cannot be used 00:31:21.456 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:21.456 EAL: Requested device 0000:3f:02.7 cannot be used 00:31:21.456 [2024-07-25 11:13:28.456029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:21.714 [2024-07-25 11:13:28.724877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:21.972 [2024-07-25 11:13:29.031834] 
bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:21.972 [2024-07-25 11:13:29.031873] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:22.230 11:13:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:22.230 11:13:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:31:22.230 11:13:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:31:22.230 11:13:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:31:22.230 11:13:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:31:22.230 11:13:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:31:22.230 11:13:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:31:22.230 11:13:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:22.230 11:13:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:31:22.230 11:13:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:22.230 11:13:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:31:22.488 malloc1 00:31:22.488 11:13:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:22.488 [2024-07-25 11:13:29.556087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:22.488 [2024-07-25 11:13:29.556157] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:22.488 [2024-07-25 11:13:29.556188] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:31:22.488 [2024-07-25 11:13:29.556206] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:22.488 [2024-07-25 11:13:29.558900] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:22.488 [2024-07-25 11:13:29.558933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:22.488 pt1 00:31:22.488 11:13:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:31:22.488 11:13:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:31:22.488 11:13:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:31:22.488 11:13:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:31:22.488 11:13:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:31:22.488 11:13:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:22.488 11:13:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:31:22.488 11:13:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:22.488 11:13:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:31:22.746 malloc2 00:31:22.746 11:13:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:23.005 [2024-07-25 
11:13:29.938189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:23.005 [2024-07-25 11:13:29.938238] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:23.005 [2024-07-25 11:13:29.938267] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:31:23.005 [2024-07-25 11:13:29.938283] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:23.005 [2024-07-25 11:13:29.940955] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:23.005 [2024-07-25 11:13:29.940992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:23.005 pt2 00:31:23.005 11:13:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:31:23.005 11:13:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:31:23.005 11:13:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:31:23.005 [2024-07-25 11:13:30.114693] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:23.005 [2024-07-25 11:13:30.117007] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:23.005 [2024-07-25 11:13:30.117226] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:31:23.005 [2024-07-25 11:13:30.117245] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:23.005 [2024-07-25 11:13:30.117593] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:31:23.005 [2024-07-25 11:13:30.117850] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:31:23.005 [2024-07-25 11:13:30.117869] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:31:23.005 [2024-07-25 11:13:30.118078] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:23.264 11:13:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:23.264 11:13:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:23.264 11:13:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:23.264 11:13:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:23.264 11:13:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:23.264 11:13:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:23.264 11:13:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:23.264 11:13:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:23.264 11:13:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:23.264 11:13:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:23.264 11:13:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:23.264 11:13:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:23.264 11:13:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:23.264 "name": "raid_bdev1", 00:31:23.264 "uuid": "6fb9b41d-7927-4e78-ad37-a708d4cb6bf1", 00:31:23.264 "strip_size_kb": 0, 00:31:23.264 "state": "online", 00:31:23.264 "raid_level": "raid1", 00:31:23.264 "superblock": true, 00:31:23.264 
"num_base_bdevs": 2, 00:31:23.264 "num_base_bdevs_discovered": 2, 00:31:23.264 "num_base_bdevs_operational": 2, 00:31:23.264 "base_bdevs_list": [ 00:31:23.264 { 00:31:23.264 "name": "pt1", 00:31:23.264 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:23.264 "is_configured": true, 00:31:23.264 "data_offset": 256, 00:31:23.264 "data_size": 7936 00:31:23.264 }, 00:31:23.264 { 00:31:23.264 "name": "pt2", 00:31:23.264 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:23.264 "is_configured": true, 00:31:23.264 "data_offset": 256, 00:31:23.264 "data_size": 7936 00:31:23.264 } 00:31:23.264 ] 00:31:23.264 }' 00:31:23.264 11:13:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:23.264 11:13:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:23.829 11:13:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:31:23.829 11:13:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:31:23.829 11:13:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:31:23.829 11:13:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:31:23.829 11:13:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:31:23.829 11:13:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:31:23.829 11:13:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:23.829 11:13:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:31:24.087 [2024-07-25 11:13:31.149899] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:24.087 11:13:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # 
raid_bdev_info='{ 00:31:24.087 "name": "raid_bdev1", 00:31:24.087 "aliases": [ 00:31:24.087 "6fb9b41d-7927-4e78-ad37-a708d4cb6bf1" 00:31:24.087 ], 00:31:24.087 "product_name": "Raid Volume", 00:31:24.087 "block_size": 4096, 00:31:24.087 "num_blocks": 7936, 00:31:24.087 "uuid": "6fb9b41d-7927-4e78-ad37-a708d4cb6bf1", 00:31:24.087 "assigned_rate_limits": { 00:31:24.087 "rw_ios_per_sec": 0, 00:31:24.087 "rw_mbytes_per_sec": 0, 00:31:24.087 "r_mbytes_per_sec": 0, 00:31:24.087 "w_mbytes_per_sec": 0 00:31:24.087 }, 00:31:24.087 "claimed": false, 00:31:24.087 "zoned": false, 00:31:24.087 "supported_io_types": { 00:31:24.087 "read": true, 00:31:24.087 "write": true, 00:31:24.087 "unmap": false, 00:31:24.087 "flush": false, 00:31:24.087 "reset": true, 00:31:24.087 "nvme_admin": false, 00:31:24.087 "nvme_io": false, 00:31:24.087 "nvme_io_md": false, 00:31:24.087 "write_zeroes": true, 00:31:24.087 "zcopy": false, 00:31:24.087 "get_zone_info": false, 00:31:24.087 "zone_management": false, 00:31:24.087 "zone_append": false, 00:31:24.087 "compare": false, 00:31:24.087 "compare_and_write": false, 00:31:24.087 "abort": false, 00:31:24.087 "seek_hole": false, 00:31:24.087 "seek_data": false, 00:31:24.087 "copy": false, 00:31:24.087 "nvme_iov_md": false 00:31:24.087 }, 00:31:24.087 "memory_domains": [ 00:31:24.087 { 00:31:24.087 "dma_device_id": "system", 00:31:24.087 "dma_device_type": 1 00:31:24.087 }, 00:31:24.087 { 00:31:24.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:24.087 "dma_device_type": 2 00:31:24.087 }, 00:31:24.087 { 00:31:24.087 "dma_device_id": "system", 00:31:24.087 "dma_device_type": 1 00:31:24.087 }, 00:31:24.087 { 00:31:24.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:24.087 "dma_device_type": 2 00:31:24.087 } 00:31:24.087 ], 00:31:24.087 "driver_specific": { 00:31:24.087 "raid": { 00:31:24.087 "uuid": "6fb9b41d-7927-4e78-ad37-a708d4cb6bf1", 00:31:24.087 "strip_size_kb": 0, 00:31:24.087 "state": "online", 00:31:24.087 "raid_level": "raid1", 
00:31:24.087 "superblock": true, 00:31:24.087 "num_base_bdevs": 2, 00:31:24.087 "num_base_bdevs_discovered": 2, 00:31:24.087 "num_base_bdevs_operational": 2, 00:31:24.087 "base_bdevs_list": [ 00:31:24.087 { 00:31:24.087 "name": "pt1", 00:31:24.087 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:24.087 "is_configured": true, 00:31:24.087 "data_offset": 256, 00:31:24.087 "data_size": 7936 00:31:24.087 }, 00:31:24.087 { 00:31:24.087 "name": "pt2", 00:31:24.087 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:24.087 "is_configured": true, 00:31:24.087 "data_offset": 256, 00:31:24.087 "data_size": 7936 00:31:24.087 } 00:31:24.087 ] 00:31:24.087 } 00:31:24.087 } 00:31:24.087 }' 00:31:24.087 11:13:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:24.346 11:13:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:31:24.346 pt2' 00:31:24.346 11:13:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:24.346 11:13:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:24.346 11:13:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:31:24.346 11:13:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:24.346 "name": "pt1", 00:31:24.346 "aliases": [ 00:31:24.346 "00000000-0000-0000-0000-000000000001" 00:31:24.346 ], 00:31:24.346 "product_name": "passthru", 00:31:24.346 "block_size": 4096, 00:31:24.346 "num_blocks": 8192, 00:31:24.346 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:24.346 "assigned_rate_limits": { 00:31:24.346 "rw_ios_per_sec": 0, 00:31:24.346 "rw_mbytes_per_sec": 0, 00:31:24.346 "r_mbytes_per_sec": 0, 00:31:24.346 "w_mbytes_per_sec": 0 00:31:24.346 }, 
00:31:24.346 "claimed": true, 00:31:24.346 "claim_type": "exclusive_write", 00:31:24.346 "zoned": false, 00:31:24.346 "supported_io_types": { 00:31:24.346 "read": true, 00:31:24.346 "write": true, 00:31:24.346 "unmap": true, 00:31:24.346 "flush": true, 00:31:24.346 "reset": true, 00:31:24.346 "nvme_admin": false, 00:31:24.346 "nvme_io": false, 00:31:24.346 "nvme_io_md": false, 00:31:24.346 "write_zeroes": true, 00:31:24.346 "zcopy": true, 00:31:24.346 "get_zone_info": false, 00:31:24.346 "zone_management": false, 00:31:24.346 "zone_append": false, 00:31:24.346 "compare": false, 00:31:24.346 "compare_and_write": false, 00:31:24.346 "abort": true, 00:31:24.346 "seek_hole": false, 00:31:24.346 "seek_data": false, 00:31:24.346 "copy": true, 00:31:24.346 "nvme_iov_md": false 00:31:24.346 }, 00:31:24.346 "memory_domains": [ 00:31:24.346 { 00:31:24.346 "dma_device_id": "system", 00:31:24.346 "dma_device_type": 1 00:31:24.346 }, 00:31:24.346 { 00:31:24.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:24.346 "dma_device_type": 2 00:31:24.346 } 00:31:24.346 ], 00:31:24.346 "driver_specific": { 00:31:24.346 "passthru": { 00:31:24.346 "name": "pt1", 00:31:24.346 "base_bdev_name": "malloc1" 00:31:24.346 } 00:31:24.346 } 00:31:24.346 }' 00:31:24.346 11:13:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:24.346 11:13:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:24.346 11:13:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:31:24.346 11:13:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:24.605 11:13:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:24.605 11:13:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:24.605 11:13:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:24.605 11:13:31 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:24.605 11:13:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:24.605 11:13:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:24.605 11:13:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:24.605 11:13:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:24.605 11:13:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:24.605 11:13:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:31:24.605 11:13:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:24.864 11:13:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:24.864 "name": "pt2", 00:31:24.864 "aliases": [ 00:31:24.864 "00000000-0000-0000-0000-000000000002" 00:31:24.864 ], 00:31:24.864 "product_name": "passthru", 00:31:24.864 "block_size": 4096, 00:31:24.864 "num_blocks": 8192, 00:31:24.864 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:24.864 "assigned_rate_limits": { 00:31:24.864 "rw_ios_per_sec": 0, 00:31:24.864 "rw_mbytes_per_sec": 0, 00:31:24.864 "r_mbytes_per_sec": 0, 00:31:24.864 "w_mbytes_per_sec": 0 00:31:24.864 }, 00:31:24.864 "claimed": true, 00:31:24.864 "claim_type": "exclusive_write", 00:31:24.864 "zoned": false, 00:31:24.864 "supported_io_types": { 00:31:24.864 "read": true, 00:31:24.864 "write": true, 00:31:24.864 "unmap": true, 00:31:24.864 "flush": true, 00:31:24.864 "reset": true, 00:31:24.864 "nvme_admin": false, 00:31:24.864 "nvme_io": false, 00:31:24.864 "nvme_io_md": false, 00:31:24.864 "write_zeroes": true, 00:31:24.864 "zcopy": true, 00:31:24.864 "get_zone_info": false, 00:31:24.864 
"zone_management": false, 00:31:24.864 "zone_append": false, 00:31:24.864 "compare": false, 00:31:24.864 "compare_and_write": false, 00:31:24.864 "abort": true, 00:31:24.864 "seek_hole": false, 00:31:24.864 "seek_data": false, 00:31:24.864 "copy": true, 00:31:24.864 "nvme_iov_md": false 00:31:24.864 }, 00:31:24.864 "memory_domains": [ 00:31:24.864 { 00:31:24.864 "dma_device_id": "system", 00:31:24.864 "dma_device_type": 1 00:31:24.864 }, 00:31:24.864 { 00:31:24.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:24.864 "dma_device_type": 2 00:31:24.864 } 00:31:24.864 ], 00:31:24.864 "driver_specific": { 00:31:24.864 "passthru": { 00:31:24.864 "name": "pt2", 00:31:24.864 "base_bdev_name": "malloc2" 00:31:24.864 } 00:31:24.864 } 00:31:24.864 }' 00:31:24.864 11:13:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:25.123 11:13:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:25.123 11:13:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:31:25.123 11:13:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:25.123 11:13:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:25.123 11:13:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:25.123 11:13:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:25.123 11:13:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:25.123 11:13:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:25.123 11:13:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:25.123 11:13:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:25.382 11:13:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:25.382 11:13:32 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:25.382 11:13:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:31:25.382 [2024-07-25 11:13:32.477514] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:25.382 11:13:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=6fb9b41d-7927-4e78-ad37-a708d4cb6bf1 00:31:25.382 11:13:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' -z 6fb9b41d-7927-4e78-ad37-a708d4cb6bf1 ']' 00:31:25.382 11:13:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:25.642 [2024-07-25 11:13:32.705782] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:25.642 [2024-07-25 11:13:32.705816] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:25.642 [2024-07-25 11:13:32.705912] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:25.642 [2024-07-25 11:13:32.705987] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:25.642 [2024-07-25 11:13:32.706014] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:31:25.642 11:13:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:25.642 11:13:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:31:25.900 11:13:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:31:25.900 11:13:32 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:31:25.900 11:13:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:31:25.900 11:13:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:31:26.159 11:13:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:31:26.159 11:13:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:31:26.417 11:13:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:31:26.417 11:13:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:31:26.675 11:13:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:31:26.676 11:13:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:31:26.676 11:13:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:31:26.676 11:13:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:31:26.676 11:13:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:31:26.676 11:13:33 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:26.676 11:13:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:31:26.676 11:13:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:26.676 11:13:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:31:26.676 11:13:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:26.676 11:13:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:31:26.676 11:13:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:31:26.676 11:13:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:31:26.935 [2024-07-25 11:13:33.856844] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:31:26.935 [2024-07-25 11:13:33.859275] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:31:26.935 [2024-07-25 11:13:33.859355] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:31:26.935 [2024-07-25 11:13:33.859415] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:31:26.935 [2024-07-25 11:13:33.859438] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:26.935 [2024-07-25 11:13:33.859454] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:31:26.935 request: 00:31:26.935 { 00:31:26.935 "name": "raid_bdev1", 00:31:26.935 "raid_level": "raid1", 00:31:26.935 "base_bdevs": [ 00:31:26.935 "malloc1", 00:31:26.935 "malloc2" 00:31:26.935 ], 00:31:26.935 "superblock": false, 00:31:26.935 "method": "bdev_raid_create", 00:31:26.935 "req_id": 1 00:31:26.935 } 00:31:26.935 Got JSON-RPC error response 00:31:26.935 response: 00:31:26.935 { 00:31:26.935 "code": -17, 00:31:26.935 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:31:26.935 } 00:31:26.935 11:13:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:31:26.935 11:13:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:26.935 11:13:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:26.935 11:13:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:26.935 11:13:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:26.935 11:13:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:31:27.193 11:13:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:31:27.193 11:13:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:31:27.193 11:13:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:27.193 [2024-07-25 11:13:34.309971] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:27.193 [2024-07-25 11:13:34.310047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:31:27.193 [2024-07-25 11:13:34.310072] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:31:27.193 [2024-07-25 11:13:34.310090] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:27.451 [2024-07-25 11:13:34.312904] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:27.451 [2024-07-25 11:13:34.312944] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:27.451 [2024-07-25 11:13:34.313040] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:31:27.451 [2024-07-25 11:13:34.313159] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:27.451 pt1 00:31:27.451 11:13:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:31:27.451 11:13:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:27.451 11:13:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:27.452 11:13:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:27.452 11:13:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:27.452 11:13:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:27.452 11:13:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:27.452 11:13:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:27.452 11:13:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:27.452 11:13:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:27.452 11:13:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:27.452 11:13:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:27.452 11:13:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:27.452 "name": "raid_bdev1", 00:31:27.452 "uuid": "6fb9b41d-7927-4e78-ad37-a708d4cb6bf1", 00:31:27.452 "strip_size_kb": 0, 00:31:27.452 "state": "configuring", 00:31:27.452 "raid_level": "raid1", 00:31:27.452 "superblock": true, 00:31:27.452 "num_base_bdevs": 2, 00:31:27.452 "num_base_bdevs_discovered": 1, 00:31:27.452 "num_base_bdevs_operational": 2, 00:31:27.452 "base_bdevs_list": [ 00:31:27.452 { 00:31:27.452 "name": "pt1", 00:31:27.452 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:27.452 "is_configured": true, 00:31:27.452 "data_offset": 256, 00:31:27.452 "data_size": 7936 00:31:27.452 }, 00:31:27.452 { 00:31:27.452 "name": null, 00:31:27.452 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:27.452 "is_configured": false, 00:31:27.452 "data_offset": 256, 00:31:27.452 "data_size": 7936 00:31:27.452 } 00:31:27.452 ] 00:31:27.452 }' 00:31:27.452 11:13:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:27.452 11:13:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:28.020 11:13:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:31:28.020 11:13:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:31:28.020 11:13:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:31:28.020 11:13:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:28.278 
[2024-07-25 11:13:35.320710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:28.278 [2024-07-25 11:13:35.320787] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:28.278 [2024-07-25 11:13:35.320816] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:31:28.278 [2024-07-25 11:13:35.320835] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:28.278 [2024-07-25 11:13:35.321460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:28.278 [2024-07-25 11:13:35.321491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:28.278 [2024-07-25 11:13:35.321600] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:28.278 [2024-07-25 11:13:35.321640] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:28.278 [2024-07-25 11:13:35.321810] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:31:28.278 [2024-07-25 11:13:35.321829] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:28.278 [2024-07-25 11:13:35.322154] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:31:28.278 [2024-07-25 11:13:35.322403] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:31:28.278 [2024-07-25 11:13:35.322417] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:31:28.278 [2024-07-25 11:13:35.322617] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:28.278 pt2 00:31:28.278 11:13:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:31:28.278 11:13:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:31:28.278 11:13:35 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:28.278 11:13:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:28.278 11:13:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:28.278 11:13:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:28.278 11:13:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:28.278 11:13:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:28.278 11:13:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:28.278 11:13:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:28.278 11:13:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:28.278 11:13:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:28.278 11:13:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:28.278 11:13:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:28.537 11:13:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:28.537 "name": "raid_bdev1", 00:31:28.537 "uuid": "6fb9b41d-7927-4e78-ad37-a708d4cb6bf1", 00:31:28.537 "strip_size_kb": 0, 00:31:28.537 "state": "online", 00:31:28.537 "raid_level": "raid1", 00:31:28.537 "superblock": true, 00:31:28.537 "num_base_bdevs": 2, 00:31:28.537 "num_base_bdevs_discovered": 2, 00:31:28.537 "num_base_bdevs_operational": 2, 00:31:28.537 "base_bdevs_list": [ 00:31:28.537 { 00:31:28.537 "name": "pt1", 00:31:28.537 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:31:28.537 "is_configured": true, 00:31:28.537 "data_offset": 256, 00:31:28.537 "data_size": 7936 00:31:28.537 }, 00:31:28.537 { 00:31:28.537 "name": "pt2", 00:31:28.537 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:28.537 "is_configured": true, 00:31:28.537 "data_offset": 256, 00:31:28.537 "data_size": 7936 00:31:28.537 } 00:31:28.537 ] 00:31:28.537 }' 00:31:28.537 11:13:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:28.537 11:13:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:29.123 11:13:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:31:29.123 11:13:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:31:29.123 11:13:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:31:29.123 11:13:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:31:29.123 11:13:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:31:29.123 11:13:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:31:29.123 11:13:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:29.123 11:13:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:31:29.382 [2024-07-25 11:13:36.331806] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:29.382 11:13:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:31:29.382 "name": "raid_bdev1", 00:31:29.382 "aliases": [ 00:31:29.382 "6fb9b41d-7927-4e78-ad37-a708d4cb6bf1" 00:31:29.382 ], 00:31:29.382 "product_name": "Raid Volume", 00:31:29.382 "block_size": 4096, 
00:31:29.382 "num_blocks": 7936, 00:31:29.382 "uuid": "6fb9b41d-7927-4e78-ad37-a708d4cb6bf1", 00:31:29.382 "assigned_rate_limits": { 00:31:29.382 "rw_ios_per_sec": 0, 00:31:29.382 "rw_mbytes_per_sec": 0, 00:31:29.382 "r_mbytes_per_sec": 0, 00:31:29.382 "w_mbytes_per_sec": 0 00:31:29.382 }, 00:31:29.382 "claimed": false, 00:31:29.382 "zoned": false, 00:31:29.382 "supported_io_types": { 00:31:29.382 "read": true, 00:31:29.382 "write": true, 00:31:29.382 "unmap": false, 00:31:29.382 "flush": false, 00:31:29.382 "reset": true, 00:31:29.382 "nvme_admin": false, 00:31:29.382 "nvme_io": false, 00:31:29.382 "nvme_io_md": false, 00:31:29.382 "write_zeroes": true, 00:31:29.382 "zcopy": false, 00:31:29.382 "get_zone_info": false, 00:31:29.382 "zone_management": false, 00:31:29.382 "zone_append": false, 00:31:29.382 "compare": false, 00:31:29.382 "compare_and_write": false, 00:31:29.382 "abort": false, 00:31:29.382 "seek_hole": false, 00:31:29.382 "seek_data": false, 00:31:29.382 "copy": false, 00:31:29.382 "nvme_iov_md": false 00:31:29.382 }, 00:31:29.382 "memory_domains": [ 00:31:29.382 { 00:31:29.382 "dma_device_id": "system", 00:31:29.382 "dma_device_type": 1 00:31:29.382 }, 00:31:29.382 { 00:31:29.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:29.382 "dma_device_type": 2 00:31:29.382 }, 00:31:29.382 { 00:31:29.382 "dma_device_id": "system", 00:31:29.382 "dma_device_type": 1 00:31:29.382 }, 00:31:29.382 { 00:31:29.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:29.382 "dma_device_type": 2 00:31:29.382 } 00:31:29.382 ], 00:31:29.382 "driver_specific": { 00:31:29.382 "raid": { 00:31:29.382 "uuid": "6fb9b41d-7927-4e78-ad37-a708d4cb6bf1", 00:31:29.382 "strip_size_kb": 0, 00:31:29.382 "state": "online", 00:31:29.382 "raid_level": "raid1", 00:31:29.382 "superblock": true, 00:31:29.382 "num_base_bdevs": 2, 00:31:29.382 "num_base_bdevs_discovered": 2, 00:31:29.382 "num_base_bdevs_operational": 2, 00:31:29.382 "base_bdevs_list": [ 00:31:29.382 { 00:31:29.382 "name": "pt1", 
00:31:29.382 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:29.382 "is_configured": true, 00:31:29.382 "data_offset": 256, 00:31:29.382 "data_size": 7936 00:31:29.382 }, 00:31:29.382 { 00:31:29.382 "name": "pt2", 00:31:29.382 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:29.382 "is_configured": true, 00:31:29.382 "data_offset": 256, 00:31:29.382 "data_size": 7936 00:31:29.382 } 00:31:29.382 ] 00:31:29.382 } 00:31:29.382 } 00:31:29.382 }' 00:31:29.382 11:13:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:29.382 11:13:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:31:29.382 pt2' 00:31:29.382 11:13:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:29.382 11:13:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:31:29.382 11:13:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:29.640 11:13:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:29.640 "name": "pt1", 00:31:29.640 "aliases": [ 00:31:29.640 "00000000-0000-0000-0000-000000000001" 00:31:29.640 ], 00:31:29.640 "product_name": "passthru", 00:31:29.640 "block_size": 4096, 00:31:29.640 "num_blocks": 8192, 00:31:29.640 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:29.640 "assigned_rate_limits": { 00:31:29.640 "rw_ios_per_sec": 0, 00:31:29.640 "rw_mbytes_per_sec": 0, 00:31:29.640 "r_mbytes_per_sec": 0, 00:31:29.640 "w_mbytes_per_sec": 0 00:31:29.640 }, 00:31:29.640 "claimed": true, 00:31:29.640 "claim_type": "exclusive_write", 00:31:29.640 "zoned": false, 00:31:29.640 "supported_io_types": { 00:31:29.640 "read": true, 00:31:29.640 "write": true, 00:31:29.640 "unmap": true, 00:31:29.640 "flush": 
true, 00:31:29.640 "reset": true, 00:31:29.640 "nvme_admin": false, 00:31:29.640 "nvme_io": false, 00:31:29.640 "nvme_io_md": false, 00:31:29.640 "write_zeroes": true, 00:31:29.640 "zcopy": true, 00:31:29.640 "get_zone_info": false, 00:31:29.640 "zone_management": false, 00:31:29.640 "zone_append": false, 00:31:29.640 "compare": false, 00:31:29.640 "compare_and_write": false, 00:31:29.640 "abort": true, 00:31:29.640 "seek_hole": false, 00:31:29.640 "seek_data": false, 00:31:29.640 "copy": true, 00:31:29.640 "nvme_iov_md": false 00:31:29.640 }, 00:31:29.640 "memory_domains": [ 00:31:29.640 { 00:31:29.640 "dma_device_id": "system", 00:31:29.640 "dma_device_type": 1 00:31:29.640 }, 00:31:29.640 { 00:31:29.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:29.640 "dma_device_type": 2 00:31:29.640 } 00:31:29.640 ], 00:31:29.640 "driver_specific": { 00:31:29.640 "passthru": { 00:31:29.640 "name": "pt1", 00:31:29.640 "base_bdev_name": "malloc1" 00:31:29.640 } 00:31:29.640 } 00:31:29.640 }' 00:31:29.640 11:13:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:29.640 11:13:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:29.640 11:13:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:31:29.640 11:13:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:29.641 11:13:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:29.898 11:13:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:29.898 11:13:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:29.898 11:13:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:29.898 11:13:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:29.898 11:13:36 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:29.898 11:13:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:29.898 11:13:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:29.898 11:13:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:29.898 11:13:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:31:29.898 11:13:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:30.157 11:13:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:30.157 "name": "pt2", 00:31:30.157 "aliases": [ 00:31:30.157 "00000000-0000-0000-0000-000000000002" 00:31:30.157 ], 00:31:30.157 "product_name": "passthru", 00:31:30.157 "block_size": 4096, 00:31:30.157 "num_blocks": 8192, 00:31:30.157 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:30.157 "assigned_rate_limits": { 00:31:30.157 "rw_ios_per_sec": 0, 00:31:30.157 "rw_mbytes_per_sec": 0, 00:31:30.157 "r_mbytes_per_sec": 0, 00:31:30.157 "w_mbytes_per_sec": 0 00:31:30.157 }, 00:31:30.157 "claimed": true, 00:31:30.157 "claim_type": "exclusive_write", 00:31:30.157 "zoned": false, 00:31:30.157 "supported_io_types": { 00:31:30.157 "read": true, 00:31:30.157 "write": true, 00:31:30.157 "unmap": true, 00:31:30.157 "flush": true, 00:31:30.157 "reset": true, 00:31:30.157 "nvme_admin": false, 00:31:30.157 "nvme_io": false, 00:31:30.157 "nvme_io_md": false, 00:31:30.157 "write_zeroes": true, 00:31:30.157 "zcopy": true, 00:31:30.157 "get_zone_info": false, 00:31:30.157 "zone_management": false, 00:31:30.157 "zone_append": false, 00:31:30.157 "compare": false, 00:31:30.157 "compare_and_write": false, 00:31:30.157 "abort": true, 00:31:30.157 "seek_hole": false, 00:31:30.157 "seek_data": false, 00:31:30.157 "copy": true, 
00:31:30.157 "nvme_iov_md": false 00:31:30.157 }, 00:31:30.157 "memory_domains": [ 00:31:30.157 { 00:31:30.157 "dma_device_id": "system", 00:31:30.157 "dma_device_type": 1 00:31:30.157 }, 00:31:30.157 { 00:31:30.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:30.157 "dma_device_type": 2 00:31:30.157 } 00:31:30.157 ], 00:31:30.157 "driver_specific": { 00:31:30.157 "passthru": { 00:31:30.157 "name": "pt2", 00:31:30.157 "base_bdev_name": "malloc2" 00:31:30.157 } 00:31:30.157 } 00:31:30.157 }' 00:31:30.157 11:13:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:30.157 11:13:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:30.157 11:13:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:31:30.157 11:13:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:30.414 11:13:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:30.414 11:13:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:30.414 11:13:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:30.414 11:13:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:30.414 11:13:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:30.414 11:13:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:30.414 11:13:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:30.414 11:13:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:30.414 11:13:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:30.414 11:13:37 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:31:30.672 [2024-07-25 11:13:37.735532] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:30.672 11:13:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@502 -- # '[' 6fb9b41d-7927-4e78-ad37-a708d4cb6bf1 '!=' 6fb9b41d-7927-4e78-ad37-a708d4cb6bf1 ']' 00:31:30.672 11:13:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:31:30.672 11:13:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:31:30.672 11:13:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:31:30.672 11:13:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@508 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:31:30.931 [2024-07-25 11:13:37.971887] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:31:30.931 11:13:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:30.931 11:13:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:30.931 11:13:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:30.931 11:13:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:30.931 11:13:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:30.932 11:13:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:30.932 11:13:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:30.932 11:13:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:30.932 11:13:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:30.932 11:13:37 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:30.932 11:13:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:30.932 11:13:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:31.191 11:13:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:31.191 "name": "raid_bdev1", 00:31:31.191 "uuid": "6fb9b41d-7927-4e78-ad37-a708d4cb6bf1", 00:31:31.191 "strip_size_kb": 0, 00:31:31.191 "state": "online", 00:31:31.191 "raid_level": "raid1", 00:31:31.191 "superblock": true, 00:31:31.191 "num_base_bdevs": 2, 00:31:31.191 "num_base_bdevs_discovered": 1, 00:31:31.191 "num_base_bdevs_operational": 1, 00:31:31.191 "base_bdevs_list": [ 00:31:31.191 { 00:31:31.191 "name": null, 00:31:31.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:31.191 "is_configured": false, 00:31:31.191 "data_offset": 256, 00:31:31.191 "data_size": 7936 00:31:31.191 }, 00:31:31.191 { 00:31:31.191 "name": "pt2", 00:31:31.191 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:31.191 "is_configured": true, 00:31:31.191 "data_offset": 256, 00:31:31.191 "data_size": 7936 00:31:31.191 } 00:31:31.191 ] 00:31:31.191 }' 00:31:31.191 11:13:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:31.191 11:13:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:31.761 11:13:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:32.020 [2024-07-25 11:13:39.010877] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:32.020 [2024-07-25 11:13:39.010914] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state 
changing from online to offline 00:31:32.020 [2024-07-25 11:13:39.011006] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:32.020 [2024-07-25 11:13:39.011065] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:32.020 [2024-07-25 11:13:39.011085] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:31:32.020 11:13:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@515 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:32.020 11:13:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:31:32.280 11:13:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:31:32.280 11:13:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:31:32.280 11:13:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:31:32.280 11:13:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:31:32.280 11:13:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:31:32.539 11:13:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:31:32.539 11:13:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:31:32.539 11:13:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:31:32.539 11:13:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:31:32.539 11:13:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@534 -- # i=1 00:31:32.539 11:13:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@535 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:32.539 [2024-07-25 11:13:39.628491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:32.539 [2024-07-25 11:13:39.628568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:32.539 [2024-07-25 11:13:39.628592] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:31:32.539 [2024-07-25 11:13:39.628611] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:32.539 [2024-07-25 11:13:39.631407] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:32.539 [2024-07-25 11:13:39.631442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:32.539 [2024-07-25 11:13:39.631535] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:32.539 [2024-07-25 11:13:39.631611] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:32.539 [2024-07-25 11:13:39.631761] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:31:32.539 [2024-07-25 11:13:39.631779] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:32.539 [2024-07-25 11:13:39.632072] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:31:32.539 [2024-07-25 11:13:39.632304] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:31:32.539 [2024-07-25 11:13:39.632330] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:31:32.539 [2024-07-25 11:13:39.632539] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:32.539 pt2 00:31:32.539 11:13:39 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:32.539 11:13:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:32.539 11:13:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:32.539 11:13:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:32.539 11:13:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:32.539 11:13:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:32.539 11:13:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:32.539 11:13:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:32.539 11:13:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:32.539 11:13:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:32.539 11:13:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:32.539 11:13:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:32.799 11:13:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:32.799 "name": "raid_bdev1", 00:31:32.799 "uuid": "6fb9b41d-7927-4e78-ad37-a708d4cb6bf1", 00:31:32.799 "strip_size_kb": 0, 00:31:32.799 "state": "online", 00:31:32.799 "raid_level": "raid1", 00:31:32.799 "superblock": true, 00:31:32.799 "num_base_bdevs": 2, 00:31:32.799 "num_base_bdevs_discovered": 1, 00:31:32.799 "num_base_bdevs_operational": 1, 00:31:32.799 "base_bdevs_list": [ 00:31:32.799 { 00:31:32.799 "name": null, 00:31:32.799 "uuid": "00000000-0000-0000-0000-000000000000", 
00:31:32.799 "is_configured": false, 00:31:32.799 "data_offset": 256, 00:31:32.799 "data_size": 7936 00:31:32.799 }, 00:31:32.799 { 00:31:32.799 "name": "pt2", 00:31:32.799 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:32.799 "is_configured": true, 00:31:32.799 "data_offset": 256, 00:31:32.799 "data_size": 7936 00:31:32.799 } 00:31:32.799 ] 00:31:32.799 }' 00:31:32.799 11:13:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:32.799 11:13:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:33.367 11:13:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@541 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:33.626 [2024-07-25 11:13:40.683459] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:33.626 [2024-07-25 11:13:40.683499] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:33.626 [2024-07-25 11:13:40.683587] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:33.626 [2024-07-25 11:13:40.683653] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:33.626 [2024-07-25 11:13:40.683669] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:31:33.626 11:13:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:33.626 11:13:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:31:33.885 11:13:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:31:33.885 11:13:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:31:33.885 11:13:40 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@547 -- # '[' 2 -gt 2 ']' 00:31:33.885 11:13:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:34.144 [2024-07-25 11:13:41.016335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:34.144 [2024-07-25 11:13:41.016399] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:34.144 [2024-07-25 11:13:41.016426] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041d80 00:31:34.144 [2024-07-25 11:13:41.016442] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:34.144 [2024-07-25 11:13:41.019233] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:34.144 [2024-07-25 11:13:41.019264] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:34.144 [2024-07-25 11:13:41.019355] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:31:34.144 [2024-07-25 11:13:41.019433] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:34.144 [2024-07-25 11:13:41.019646] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:31:34.144 [2024-07-25 11:13:41.019664] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:34.144 [2024-07-25 11:13:41.019691] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:31:34.144 [2024-07-25 11:13:41.019767] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:34.144 [2024-07-25 11:13:41.019852] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:31:34.144 [2024-07-25 
11:13:41.019866] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:34.144 [2024-07-25 11:13:41.020170] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:31:34.144 [2024-07-25 11:13:41.020387] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:31:34.144 [2024-07-25 11:13:41.020406] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:31:34.144 [2024-07-25 11:13:41.020646] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:34.144 pt1 00:31:34.144 11:13:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # '[' 2 -gt 2 ']' 00:31:34.144 11:13:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:34.144 11:13:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:34.144 11:13:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:34.144 11:13:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:34.144 11:13:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:34.144 11:13:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:34.144 11:13:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:34.144 11:13:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:34.144 11:13:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:34.144 11:13:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:34.144 11:13:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:34.144 11:13:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:34.144 11:13:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:34.144 "name": "raid_bdev1", 00:31:34.144 "uuid": "6fb9b41d-7927-4e78-ad37-a708d4cb6bf1", 00:31:34.144 "strip_size_kb": 0, 00:31:34.144 "state": "online", 00:31:34.144 "raid_level": "raid1", 00:31:34.144 "superblock": true, 00:31:34.144 "num_base_bdevs": 2, 00:31:34.144 "num_base_bdevs_discovered": 1, 00:31:34.144 "num_base_bdevs_operational": 1, 00:31:34.144 "base_bdevs_list": [ 00:31:34.144 { 00:31:34.144 "name": null, 00:31:34.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:34.144 "is_configured": false, 00:31:34.144 "data_offset": 256, 00:31:34.144 "data_size": 7936 00:31:34.144 }, 00:31:34.144 { 00:31:34.144 "name": "pt2", 00:31:34.144 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:34.144 "is_configured": true, 00:31:34.144 "data_offset": 256, 00:31:34.144 "data_size": 7936 00:31:34.144 } 00:31:34.144 ] 00:31:34.144 }' 00:31:34.144 11:13:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:34.144 11:13:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:34.713 11:13:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@570 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:31:34.713 11:13:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:31:34.972 11:13:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:31:34.972 11:13:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@573 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:34.972 11:13:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:31:35.231 [2024-07-25 11:13:42.115848] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:35.231 11:13:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@573 -- # '[' 6fb9b41d-7927-4e78-ad37-a708d4cb6bf1 '!=' 6fb9b41d-7927-4e78-ad37-a708d4cb6bf1 ']' 00:31:35.231 11:13:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@578 -- # killprocess 3739250 00:31:35.231 11:13:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 3739250 ']' 00:31:35.231 11:13:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 3739250 00:31:35.231 11:13:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:31:35.231 11:13:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:35.231 11:13:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3739250 00:31:35.231 11:13:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:35.231 11:13:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:35.231 11:13:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3739250' 00:31:35.231 killing process with pid 3739250 00:31:35.231 11:13:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 3739250 00:31:35.231 [2024-07-25 11:13:42.220144] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:35.231 [2024-07-25 11:13:42.220253] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:35.231 11:13:42 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@974 -- # wait 3739250 00:31:35.231 [2024-07-25 11:13:42.220310] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:35.231 [2024-07-25 11:13:42.220329] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:31:35.490 [2024-07-25 11:13:42.419261] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:37.397 11:13:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@580 -- # return 0 00:31:37.397 00:31:37.397 real 0m15.959s 00:31:37.397 user 0m27.092s 00:31:37.397 sys 0m2.862s 00:31:37.397 11:13:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:37.397 11:13:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:37.397 ************************************ 00:31:37.397 END TEST raid_superblock_test_4k 00:31:37.397 ************************************ 00:31:37.397 11:13:44 bdev_raid -- bdev/bdev_raid.sh@980 -- # '[' true = true ']' 00:31:37.397 11:13:44 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:31:37.397 11:13:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:31:37.397 11:13:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:37.397 11:13:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:37.397 ************************************ 00:31:37.397 START TEST raid_rebuild_test_sb_4k 00:31:37.397 ************************************ 00:31:37.397 11:13:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:31:37.397 11:13:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:31:37.397 11:13:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:31:37.397 11:13:44 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:31:37.397 11:13:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:31:37.397 11:13:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@588 -- # local verify=true 00:31:37.397 11:13:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:31:37.397 11:13:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:31:37.397 11:13:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:31:37.397 11:13:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:31:37.397 11:13:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:31:37.397 11:13:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:31:37.397 11:13:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:31:37.397 11:13:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:31:37.397 11:13:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:31:37.397 11:13:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:31:37.397 11:13:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:31:37.397 11:13:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # local strip_size 00:31:37.397 11:13:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # local create_arg 00:31:37.397 11:13:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:31:37.397 11:13:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@594 -- # local data_offset 00:31:37.397 11:13:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 
']' 00:31:37.397 11:13:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:31:37.397 11:13:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:31:37.397 11:13:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:31:37.397 11:13:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # raid_pid=3742209 00:31:37.397 11:13:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # waitforlisten 3742209 /var/tmp/spdk-raid.sock 00:31:37.397 11:13:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:31:37.397 11:13:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 3742209 ']' 00:31:37.397 11:13:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:37.397 11:13:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:37.397 11:13:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:37.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:37.397 11:13:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:37.397 11:13:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:37.397 [2024-07-25 11:13:44.328311] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:31:37.397 [2024-07-25 11:13:44.328433] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3742209 ] 00:31:37.397 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:37.397 Zero copy mechanism will not be used. 00:31:37.397 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:31:37.397 EAL: Requested device 0000:3d:01.0 cannot be used 
[... the same pair of messages repeats for each remaining QAT device function: 0000:3d:01.1 through 0000:3d:02.7 and 0000:3f:01.0 through 0000:3f:02.7 ...] 
00:31:37.657 [2024-07-25 11:13:44.541227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:37.916 [2024-07-25 11:13:44.826608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:38.176 [2024-07-25 11:13:45.171474] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:38.176 [2024-07-25 11:13:45.171506] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:38.434 11:13:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:38.434 11:13:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:31:38.434 11:13:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:31:38.434 11:13:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:31:38.693 BaseBdev1_malloc 
00:31:38.693 11:13:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:38.953 [2024-07-25 11:13:45.860393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:38.953 [2024-07-25 11:13:45.860460] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:38.953 [2024-07-25 11:13:45.860490] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:31:38.953 [2024-07-25 11:13:45.860508] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:38.953 [2024-07-25 11:13:45.863259] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:38.953 [2024-07-25 11:13:45.863298] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:38.953 BaseBdev1 00:31:38.953 11:13:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:31:38.953 11:13:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:31:39.212 BaseBdev2_malloc 00:31:39.212 11:13:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:31:39.472 [2024-07-25 11:13:46.369238] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:31:39.472 [2024-07-25 11:13:46.369300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:39.472 [2024-07-25 11:13:46.369328] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:31:39.472 [2024-07-25 
11:13:46.369352] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:39.472 [2024-07-25 11:13:46.372102] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:39.472 [2024-07-25 11:13:46.372148] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:39.472 BaseBdev2 00:31:39.472 11:13:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b spare_malloc 00:31:39.731 spare_malloc 00:31:39.731 11:13:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:31:39.990 spare_delay 00:31:39.990 11:13:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:39.990 [2024-07-25 11:13:47.105596] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:39.990 [2024-07-25 11:13:47.105656] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:39.990 [2024-07-25 11:13:47.105682] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:31:39.990 [2024-07-25 11:13:47.105700] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:39.990 [2024-07-25 11:13:47.108479] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:39.991 [2024-07-25 11:13:47.108517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:40.249 spare 00:31:40.249 11:13:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@627 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:31:40.249 [2024-07-25 11:13:47.330251] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:40.249 [2024-07-25 11:13:47.332590] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:40.249 [2024-07-25 11:13:47.332810] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:31:40.249 [2024-07-25 11:13:47.332831] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:40.249 [2024-07-25 11:13:47.333215] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:31:40.249 [2024-07-25 11:13:47.333478] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:31:40.249 [2024-07-25 11:13:47.333494] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:31:40.249 [2024-07-25 11:13:47.333733] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:40.249 11:13:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:40.249 11:13:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:40.249 11:13:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:40.249 11:13:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:40.249 11:13:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:40.249 11:13:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:40.249 11:13:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:31:40.249 11:13:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:40.249 11:13:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:40.249 11:13:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:40.249 11:13:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:40.249 11:13:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:40.508 11:13:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:40.508 "name": "raid_bdev1", 00:31:40.508 "uuid": "4ce083df-fee1-43d9-92b3-24c861d9ed79", 00:31:40.508 "strip_size_kb": 0, 00:31:40.508 "state": "online", 00:31:40.508 "raid_level": "raid1", 00:31:40.508 "superblock": true, 00:31:40.508 "num_base_bdevs": 2, 00:31:40.508 "num_base_bdevs_discovered": 2, 00:31:40.508 "num_base_bdevs_operational": 2, 00:31:40.508 "base_bdevs_list": [ 00:31:40.508 { 00:31:40.508 "name": "BaseBdev1", 00:31:40.508 "uuid": "b2539969-be61-5086-8792-4ae74ae1227b", 00:31:40.508 "is_configured": true, 00:31:40.508 "data_offset": 256, 00:31:40.508 "data_size": 7936 00:31:40.508 }, 00:31:40.508 { 00:31:40.508 "name": "BaseBdev2", 00:31:40.508 "uuid": "160c5d7f-43ef-59c7-b2f2-045a90f8d98b", 00:31:40.508 "is_configured": true, 00:31:40.508 "data_offset": 256, 00:31:40.508 "data_size": 7936 00:31:40.508 } 00:31:40.508 ] 00:31:40.508 }' 00:31:40.508 11:13:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:40.508 11:13:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:41.075 11:13:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:41.075 11:13:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:31:41.334 [2024-07-25 11:13:48.333251] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:41.334 11:13:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=7936 00:31:41.334 11:13:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:31:41.334 11:13:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:41.592 11:13:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # data_offset=256 00:31:41.592 11:13:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:31:41.592 11:13:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:31:41.593 11:13:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:31:41.593 11:13:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:31:41.593 11:13:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:41.593 11:13:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:31:41.593 11:13:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:41.593 11:13:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:31:41.593 11:13:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:41.593 11:13:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:31:41.593 11:13:48 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:41.593 11:13:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:41.593 11:13:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:31:41.851 [2024-07-25 11:13:48.794203] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:31:41.851 /dev/nbd0 00:31:41.851 11:13:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:41.851 11:13:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:41.851 11:13:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:31:41.851 11:13:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:31:41.851 11:13:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:41.851 11:13:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:41.851 11:13:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:31:41.851 11:13:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:31:41.851 11:13:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:41.851 11:13:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:41.851 11:13:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:41.851 1+0 records in 00:31:41.851 1+0 records out 00:31:41.851 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185823 s, 22.0 MB/s 00:31:41.851 11:13:48 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:41.851 11:13:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:31:41.851 11:13:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:41.851 11:13:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:41.851 11:13:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:31:41.851 11:13:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:41.851 11:13:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:41.851 11:13:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:31:41.851 11:13:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:31:41.851 11:13:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:31:42.785 7936+0 records in 00:31:42.785 7936+0 records out 00:31:42.785 32505856 bytes (33 MB, 31 MiB) copied, 0.783054 s, 41.5 MB/s 00:31:42.785 11:13:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:31:42.785 11:13:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:42.785 11:13:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:42.785 11:13:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:42.785 11:13:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:31:42.785 11:13:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:42.785 11:13:49 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:31:42.785 11:13:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:42.785 [2024-07-25 11:13:49.887487] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:42.785 11:13:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:42.785 11:13:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:42.785 11:13:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:42.785 11:13:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:42.785 11:13:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:42.785 11:13:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:31:42.785 11:13:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:31:42.785 11:13:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:31:43.044 [2024-07-25 11:13:50.104218] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:43.044 11:13:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:43.044 11:13:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:43.044 11:13:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:43.044 11:13:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:43.044 11:13:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # 
local strip_size=0 00:31:43.044 11:13:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:43.044 11:13:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:43.044 11:13:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:43.044 11:13:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:43.044 11:13:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:43.044 11:13:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:43.044 11:13:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:43.303 11:13:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:43.303 "name": "raid_bdev1", 00:31:43.303 "uuid": "4ce083df-fee1-43d9-92b3-24c861d9ed79", 00:31:43.303 "strip_size_kb": 0, 00:31:43.303 "state": "online", 00:31:43.303 "raid_level": "raid1", 00:31:43.303 "superblock": true, 00:31:43.303 "num_base_bdevs": 2, 00:31:43.303 "num_base_bdevs_discovered": 1, 00:31:43.303 "num_base_bdevs_operational": 1, 00:31:43.303 "base_bdevs_list": [ 00:31:43.303 { 00:31:43.303 "name": null, 00:31:43.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:43.303 "is_configured": false, 00:31:43.303 "data_offset": 256, 00:31:43.303 "data_size": 7936 00:31:43.303 }, 00:31:43.303 { 00:31:43.303 "name": "BaseBdev2", 00:31:43.303 "uuid": "160c5d7f-43ef-59c7-b2f2-045a90f8d98b", 00:31:43.303 "is_configured": true, 00:31:43.303 "data_offset": 256, 00:31:43.303 "data_size": 7936 00:31:43.303 } 00:31:43.303 ] 00:31:43.303 }' 00:31:43.303 11:13:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:43.303 11:13:50 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:43.870 11:13:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:44.129 [2024-07-25 11:13:51.078834] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:44.129 [2024-07-25 11:13:51.105380] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001a4410 00:31:44.129 [2024-07-25 11:13:51.107695] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:44.129 11:13:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # sleep 1 00:31:45.102 11:13:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:45.102 11:13:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:45.102 11:13:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:45.102 11:13:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:45.102 11:13:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:45.102 11:13:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:45.102 11:13:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:45.370 11:13:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:45.370 "name": "raid_bdev1", 00:31:45.370 "uuid": "4ce083df-fee1-43d9-92b3-24c861d9ed79", 00:31:45.370 "strip_size_kb": 0, 00:31:45.370 "state": "online", 00:31:45.370 "raid_level": "raid1", 00:31:45.370 
"superblock": true, 00:31:45.370 "num_base_bdevs": 2, 00:31:45.370 "num_base_bdevs_discovered": 2, 00:31:45.370 "num_base_bdevs_operational": 2, 00:31:45.370 "process": { 00:31:45.370 "type": "rebuild", 00:31:45.370 "target": "spare", 00:31:45.370 "progress": { 00:31:45.370 "blocks": 3072, 00:31:45.370 "percent": 38 00:31:45.370 } 00:31:45.370 }, 00:31:45.370 "base_bdevs_list": [ 00:31:45.370 { 00:31:45.370 "name": "spare", 00:31:45.370 "uuid": "3e16ae89-0b10-5589-9309-f876cf2f4cf9", 00:31:45.370 "is_configured": true, 00:31:45.370 "data_offset": 256, 00:31:45.370 "data_size": 7936 00:31:45.370 }, 00:31:45.370 { 00:31:45.370 "name": "BaseBdev2", 00:31:45.370 "uuid": "160c5d7f-43ef-59c7-b2f2-045a90f8d98b", 00:31:45.370 "is_configured": true, 00:31:45.370 "data_offset": 256, 00:31:45.370 "data_size": 7936 00:31:45.370 } 00:31:45.370 ] 00:31:45.370 }' 00:31:45.370 11:13:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:45.370 11:13:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:45.370 11:13:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:45.370 11:13:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:45.370 11:13:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:31:45.628 [2024-07-25 11:13:52.628613] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:45.628 [2024-07-25 11:13:52.720837] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:45.628 [2024-07-25 11:13:52.720898] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:45.628 [2024-07-25 11:13:52.720919] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:31:45.628 [2024-07-25 11:13:52.720942] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:45.886 11:13:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:45.886 11:13:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:45.886 11:13:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:45.886 11:13:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:45.886 11:13:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:45.886 11:13:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:45.886 11:13:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:45.886 11:13:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:45.886 11:13:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:45.886 11:13:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:45.886 11:13:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:45.886 11:13:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:46.145 11:13:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:46.145 "name": "raid_bdev1", 00:31:46.145 "uuid": "4ce083df-fee1-43d9-92b3-24c861d9ed79", 00:31:46.145 "strip_size_kb": 0, 00:31:46.145 "state": "online", 00:31:46.145 "raid_level": "raid1", 00:31:46.145 "superblock": true, 00:31:46.145 "num_base_bdevs": 2, 
00:31:46.145 "num_base_bdevs_discovered": 1, 00:31:46.145 "num_base_bdevs_operational": 1, 00:31:46.145 "base_bdevs_list": [ 00:31:46.145 { 00:31:46.145 "name": null, 00:31:46.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:46.145 "is_configured": false, 00:31:46.145 "data_offset": 256, 00:31:46.145 "data_size": 7936 00:31:46.145 }, 00:31:46.145 { 00:31:46.145 "name": "BaseBdev2", 00:31:46.145 "uuid": "160c5d7f-43ef-59c7-b2f2-045a90f8d98b", 00:31:46.145 "is_configured": true, 00:31:46.145 "data_offset": 256, 00:31:46.145 "data_size": 7936 00:31:46.145 } 00:31:46.145 ] 00:31:46.145 }' 00:31:46.145 11:13:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:46.145 11:13:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:46.711 11:13:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:46.711 11:13:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:46.711 11:13:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:46.711 11:13:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:46.711 11:13:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:46.711 11:13:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:46.711 11:13:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:46.711 11:13:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:46.711 "name": "raid_bdev1", 00:31:46.711 "uuid": "4ce083df-fee1-43d9-92b3-24c861d9ed79", 00:31:46.711 "strip_size_kb": 0, 00:31:46.711 "state": "online", 00:31:46.711 "raid_level": 
"raid1", 00:31:46.711 "superblock": true, 00:31:46.711 "num_base_bdevs": 2, 00:31:46.711 "num_base_bdevs_discovered": 1, 00:31:46.711 "num_base_bdevs_operational": 1, 00:31:46.711 "base_bdevs_list": [ 00:31:46.711 { 00:31:46.711 "name": null, 00:31:46.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:46.711 "is_configured": false, 00:31:46.711 "data_offset": 256, 00:31:46.711 "data_size": 7936 00:31:46.711 }, 00:31:46.711 { 00:31:46.711 "name": "BaseBdev2", 00:31:46.711 "uuid": "160c5d7f-43ef-59c7-b2f2-045a90f8d98b", 00:31:46.711 "is_configured": true, 00:31:46.711 "data_offset": 256, 00:31:46.711 "data_size": 7936 00:31:46.711 } 00:31:46.711 ] 00:31:46.711 }' 00:31:46.711 11:13:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:46.711 11:13:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:46.711 11:13:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:46.711 11:13:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:46.711 11:13:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:46.970 [2024-07-25 11:13:53.933420] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:46.970 [2024-07-25 11:13:53.959182] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001a44e0 00:31:46.970 [2024-07-25 11:13:53.961472] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:46.970 11:13:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@678 -- # sleep 1 00:31:47.903 11:13:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:47.903 11:13:54 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:47.903 11:13:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:47.904 11:13:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:47.904 11:13:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:47.904 11:13:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:47.904 11:13:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:48.162 11:13:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:48.162 "name": "raid_bdev1", 00:31:48.162 "uuid": "4ce083df-fee1-43d9-92b3-24c861d9ed79", 00:31:48.162 "strip_size_kb": 0, 00:31:48.162 "state": "online", 00:31:48.162 "raid_level": "raid1", 00:31:48.162 "superblock": true, 00:31:48.162 "num_base_bdevs": 2, 00:31:48.162 "num_base_bdevs_discovered": 2, 00:31:48.162 "num_base_bdevs_operational": 2, 00:31:48.162 "process": { 00:31:48.162 "type": "rebuild", 00:31:48.162 "target": "spare", 00:31:48.162 "progress": { 00:31:48.162 "blocks": 3072, 00:31:48.162 "percent": 38 00:31:48.162 } 00:31:48.162 }, 00:31:48.162 "base_bdevs_list": [ 00:31:48.162 { 00:31:48.162 "name": "spare", 00:31:48.162 "uuid": "3e16ae89-0b10-5589-9309-f876cf2f4cf9", 00:31:48.162 "is_configured": true, 00:31:48.162 "data_offset": 256, 00:31:48.162 "data_size": 7936 00:31:48.162 }, 00:31:48.162 { 00:31:48.162 "name": "BaseBdev2", 00:31:48.162 "uuid": "160c5d7f-43ef-59c7-b2f2-045a90f8d98b", 00:31:48.162 "is_configured": true, 00:31:48.162 "data_offset": 256, 00:31:48.162 "data_size": 7936 00:31:48.162 } 00:31:48.162 ] 00:31:48.162 }' 00:31:48.162 11:13:55 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:48.162 11:13:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:48.162 11:13:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:48.162 11:13:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:48.162 11:13:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:31:48.162 11:13:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:31:48.162 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:31:48.162 11:13:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:31:48.162 11:13:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:31:48.162 11:13:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:31:48.162 11:13:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@721 -- # local timeout=1119 00:31:48.162 11:13:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:31:48.162 11:13:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:48.162 11:13:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:48.162 11:13:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:48.162 11:13:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:48.162 11:13:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:48.162 11:13:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:48.162 11:13:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:48.729 11:13:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:48.729 "name": "raid_bdev1", 00:31:48.729 "uuid": "4ce083df-fee1-43d9-92b3-24c861d9ed79", 00:31:48.729 "strip_size_kb": 0, 00:31:48.729 "state": "online", 00:31:48.729 "raid_level": "raid1", 00:31:48.729 "superblock": true, 00:31:48.729 "num_base_bdevs": 2, 00:31:48.729 "num_base_bdevs_discovered": 2, 00:31:48.729 "num_base_bdevs_operational": 2, 00:31:48.729 "process": { 00:31:48.729 "type": "rebuild", 00:31:48.729 "target": "spare", 00:31:48.729 "progress": { 00:31:48.729 "blocks": 4352, 00:31:48.729 "percent": 54 00:31:48.729 } 00:31:48.729 }, 00:31:48.729 "base_bdevs_list": [ 00:31:48.729 { 00:31:48.729 "name": "spare", 00:31:48.729 "uuid": "3e16ae89-0b10-5589-9309-f876cf2f4cf9", 00:31:48.729 "is_configured": true, 00:31:48.729 "data_offset": 256, 00:31:48.729 "data_size": 7936 00:31:48.729 }, 00:31:48.729 { 00:31:48.729 "name": "BaseBdev2", 00:31:48.729 "uuid": "160c5d7f-43ef-59c7-b2f2-045a90f8d98b", 00:31:48.729 "is_configured": true, 00:31:48.729 "data_offset": 256, 00:31:48.729 "data_size": 7936 00:31:48.729 } 00:31:48.729 ] 00:31:48.729 }' 00:31:48.729 11:13:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:48.729 11:13:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:48.729 11:13:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:48.988 11:13:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:48.988 11:13:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@726 -- # sleep 1 00:31:49.922 
11:13:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:31:49.922 11:13:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:49.922 11:13:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:49.922 11:13:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:49.922 11:13:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:49.922 11:13:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:49.922 11:13:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:49.923 11:13:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:50.181 [2024-07-25 11:13:57.086576] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:31:50.181 [2024-07-25 11:13:57.086652] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:31:50.181 [2024-07-25 11:13:57.086753] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:50.181 11:13:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:50.181 "name": "raid_bdev1", 00:31:50.181 "uuid": "4ce083df-fee1-43d9-92b3-24c861d9ed79", 00:31:50.181 "strip_size_kb": 0, 00:31:50.181 "state": "online", 00:31:50.181 "raid_level": "raid1", 00:31:50.181 "superblock": true, 00:31:50.181 "num_base_bdevs": 2, 00:31:50.181 "num_base_bdevs_discovered": 2, 00:31:50.181 "num_base_bdevs_operational": 2, 00:31:50.181 "process": { 00:31:50.181 "type": "rebuild", 00:31:50.181 "target": "spare", 00:31:50.181 "progress": { 00:31:50.181 "blocks": 7680, 
00:31:50.181 "percent": 96 00:31:50.181 } 00:31:50.181 }, 00:31:50.181 "base_bdevs_list": [ 00:31:50.181 { 00:31:50.181 "name": "spare", 00:31:50.181 "uuid": "3e16ae89-0b10-5589-9309-f876cf2f4cf9", 00:31:50.181 "is_configured": true, 00:31:50.181 "data_offset": 256, 00:31:50.181 "data_size": 7936 00:31:50.181 }, 00:31:50.181 { 00:31:50.181 "name": "BaseBdev2", 00:31:50.181 "uuid": "160c5d7f-43ef-59c7-b2f2-045a90f8d98b", 00:31:50.181 "is_configured": true, 00:31:50.181 "data_offset": 256, 00:31:50.181 "data_size": 7936 00:31:50.181 } 00:31:50.181 ] 00:31:50.181 }' 00:31:50.181 11:13:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:50.181 11:13:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:50.181 11:13:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:50.181 11:13:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:50.181 11:13:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@726 -- # sleep 1 00:31:51.115 11:13:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:31:51.115 11:13:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:51.115 11:13:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:51.115 11:13:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:51.115 11:13:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:51.115 11:13:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:51.115 11:13:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:31:51.115 11:13:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:51.373 11:13:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:51.373 "name": "raid_bdev1", 00:31:51.373 "uuid": "4ce083df-fee1-43d9-92b3-24c861d9ed79", 00:31:51.373 "strip_size_kb": 0, 00:31:51.373 "state": "online", 00:31:51.373 "raid_level": "raid1", 00:31:51.373 "superblock": true, 00:31:51.373 "num_base_bdevs": 2, 00:31:51.373 "num_base_bdevs_discovered": 2, 00:31:51.373 "num_base_bdevs_operational": 2, 00:31:51.373 "base_bdevs_list": [ 00:31:51.373 { 00:31:51.373 "name": "spare", 00:31:51.373 "uuid": "3e16ae89-0b10-5589-9309-f876cf2f4cf9", 00:31:51.373 "is_configured": true, 00:31:51.373 "data_offset": 256, 00:31:51.373 "data_size": 7936 00:31:51.373 }, 00:31:51.373 { 00:31:51.373 "name": "BaseBdev2", 00:31:51.373 "uuid": "160c5d7f-43ef-59c7-b2f2-045a90f8d98b", 00:31:51.373 "is_configured": true, 00:31:51.373 "data_offset": 256, 00:31:51.373 "data_size": 7936 00:31:51.373 } 00:31:51.373 ] 00:31:51.373 }' 00:31:51.373 11:13:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:51.373 11:13:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:31:51.373 11:13:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:51.373 11:13:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:31:51.373 11:13:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@724 -- # break 00:31:51.373 11:13:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:51.373 11:13:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:51.373 11:13:58 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:51.373 11:13:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:51.373 11:13:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:51.373 11:13:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:51.373 11:13:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:51.632 11:13:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:51.632 "name": "raid_bdev1", 00:31:51.632 "uuid": "4ce083df-fee1-43d9-92b3-24c861d9ed79", 00:31:51.632 "strip_size_kb": 0, 00:31:51.632 "state": "online", 00:31:51.632 "raid_level": "raid1", 00:31:51.632 "superblock": true, 00:31:51.632 "num_base_bdevs": 2, 00:31:51.632 "num_base_bdevs_discovered": 2, 00:31:51.632 "num_base_bdevs_operational": 2, 00:31:51.632 "base_bdevs_list": [ 00:31:51.632 { 00:31:51.632 "name": "spare", 00:31:51.632 "uuid": "3e16ae89-0b10-5589-9309-f876cf2f4cf9", 00:31:51.632 "is_configured": true, 00:31:51.632 "data_offset": 256, 00:31:51.632 "data_size": 7936 00:31:51.632 }, 00:31:51.632 { 00:31:51.632 "name": "BaseBdev2", 00:31:51.632 "uuid": "160c5d7f-43ef-59c7-b2f2-045a90f8d98b", 00:31:51.632 "is_configured": true, 00:31:51.632 "data_offset": 256, 00:31:51.632 "data_size": 7936 00:31:51.632 } 00:31:51.632 ] 00:31:51.632 }' 00:31:51.632 11:13:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:51.632 11:13:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:51.632 11:13:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:51.632 11:13:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ 
none == \n\o\n\e ]] 00:31:51.632 11:13:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:51.632 11:13:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:51.632 11:13:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:51.632 11:13:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:51.632 11:13:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:51.632 11:13:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:51.632 11:13:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:51.632 11:13:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:51.632 11:13:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:51.632 11:13:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:51.632 11:13:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:51.632 11:13:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:52.199 11:13:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:52.199 "name": "raid_bdev1", 00:31:52.199 "uuid": "4ce083df-fee1-43d9-92b3-24c861d9ed79", 00:31:52.199 "strip_size_kb": 0, 00:31:52.199 "state": "online", 00:31:52.199 "raid_level": "raid1", 00:31:52.199 "superblock": true, 00:31:52.199 "num_base_bdevs": 2, 00:31:52.199 "num_base_bdevs_discovered": 2, 00:31:52.199 "num_base_bdevs_operational": 2, 00:31:52.199 "base_bdevs_list": [ 00:31:52.199 { 
00:31:52.199 "name": "spare", 00:31:52.199 "uuid": "3e16ae89-0b10-5589-9309-f876cf2f4cf9", 00:31:52.199 "is_configured": true, 00:31:52.199 "data_offset": 256, 00:31:52.199 "data_size": 7936 00:31:52.199 }, 00:31:52.199 { 00:31:52.199 "name": "BaseBdev2", 00:31:52.199 "uuid": "160c5d7f-43ef-59c7-b2f2-045a90f8d98b", 00:31:52.199 "is_configured": true, 00:31:52.199 "data_offset": 256, 00:31:52.199 "data_size": 7936 00:31:52.199 } 00:31:52.199 ] 00:31:52.199 }' 00:31:52.199 11:13:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:52.199 11:13:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:52.767 11:13:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:53.027 [2024-07-25 11:13:59.968325] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:53.027 [2024-07-25 11:13:59.968358] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:53.027 [2024-07-25 11:13:59.968444] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:53.027 [2024-07-25 11:13:59.968526] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:53.027 [2024-07-25 11:13:59.968542] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:31:53.027 11:13:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:53.027 11:13:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@735 -- # jq length 00:31:53.286 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:31:53.286 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:31:53.286 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:31:53.286 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:31:53.286 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:53.286 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:31:53.286 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:53.286 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:53.286 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:53.286 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:31:53.286 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:53.286 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:53.286 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:31:53.544 /dev/nbd0 00:31:53.544 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:53.544 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:53.544 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:31:53.544 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:31:53.544 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:53.544 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:53.544 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:31:53.544 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:31:53.544 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:53.544 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:53.544 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:53.544 1+0 records in 00:31:53.544 1+0 records out 00:31:53.544 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026338 s, 15.6 MB/s 00:31:53.544 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:53.544 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:31:53.544 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:53.544 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:53.544 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:31:53.544 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:53.544 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:53.544 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:31:53.801 /dev/nbd1 00:31:53.801 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # 
basename /dev/nbd1 00:31:53.801 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:53.801 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:31:53.801 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:31:53.801 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:53.801 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:53.801 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:31:53.801 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:31:53.801 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:53.801 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:53.801 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:53.801 1+0 records in 00:31:53.801 1+0 records out 00:31:53.801 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321105 s, 12.8 MB/s 00:31:53.801 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:53.801 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:31:53.801 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:31:53.801 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:53.801 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 
00:31:53.801 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:53.801 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:53.801 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:31:54.060 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:31:54.060 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:54.060 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:54.060 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:54.060 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:31:54.060 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:54.060 11:14:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:31:54.318 11:14:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:54.318 11:14:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:54.318 11:14:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:54.318 11:14:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:54.318 11:14:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:54.318 11:14:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:54.318 11:14:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:31:54.318 11:14:01 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/nbd_common.sh@45 -- # return 0 00:31:54.318 11:14:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:54.318 11:14:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:31:54.576 11:14:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:54.576 11:14:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:54.576 11:14:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:54.576 11:14:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:54.576 11:14:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:54.576 11:14:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:54.576 11:14:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:31:54.576 11:14:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:31:54.576 11:14:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:31:54.576 11:14:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@760 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:54.834 11:14:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:54.834 [2024-07-25 11:14:01.932550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:54.834 [2024-07-25 11:14:01.932609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:54.834 [2024-07-25 11:14:01.932637] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042f80 00:31:54.834 [2024-07-25 11:14:01.932652] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:54.834 [2024-07-25 11:14:01.935442] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:54.834 [2024-07-25 11:14:01.935476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:54.835 [2024-07-25 11:14:01.935575] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:31:54.835 [2024-07-25 11:14:01.935647] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:54.835 [2024-07-25 11:14:01.935838] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:54.835 spare 00:31:54.835 11:14:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:55.093 11:14:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:55.093 11:14:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:55.093 11:14:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:55.093 11:14:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:55.093 11:14:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:55.093 11:14:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:55.093 11:14:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:55.093 11:14:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:55.093 11:14:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:55.093 11:14:01 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:55.093 11:14:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:55.093 [2024-07-25 11:14:02.036179] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:31:55.093 [2024-07-25 11:14:02.036211] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:55.093 [2024-07-25 11:14:02.036542] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c9390 00:31:55.093 [2024-07-25 11:14:02.036824] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:31:55.093 [2024-07-25 11:14:02.036839] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:31:55.093 [2024-07-25 11:14:02.037061] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:55.093 11:14:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:55.093 "name": "raid_bdev1", 00:31:55.093 "uuid": "4ce083df-fee1-43d9-92b3-24c861d9ed79", 00:31:55.093 "strip_size_kb": 0, 00:31:55.093 "state": "online", 00:31:55.093 "raid_level": "raid1", 00:31:55.093 "superblock": true, 00:31:55.093 "num_base_bdevs": 2, 00:31:55.093 "num_base_bdevs_discovered": 2, 00:31:55.093 "num_base_bdevs_operational": 2, 00:31:55.093 "base_bdevs_list": [ 00:31:55.093 { 00:31:55.093 "name": "spare", 00:31:55.093 "uuid": "3e16ae89-0b10-5589-9309-f876cf2f4cf9", 00:31:55.093 "is_configured": true, 00:31:55.093 "data_offset": 256, 00:31:55.093 "data_size": 7936 00:31:55.093 }, 00:31:55.093 { 00:31:55.093 "name": "BaseBdev2", 00:31:55.093 "uuid": "160c5d7f-43ef-59c7-b2f2-045a90f8d98b", 00:31:55.093 "is_configured": true, 00:31:55.093 "data_offset": 256, 00:31:55.093 
"data_size": 7936 00:31:55.093 } 00:31:55.093 ] 00:31:55.093 }' 00:31:55.093 11:14:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:55.093 11:14:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:56.027 11:14:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:56.027 11:14:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:56.027 11:14:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:56.027 11:14:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:56.027 11:14:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:56.027 11:14:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:56.027 11:14:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:56.027 11:14:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:56.027 "name": "raid_bdev1", 00:31:56.027 "uuid": "4ce083df-fee1-43d9-92b3-24c861d9ed79", 00:31:56.027 "strip_size_kb": 0, 00:31:56.027 "state": "online", 00:31:56.027 "raid_level": "raid1", 00:31:56.027 "superblock": true, 00:31:56.027 "num_base_bdevs": 2, 00:31:56.027 "num_base_bdevs_discovered": 2, 00:31:56.027 "num_base_bdevs_operational": 2, 00:31:56.027 "base_bdevs_list": [ 00:31:56.027 { 00:31:56.027 "name": "spare", 00:31:56.027 "uuid": "3e16ae89-0b10-5589-9309-f876cf2f4cf9", 00:31:56.027 "is_configured": true, 00:31:56.027 "data_offset": 256, 00:31:56.027 "data_size": 7936 00:31:56.027 }, 00:31:56.027 { 00:31:56.027 "name": "BaseBdev2", 00:31:56.027 "uuid": "160c5d7f-43ef-59c7-b2f2-045a90f8d98b", 
00:31:56.027 "is_configured": true, 00:31:56.027 "data_offset": 256, 00:31:56.027 "data_size": 7936 00:31:56.027 } 00:31:56.027 ] 00:31:56.027 }' 00:31:56.027 11:14:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:56.027 11:14:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:56.027 11:14:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:56.027 11:14:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:56.027 11:14:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:56.027 11:14:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:31:56.285 11:14:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:31:56.285 11:14:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:31:56.542 [2024-07-25 11:14:03.565311] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:56.542 11:14:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:56.542 11:14:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:56.542 11:14:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:56.543 11:14:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:56.543 11:14:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:56.543 11:14:03 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:56.543 11:14:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:56.543 11:14:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:56.543 11:14:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:56.543 11:14:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:56.543 11:14:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:56.543 11:14:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:56.801 11:14:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:56.801 "name": "raid_bdev1", 00:31:56.801 "uuid": "4ce083df-fee1-43d9-92b3-24c861d9ed79", 00:31:56.801 "strip_size_kb": 0, 00:31:56.801 "state": "online", 00:31:56.801 "raid_level": "raid1", 00:31:56.801 "superblock": true, 00:31:56.801 "num_base_bdevs": 2, 00:31:56.801 "num_base_bdevs_discovered": 1, 00:31:56.801 "num_base_bdevs_operational": 1, 00:31:56.801 "base_bdevs_list": [ 00:31:56.801 { 00:31:56.801 "name": null, 00:31:56.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:56.801 "is_configured": false, 00:31:56.801 "data_offset": 256, 00:31:56.801 "data_size": 7936 00:31:56.801 }, 00:31:56.801 { 00:31:56.801 "name": "BaseBdev2", 00:31:56.801 "uuid": "160c5d7f-43ef-59c7-b2f2-045a90f8d98b", 00:31:56.801 "is_configured": true, 00:31:56.801 "data_offset": 256, 00:31:56.801 "data_size": 7936 00:31:56.801 } 00:31:56.801 ] 00:31:56.801 }' 00:31:56.801 11:14:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:56.801 11:14:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:31:57.367 11:14:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:57.367 [2024-07-25 11:14:04.467755] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:57.367 [2024-07-25 11:14:04.467962] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:31:57.367 [2024-07-25 11:14:04.467987] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:31:57.367 [2024-07-25 11:14:04.468025] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:57.624 [2024-07-25 11:14:04.490828] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c9460 00:31:57.624 [2024-07-25 11:14:04.493123] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:57.624 11:14:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@771 -- # sleep 1 00:31:58.579 11:14:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:58.579 11:14:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:58.579 11:14:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:58.579 11:14:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:58.579 11:14:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:58.579 11:14:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:58.579 11:14:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:31:58.579 11:14:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:58.579 "name": "raid_bdev1", 00:31:58.579 "uuid": "4ce083df-fee1-43d9-92b3-24c861d9ed79", 00:31:58.579 "strip_size_kb": 0, 00:31:58.579 "state": "online", 00:31:58.579 "raid_level": "raid1", 00:31:58.579 "superblock": true, 00:31:58.579 "num_base_bdevs": 2, 00:31:58.579 "num_base_bdevs_discovered": 2, 00:31:58.579 "num_base_bdevs_operational": 2, 00:31:58.579 "process": { 00:31:58.579 "type": "rebuild", 00:31:58.579 "target": "spare", 00:31:58.579 "progress": { 00:31:58.579 "blocks": 2816, 00:31:58.579 "percent": 35 00:31:58.579 } 00:31:58.579 }, 00:31:58.579 "base_bdevs_list": [ 00:31:58.579 { 00:31:58.579 "name": "spare", 00:31:58.579 "uuid": "3e16ae89-0b10-5589-9309-f876cf2f4cf9", 00:31:58.579 "is_configured": true, 00:31:58.579 "data_offset": 256, 00:31:58.579 "data_size": 7936 00:31:58.579 }, 00:31:58.579 { 00:31:58.579 "name": "BaseBdev2", 00:31:58.579 "uuid": "160c5d7f-43ef-59c7-b2f2-045a90f8d98b", 00:31:58.579 "is_configured": true, 00:31:58.579 "data_offset": 256, 00:31:58.579 "data_size": 7936 00:31:58.579 } 00:31:58.579 ] 00:31:58.579 }' 00:31:58.579 11:14:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:58.853 11:14:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:58.853 11:14:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:58.853 11:14:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:58.853 11:14:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:58.853 [2024-07-25 11:14:05.950269] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:31:59.109 [2024-07-25 11:14:06.005312] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:59.109 [2024-07-25 11:14:06.005377] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:59.109 [2024-07-25 11:14:06.005398] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:59.109 [2024-07-25 11:14:06.005413] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:59.109 11:14:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:59.109 11:14:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:59.109 11:14:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:59.109 11:14:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:59.109 11:14:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:59.109 11:14:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:59.109 11:14:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:59.109 11:14:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:59.109 11:14:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:59.109 11:14:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:59.109 11:14:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:59.109 11:14:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:31:59.365 11:14:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:59.365 "name": "raid_bdev1", 00:31:59.365 "uuid": "4ce083df-fee1-43d9-92b3-24c861d9ed79", 00:31:59.365 "strip_size_kb": 0, 00:31:59.365 "state": "online", 00:31:59.365 "raid_level": "raid1", 00:31:59.365 "superblock": true, 00:31:59.365 "num_base_bdevs": 2, 00:31:59.365 "num_base_bdevs_discovered": 1, 00:31:59.365 "num_base_bdevs_operational": 1, 00:31:59.365 "base_bdevs_list": [ 00:31:59.365 { 00:31:59.365 "name": null, 00:31:59.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:59.365 "is_configured": false, 00:31:59.365 "data_offset": 256, 00:31:59.365 "data_size": 7936 00:31:59.365 }, 00:31:59.365 { 00:31:59.365 "name": "BaseBdev2", 00:31:59.365 "uuid": "160c5d7f-43ef-59c7-b2f2-045a90f8d98b", 00:31:59.365 "is_configured": true, 00:31:59.365 "data_offset": 256, 00:31:59.365 "data_size": 7936 00:31:59.365 } 00:31:59.365 ] 00:31:59.365 }' 00:31:59.365 11:14:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:59.365 11:14:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:59.930 11:14:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:59.930 [2024-07-25 11:14:06.993756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:59.930 [2024-07-25 11:14:06.993824] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:59.930 [2024-07-25 11:14:06.993851] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043880 00:31:59.930 [2024-07-25 11:14:06.993869] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:59.930 [2024-07-25 11:14:06.994472] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:31:59.930 [2024-07-25 11:14:06.994506] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:59.930 [2024-07-25 11:14:06.994610] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:31:59.930 [2024-07-25 11:14:06.994630] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:31:59.930 [2024-07-25 11:14:06.994646] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:31:59.930 [2024-07-25 11:14:06.994675] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:59.930 [2024-07-25 11:14:07.018929] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c9530 00:31:59.930 spare 00:31:59.930 [2024-07-25 11:14:07.021235] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:59.930 11:14:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # sleep 1 00:32:01.303 11:14:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:01.303 11:14:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:01.303 11:14:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:01.303 11:14:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:01.303 11:14:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:01.303 11:14:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:01.303 11:14:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:01.303 
11:14:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:01.303 "name": "raid_bdev1", 00:32:01.303 "uuid": "4ce083df-fee1-43d9-92b3-24c861d9ed79", 00:32:01.303 "strip_size_kb": 0, 00:32:01.303 "state": "online", 00:32:01.303 "raid_level": "raid1", 00:32:01.303 "superblock": true, 00:32:01.303 "num_base_bdevs": 2, 00:32:01.303 "num_base_bdevs_discovered": 2, 00:32:01.303 "num_base_bdevs_operational": 2, 00:32:01.303 "process": { 00:32:01.303 "type": "rebuild", 00:32:01.303 "target": "spare", 00:32:01.303 "progress": { 00:32:01.303 "blocks": 3072, 00:32:01.303 "percent": 38 00:32:01.303 } 00:32:01.303 }, 00:32:01.303 "base_bdevs_list": [ 00:32:01.303 { 00:32:01.303 "name": "spare", 00:32:01.303 "uuid": "3e16ae89-0b10-5589-9309-f876cf2f4cf9", 00:32:01.303 "is_configured": true, 00:32:01.303 "data_offset": 256, 00:32:01.303 "data_size": 7936 00:32:01.303 }, 00:32:01.303 { 00:32:01.303 "name": "BaseBdev2", 00:32:01.303 "uuid": "160c5d7f-43ef-59c7-b2f2-045a90f8d98b", 00:32:01.303 "is_configured": true, 00:32:01.303 "data_offset": 256, 00:32:01.303 "data_size": 7936 00:32:01.303 } 00:32:01.303 ] 00:32:01.303 }' 00:32:01.303 11:14:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:01.303 11:14:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:01.303 11:14:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:01.303 11:14:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:01.303 11:14:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@782 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:32:01.563 [2024-07-25 11:14:08.566674] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:01.563 [2024-07-25 11:14:08.634203] 
bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:01.563 [2024-07-25 11:14:08.634261] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:01.563 [2024-07-25 11:14:08.634285] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:01.563 [2024-07-25 11:14:08.634301] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:01.821 11:14:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:01.821 11:14:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:01.821 11:14:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:01.821 11:14:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:01.821 11:14:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:01.821 11:14:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:01.821 11:14:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:01.821 11:14:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:01.821 11:14:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:01.821 11:14:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:01.821 11:14:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:01.821 11:14:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:01.821 11:14:08 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:01.821 "name": "raid_bdev1", 00:32:01.821 "uuid": "4ce083df-fee1-43d9-92b3-24c861d9ed79", 00:32:01.821 "strip_size_kb": 0, 00:32:01.821 "state": "online", 00:32:01.821 "raid_level": "raid1", 00:32:01.821 "superblock": true, 00:32:01.821 "num_base_bdevs": 2, 00:32:01.821 "num_base_bdevs_discovered": 1, 00:32:01.821 "num_base_bdevs_operational": 1, 00:32:01.821 "base_bdevs_list": [ 00:32:01.821 { 00:32:01.821 "name": null, 00:32:01.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:01.821 "is_configured": false, 00:32:01.821 "data_offset": 256, 00:32:01.821 "data_size": 7936 00:32:01.821 }, 00:32:01.821 { 00:32:01.821 "name": "BaseBdev2", 00:32:01.821 "uuid": "160c5d7f-43ef-59c7-b2f2-045a90f8d98b", 00:32:01.821 "is_configured": true, 00:32:01.821 "data_offset": 256, 00:32:01.821 "data_size": 7936 00:32:01.821 } 00:32:01.821 ] 00:32:01.821 }' 00:32:01.821 11:14:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:01.821 11:14:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:02.388 11:14:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:02.388 11:14:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:02.388 11:14:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:02.388 11:14:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:02.388 11:14:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:02.388 11:14:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:02.388 11:14:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:02.645 11:14:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:02.645 "name": "raid_bdev1", 00:32:02.645 "uuid": "4ce083df-fee1-43d9-92b3-24c861d9ed79", 00:32:02.645 "strip_size_kb": 0, 00:32:02.645 "state": "online", 00:32:02.645 "raid_level": "raid1", 00:32:02.645 "superblock": true, 00:32:02.645 "num_base_bdevs": 2, 00:32:02.645 "num_base_bdevs_discovered": 1, 00:32:02.645 "num_base_bdevs_operational": 1, 00:32:02.645 "base_bdevs_list": [ 00:32:02.645 { 00:32:02.645 "name": null, 00:32:02.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:02.645 "is_configured": false, 00:32:02.645 "data_offset": 256, 00:32:02.645 "data_size": 7936 00:32:02.645 }, 00:32:02.645 { 00:32:02.645 "name": "BaseBdev2", 00:32:02.645 "uuid": "160c5d7f-43ef-59c7-b2f2-045a90f8d98b", 00:32:02.645 "is_configured": true, 00:32:02.645 "data_offset": 256, 00:32:02.645 "data_size": 7936 00:32:02.645 } 00:32:02.645 ] 00:32:02.645 }' 00:32:02.645 11:14:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:02.645 11:14:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:02.645 11:14:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:02.903 11:14:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:02.903 11:14:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@787 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:32:02.903 11:14:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@788 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:03.470 [2024-07-25 11:14:10.500549] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev1_malloc 00:32:03.470 [2024-07-25 11:14:10.500615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:03.470 [2024-07-25 11:14:10.500645] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043e80 00:32:03.470 [2024-07-25 11:14:10.500661] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:03.470 [2024-07-25 11:14:10.501251] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:03.470 [2024-07-25 11:14:10.501279] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:03.470 [2024-07-25 11:14:10.501380] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:32:03.470 [2024-07-25 11:14:10.501398] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:32:03.470 [2024-07-25 11:14:10.501414] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:03.470 BaseBdev1 00:32:03.470 11:14:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@789 -- # sleep 1 00:32:04.851 11:14:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:04.851 11:14:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:04.851 11:14:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:04.851 11:14:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:04.851 11:14:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:04.851 11:14:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:04.851 11:14:11 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:04.851 11:14:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:04.851 11:14:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:04.851 11:14:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:04.851 11:14:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:04.851 11:14:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:04.851 11:14:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:04.851 "name": "raid_bdev1", 00:32:04.851 "uuid": "4ce083df-fee1-43d9-92b3-24c861d9ed79", 00:32:04.851 "strip_size_kb": 0, 00:32:04.851 "state": "online", 00:32:04.851 "raid_level": "raid1", 00:32:04.851 "superblock": true, 00:32:04.851 "num_base_bdevs": 2, 00:32:04.851 "num_base_bdevs_discovered": 1, 00:32:04.851 "num_base_bdevs_operational": 1, 00:32:04.851 "base_bdevs_list": [ 00:32:04.851 { 00:32:04.851 "name": null, 00:32:04.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:04.851 "is_configured": false, 00:32:04.851 "data_offset": 256, 00:32:04.851 "data_size": 7936 00:32:04.851 }, 00:32:04.851 { 00:32:04.851 "name": "BaseBdev2", 00:32:04.851 "uuid": "160c5d7f-43ef-59c7-b2f2-045a90f8d98b", 00:32:04.851 "is_configured": true, 00:32:04.851 "data_offset": 256, 00:32:04.851 "data_size": 7936 00:32:04.851 } 00:32:04.851 ] 00:32:04.851 }' 00:32:04.851 11:14:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:04.851 11:14:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:05.418 11:14:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none 
none 00:32:05.418 11:14:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:05.418 11:14:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:05.418 11:14:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:05.418 11:14:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:05.418 11:14:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:05.418 11:14:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:05.678 11:14:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:05.678 "name": "raid_bdev1", 00:32:05.678 "uuid": "4ce083df-fee1-43d9-92b3-24c861d9ed79", 00:32:05.678 "strip_size_kb": 0, 00:32:05.678 "state": "online", 00:32:05.678 "raid_level": "raid1", 00:32:05.678 "superblock": true, 00:32:05.678 "num_base_bdevs": 2, 00:32:05.678 "num_base_bdevs_discovered": 1, 00:32:05.678 "num_base_bdevs_operational": 1, 00:32:05.678 "base_bdevs_list": [ 00:32:05.678 { 00:32:05.678 "name": null, 00:32:05.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:05.678 "is_configured": false, 00:32:05.678 "data_offset": 256, 00:32:05.678 "data_size": 7936 00:32:05.678 }, 00:32:05.678 { 00:32:05.678 "name": "BaseBdev2", 00:32:05.678 "uuid": "160c5d7f-43ef-59c7-b2f2-045a90f8d98b", 00:32:05.678 "is_configured": true, 00:32:05.678 "data_offset": 256, 00:32:05.678 "data_size": 7936 00:32:05.678 } 00:32:05.678 ] 00:32:05.678 }' 00:32:05.678 11:14:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:05.678 11:14:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:05.678 11:14:12 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:05.678 11:14:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:05.678 11:14:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@792 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:05.678 11:14:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # local es=0 00:32:05.678 11:14:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:05.678 11:14:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:32:05.678 11:14:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:05.678 11:14:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:32:05.678 11:14:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:05.678 11:14:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:32:05.678 11:14:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:05.678 11:14:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:32:05.678 11:14:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:32:05.678 11:14:12 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:05.938 [2024-07-25 11:14:12.851135] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:05.938 [2024-07-25 11:14:12.851315] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:32:05.938 [2024-07-25 11:14:12.851335] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:05.938 request: 00:32:05.938 { 00:32:05.938 "base_bdev": "BaseBdev1", 00:32:05.938 "raid_bdev": "raid_bdev1", 00:32:05.938 "method": "bdev_raid_add_base_bdev", 00:32:05.938 "req_id": 1 00:32:05.938 } 00:32:05.938 Got JSON-RPC error response 00:32:05.938 response: 00:32:05.938 { 00:32:05.938 "code": -22, 00:32:05.938 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:32:05.938 } 00:32:05.938 11:14:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:32:05.938 11:14:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:05.938 11:14:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:05.938 11:14:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:05.938 11:14:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@793 -- # sleep 1 00:32:06.876 11:14:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:06.876 11:14:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:06.876 11:14:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:06.876 11:14:13 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:06.876 11:14:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:06.876 11:14:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:06.876 11:14:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:06.876 11:14:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:06.876 11:14:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:06.876 11:14:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:06.876 11:14:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:06.876 11:14:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:07.135 11:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:07.135 "name": "raid_bdev1", 00:32:07.135 "uuid": "4ce083df-fee1-43d9-92b3-24c861d9ed79", 00:32:07.135 "strip_size_kb": 0, 00:32:07.135 "state": "online", 00:32:07.135 "raid_level": "raid1", 00:32:07.135 "superblock": true, 00:32:07.135 "num_base_bdevs": 2, 00:32:07.135 "num_base_bdevs_discovered": 1, 00:32:07.135 "num_base_bdevs_operational": 1, 00:32:07.135 "base_bdevs_list": [ 00:32:07.135 { 00:32:07.135 "name": null, 00:32:07.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:07.135 "is_configured": false, 00:32:07.135 "data_offset": 256, 00:32:07.135 "data_size": 7936 00:32:07.135 }, 00:32:07.135 { 00:32:07.135 "name": "BaseBdev2", 00:32:07.135 "uuid": "160c5d7f-43ef-59c7-b2f2-045a90f8d98b", 00:32:07.135 "is_configured": true, 00:32:07.135 "data_offset": 256, 00:32:07.135 "data_size": 7936 
00:32:07.135 } 00:32:07.135 ] 00:32:07.135 }' 00:32:07.135 11:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:07.135 11:14:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:07.704 11:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:07.704 11:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:07.704 11:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:07.704 11:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:07.704 11:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:07.704 11:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:07.704 11:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:07.964 11:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:07.964 "name": "raid_bdev1", 00:32:07.964 "uuid": "4ce083df-fee1-43d9-92b3-24c861d9ed79", 00:32:07.964 "strip_size_kb": 0, 00:32:07.964 "state": "online", 00:32:07.964 "raid_level": "raid1", 00:32:07.964 "superblock": true, 00:32:07.964 "num_base_bdevs": 2, 00:32:07.964 "num_base_bdevs_discovered": 1, 00:32:07.964 "num_base_bdevs_operational": 1, 00:32:07.964 "base_bdevs_list": [ 00:32:07.964 { 00:32:07.964 "name": null, 00:32:07.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:07.964 "is_configured": false, 00:32:07.964 "data_offset": 256, 00:32:07.964 "data_size": 7936 00:32:07.964 }, 00:32:07.964 { 00:32:07.964 "name": "BaseBdev2", 00:32:07.964 "uuid": "160c5d7f-43ef-59c7-b2f2-045a90f8d98b", 00:32:07.964 
"is_configured": true, 00:32:07.964 "data_offset": 256, 00:32:07.964 "data_size": 7936 00:32:07.964 } 00:32:07.964 ] 00:32:07.964 }' 00:32:07.964 11:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:07.964 11:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:07.964 11:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:07.964 11:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:07.964 11:14:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@798 -- # killprocess 3742209 00:32:07.964 11:14:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 3742209 ']' 00:32:07.964 11:14:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 3742209 00:32:07.964 11:14:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:32:07.964 11:14:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:07.964 11:14:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3742209 00:32:07.964 11:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:07.964 11:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:07.964 11:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3742209' 00:32:07.964 killing process with pid 3742209 00:32:07.964 11:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 3742209 00:32:07.964 Received shutdown signal, test time was about 60.000000 seconds 00:32:07.964 00:32:07.964 Latency(us) 00:32:07.964 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:07.964 
=================================================================================================================== 00:32:07.964 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:07.964 [2024-07-25 11:14:15.028959] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:07.964 11:14:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 3742209 00:32:07.964 [2024-07-25 11:14:15.029099] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:07.964 [2024-07-25 11:14:15.029169] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:07.964 [2024-07-25 11:14:15.029186] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:32:08.533 [2024-07-25 11:14:15.344586] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:10.435 11:14:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@800 -- # return 0 00:32:10.435 00:32:10.435 real 0m32.822s 00:32:10.435 user 0m49.343s 00:32:10.435 sys 0m4.906s 00:32:10.435 11:14:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:10.435 11:14:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:10.435 ************************************ 00:32:10.435 END TEST raid_rebuild_test_sb_4k 00:32:10.435 ************************************ 00:32:10.435 11:14:17 bdev_raid -- bdev/bdev_raid.sh@984 -- # base_malloc_params='-m 32' 00:32:10.435 11:14:17 bdev_raid -- bdev/bdev_raid.sh@985 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:32:10.435 11:14:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:32:10.435 11:14:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:10.435 11:14:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:10.435 
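The `verify_raid_bdev_process` checks traced above repeatedly run `jq -r '.[] | select(.name == "raid_bdev1")'` over the `bdev_raid_get_bdevs all` output and then probe `.process.type // "none"` and `.process.target // "none"`. A minimal standalone sketch of that same selection logic, using an illustrative hand-written JSON sample rather than a live SPDK RPC socket:

```python
import json

# Illustrative sample shaped like `bdev_raid_get_bdevs all` output;
# the real test fetches this via rpc.py -s /var/tmp/spdk-raid.sock.
bdevs = json.loads("""[
  {"name": "raid_bdev1",
   "state": "online",
   "process": {"type": "rebuild", "target": "spare"}}
]""")

# Equivalent of: jq -r '.[] | select(.name == "raid_bdev1")'
info = next(b for b in bdevs if b["name"] == "raid_bdev1")

# Equivalent of: jq -r '.process.type // "none"' and '.process.target // "none"'
# (jq's // operator falls back when the path is null or absent)
process = info.get("process") or {}
process_type = process.get("type", "none")
process_target = process.get("target", "none")

print(process_type, process_target)
```

After the rebuild finishes and `process` disappears from the RPC output, both lookups fall back to `"none"`, which is exactly what the `[[ none == \n\o\n\e ]]` comparisons in the trace verify.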
************************************ 00:32:10.435 START TEST raid_state_function_test_sb_md_separate 00:32:10.435 ************************************ 00:32:10.435 11:14:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:32:10.435 11:14:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:32:10.435 11:14:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:32:10.435 11:14:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:32:10.435 11:14:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:32:10.435 11:14:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:32:10.435 11:14:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:10.435 11:14:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:32:10.435 11:14:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:10.435 11:14:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:10.435 11:14:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:32:10.435 11:14:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:10.435 11:14:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:10.435 11:14:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:32:10.435 11:14:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local 
base_bdevs 00:32:10.435 11:14:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:32:10.435 11:14:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local strip_size 00:32:10.435 11:14:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:32:10.435 11:14:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:32:10.435 11:14:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:32:10.435 11:14:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:32:10.435 11:14:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:32:10.435 11:14:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:32:10.435 11:14:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # raid_pid=3748005 00:32:10.435 11:14:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3748005' 00:32:10.435 Process raid pid: 3748005 00:32:10.435 11:14:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:32:10.435 11:14:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # waitforlisten 3748005 /var/tmp/spdk-raid.sock 00:32:10.435 11:14:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 3748005 ']' 00:32:10.435 11:14:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:10.435 11:14:17 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:10.435 11:14:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:10.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:10.435 11:14:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:10.435 11:14:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:10.435 [2024-07-25 11:14:17.236963] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:32:10.435 [2024-07-25 11:14:17.237085] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:10.435 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:10.435 EAL: Requested device 0000:3d:01.0 cannot be used 00:32:10.436 [2024-07-25 11:14:17.463199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:10.694 [2024-07-25 11:14:17.741448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:11.259 [2024-07-25 11:14:18.095571] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:11.259 [2024-07-25 11:14:18.095610]
bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:11.259 11:14:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:11.259 11:14:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:32:11.260 11:14:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:32:11.518 [2024-07-25 11:14:18.486436] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:11.518 [2024-07-25 11:14:18.486491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:11.518 [2024-07-25 11:14:18.486505] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:11.518 [2024-07-25 11:14:18.486522] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:11.518 11:14:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:11.518 11:14:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:11.518 11:14:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:11.518 11:14:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:11.518 11:14:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:11.518 11:14:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:11.518 11:14:18 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:11.518 11:14:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:11.518 11:14:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:11.518 11:14:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:11.518 11:14:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:11.518 11:14:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:11.776 11:14:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:11.776 "name": "Existed_Raid", 00:32:11.776 "uuid": "d3164745-1ced-4246-bba1-4a71b06e6319", 00:32:11.776 "strip_size_kb": 0, 00:32:11.776 "state": "configuring", 00:32:11.776 "raid_level": "raid1", 00:32:11.776 "superblock": true, 00:32:11.776 "num_base_bdevs": 2, 00:32:11.776 "num_base_bdevs_discovered": 0, 00:32:11.776 "num_base_bdevs_operational": 2, 00:32:11.776 "base_bdevs_list": [ 00:32:11.776 { 00:32:11.776 "name": "BaseBdev1", 00:32:11.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:11.776 "is_configured": false, 00:32:11.776 "data_offset": 0, 00:32:11.776 "data_size": 0 00:32:11.776 }, 00:32:11.776 { 00:32:11.776 "name": "BaseBdev2", 00:32:11.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:11.776 "is_configured": false, 00:32:11.776 "data_offset": 0, 00:32:11.776 "data_size": 0 00:32:11.776 } 00:32:11.776 ] 00:32:11.776 }' 00:32:11.776 11:14:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:11.776 11:14:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:32:12.343 11:14:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:12.627 [2024-07-25 11:14:19.517049] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:12.627 [2024-07-25 11:14:19.517092] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:32:12.627 11:14:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:32:12.896 [2024-07-25 11:14:19.745712] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:12.896 [2024-07-25 11:14:19.745756] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:12.896 [2024-07-25 11:14:19.745770] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:12.896 [2024-07-25 11:14:19.745786] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:12.896 11:14:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:32:13.154 [2024-07-25 11:14:20.029256] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:13.154 BaseBdev1 00:32:13.154 11:14:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:32:13.154 11:14:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:32:13.154 11:14:20 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:13.154 11:14:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:32:13.154 11:14:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:13.154 11:14:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:32:13.154 11:14:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:13.413 11:14:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:13.413 [ 00:32:13.413 { 00:32:13.413 "name": "BaseBdev1", 00:32:13.413 "aliases": [ 00:32:13.413 "dd8c9eeb-4e78-4b17-a6fa-d6849a59e538" 00:32:13.413 ], 00:32:13.413 "product_name": "Malloc disk", 00:32:13.413 "block_size": 4096, 00:32:13.413 "num_blocks": 8192, 00:32:13.413 "uuid": "dd8c9eeb-4e78-4b17-a6fa-d6849a59e538", 00:32:13.413 "md_size": 32, 00:32:13.413 "md_interleave": false, 00:32:13.413 "dif_type": 0, 00:32:13.413 "assigned_rate_limits": { 00:32:13.413 "rw_ios_per_sec": 0, 00:32:13.413 "rw_mbytes_per_sec": 0, 00:32:13.413 "r_mbytes_per_sec": 0, 00:32:13.413 "w_mbytes_per_sec": 0 00:32:13.413 }, 00:32:13.413 "claimed": true, 00:32:13.413 "claim_type": "exclusive_write", 00:32:13.413 "zoned": false, 00:32:13.413 "supported_io_types": { 00:32:13.413 "read": true, 00:32:13.413 "write": true, 00:32:13.413 "unmap": true, 00:32:13.413 "flush": true, 00:32:13.413 "reset": true, 00:32:13.413 "nvme_admin": false, 00:32:13.413 "nvme_io": false, 00:32:13.413 "nvme_io_md": false, 00:32:13.413 "write_zeroes": true, 00:32:13.413 "zcopy": true, 
00:32:13.413 "get_zone_info": false, 00:32:13.413 "zone_management": false, 00:32:13.413 "zone_append": false, 00:32:13.413 "compare": false, 00:32:13.413 "compare_and_write": false, 00:32:13.413 "abort": true, 00:32:13.413 "seek_hole": false, 00:32:13.413 "seek_data": false, 00:32:13.413 "copy": true, 00:32:13.413 "nvme_iov_md": false 00:32:13.413 }, 00:32:13.413 "memory_domains": [ 00:32:13.413 { 00:32:13.413 "dma_device_id": "system", 00:32:13.413 "dma_device_type": 1 00:32:13.413 }, 00:32:13.413 { 00:32:13.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:13.413 "dma_device_type": 2 00:32:13.413 } 00:32:13.413 ], 00:32:13.413 "driver_specific": {} 00:32:13.413 } 00:32:13.413 ] 00:32:13.413 11:14:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:32:13.413 11:14:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:13.413 11:14:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:13.413 11:14:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:13.413 11:14:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:13.413 11:14:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:13.413 11:14:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:13.413 11:14:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:13.413 11:14:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:13.413 11:14:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:32:13.413 11:14:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:13.413 11:14:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:13.413 11:14:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:13.672 11:14:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:13.672 "name": "Existed_Raid", 00:32:13.672 "uuid": "8e8a0517-4166-4cda-99bb-677116874de7", 00:32:13.672 "strip_size_kb": 0, 00:32:13.672 "state": "configuring", 00:32:13.672 "raid_level": "raid1", 00:32:13.672 "superblock": true, 00:32:13.672 "num_base_bdevs": 2, 00:32:13.672 "num_base_bdevs_discovered": 1, 00:32:13.672 "num_base_bdevs_operational": 2, 00:32:13.672 "base_bdevs_list": [ 00:32:13.672 { 00:32:13.672 "name": "BaseBdev1", 00:32:13.672 "uuid": "dd8c9eeb-4e78-4b17-a6fa-d6849a59e538", 00:32:13.672 "is_configured": true, 00:32:13.672 "data_offset": 256, 00:32:13.672 "data_size": 7936 00:32:13.672 }, 00:32:13.672 { 00:32:13.672 "name": "BaseBdev2", 00:32:13.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:13.672 "is_configured": false, 00:32:13.672 "data_offset": 0, 00:32:13.672 "data_size": 0 00:32:13.672 } 00:32:13.672 ] 00:32:13.672 }' 00:32:13.672 11:14:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:13.672 11:14:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:14.238 11:14:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:14.496 [2024-07-25 11:14:21.493319] 
bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:14.496 [2024-07-25 11:14:21.493372] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:32:14.496 11:14:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:32:14.754 [2024-07-25 11:14:21.721946] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:14.754 [2024-07-25 11:14:21.724262] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:14.754 [2024-07-25 11:14:21.724305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:14.754 11:14:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:32:14.754 11:14:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:14.754 11:14:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:14.754 11:14:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:14.754 11:14:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:14.754 11:14:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:14.754 11:14:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:14.754 11:14:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:14.754 11:14:21 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:14.755 11:14:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:14.755 11:14:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:14.755 11:14:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:14.755 11:14:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:14.755 11:14:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:15.013 11:14:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:15.013 "name": "Existed_Raid", 00:32:15.013 "uuid": "29889529-aa63-40e8-ab6a-5eb2f1696a29", 00:32:15.013 "strip_size_kb": 0, 00:32:15.013 "state": "configuring", 00:32:15.013 "raid_level": "raid1", 00:32:15.013 "superblock": true, 00:32:15.013 "num_base_bdevs": 2, 00:32:15.013 "num_base_bdevs_discovered": 1, 00:32:15.013 "num_base_bdevs_operational": 2, 00:32:15.013 "base_bdevs_list": [ 00:32:15.013 { 00:32:15.013 "name": "BaseBdev1", 00:32:15.013 "uuid": "dd8c9eeb-4e78-4b17-a6fa-d6849a59e538", 00:32:15.013 "is_configured": true, 00:32:15.013 "data_offset": 256, 00:32:15.013 "data_size": 7936 00:32:15.013 }, 00:32:15.013 { 00:32:15.013 "name": "BaseBdev2", 00:32:15.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:15.013 "is_configured": false, 00:32:15.013 "data_offset": 0, 00:32:15.013 "data_size": 0 00:32:15.013 } 00:32:15.013 ] 00:32:15.013 }' 00:32:15.013 11:14:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:15.013 11:14:21 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:15.579 11:14:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:32:15.838 [2024-07-25 11:14:22.808299] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:15.838 [2024-07-25 11:14:22.808570] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:32:15.838 [2024-07-25 11:14:22.808591] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:15.838 [2024-07-25 11:14:22.808694] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:32:15.838 [2024-07-25 11:14:22.808899] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:32:15.838 [2024-07-25 11:14:22.808916] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:32:15.838 [2024-07-25 11:14:22.809059] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:15.838 BaseBdev2 00:32:15.838 11:14:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:32:15.838 11:14:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:32:15.838 11:14:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:32:15.838 11:14:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:32:15.838 11:14:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:32:15.838 11:14:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:32:15.838 11:14:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:16.096 11:14:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:16.354 [ 00:32:16.354 { 00:32:16.354 "name": "BaseBdev2", 00:32:16.354 "aliases": [ 00:32:16.354 "e9ecbff3-22e8-4120-87f4-aa4e263101a0" 00:32:16.354 ], 00:32:16.354 "product_name": "Malloc disk", 00:32:16.354 "block_size": 4096, 00:32:16.354 "num_blocks": 8192, 00:32:16.354 "uuid": "e9ecbff3-22e8-4120-87f4-aa4e263101a0", 00:32:16.354 "md_size": 32, 00:32:16.354 "md_interleave": false, 00:32:16.354 "dif_type": 0, 00:32:16.354 "assigned_rate_limits": { 00:32:16.354 "rw_ios_per_sec": 0, 00:32:16.354 "rw_mbytes_per_sec": 0, 00:32:16.354 "r_mbytes_per_sec": 0, 00:32:16.354 "w_mbytes_per_sec": 0 00:32:16.354 }, 00:32:16.354 "claimed": true, 00:32:16.354 "claim_type": "exclusive_write", 00:32:16.354 "zoned": false, 00:32:16.354 "supported_io_types": { 00:32:16.354 "read": true, 00:32:16.354 "write": true, 00:32:16.354 "unmap": true, 00:32:16.354 "flush": true, 00:32:16.354 "reset": true, 00:32:16.354 "nvme_admin": false, 00:32:16.354 "nvme_io": false, 00:32:16.354 "nvme_io_md": false, 00:32:16.354 "write_zeroes": true, 00:32:16.354 "zcopy": true, 00:32:16.354 "get_zone_info": false, 00:32:16.354 "zone_management": false, 00:32:16.354 "zone_append": false, 00:32:16.354 "compare": false, 00:32:16.354 "compare_and_write": false, 00:32:16.354 "abort": true, 00:32:16.354 "seek_hole": false, 00:32:16.354 "seek_data": false, 00:32:16.354 "copy": true, 00:32:16.354 "nvme_iov_md": false 00:32:16.354 }, 00:32:16.354 "memory_domains": [ 00:32:16.354 { 00:32:16.354 "dma_device_id": "system", 00:32:16.354 
"dma_device_type": 1 00:32:16.354 }, 00:32:16.354 { 00:32:16.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:16.354 "dma_device_type": 2 00:32:16.354 } 00:32:16.354 ], 00:32:16.354 "driver_specific": {} 00:32:16.354 } 00:32:16.354 ] 00:32:16.354 11:14:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:32:16.354 11:14:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:32:16.354 11:14:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:16.354 11:14:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:32:16.354 11:14:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:16.354 11:14:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:16.354 11:14:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:16.354 11:14:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:16.354 11:14:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:16.354 11:14:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:16.354 11:14:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:16.354 11:14:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:16.354 11:14:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:16.354 11:14:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:16.354 11:14:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:16.613 11:14:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:16.613 "name": "Existed_Raid", 00:32:16.613 "uuid": "29889529-aa63-40e8-ab6a-5eb2f1696a29", 00:32:16.613 "strip_size_kb": 0, 00:32:16.613 "state": "online", 00:32:16.613 "raid_level": "raid1", 00:32:16.613 "superblock": true, 00:32:16.613 "num_base_bdevs": 2, 00:32:16.613 "num_base_bdevs_discovered": 2, 00:32:16.613 "num_base_bdevs_operational": 2, 00:32:16.613 "base_bdevs_list": [ 00:32:16.613 { 00:32:16.613 "name": "BaseBdev1", 00:32:16.613 "uuid": "dd8c9eeb-4e78-4b17-a6fa-d6849a59e538", 00:32:16.613 "is_configured": true, 00:32:16.613 "data_offset": 256, 00:32:16.613 "data_size": 7936 00:32:16.613 }, 00:32:16.613 { 00:32:16.613 "name": "BaseBdev2", 00:32:16.613 "uuid": "e9ecbff3-22e8-4120-87f4-aa4e263101a0", 00:32:16.613 "is_configured": true, 00:32:16.613 "data_offset": 256, 00:32:16.613 "data_size": 7936 00:32:16.613 } 00:32:16.613 ] 00:32:16.613 }' 00:32:16.613 11:14:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:16.613 11:14:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:17.178 11:14:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:32:17.178 11:14:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:32:17.178 11:14:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:17.178 11:14:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local 
base_bdev_info 00:32:17.178 11:14:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:32:17.178 11:14:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:32:17.178 11:14:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:32:17.178 11:14:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:32:17.178 [2024-07-25 11:14:24.284733] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:17.437 11:14:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:17.437 "name": "Existed_Raid", 00:32:17.437 "aliases": [ 00:32:17.437 "29889529-aa63-40e8-ab6a-5eb2f1696a29" 00:32:17.437 ], 00:32:17.437 "product_name": "Raid Volume", 00:32:17.437 "block_size": 4096, 00:32:17.437 "num_blocks": 7936, 00:32:17.437 "uuid": "29889529-aa63-40e8-ab6a-5eb2f1696a29", 00:32:17.437 "md_size": 32, 00:32:17.437 "md_interleave": false, 00:32:17.437 "dif_type": 0, 00:32:17.437 "assigned_rate_limits": { 00:32:17.437 "rw_ios_per_sec": 0, 00:32:17.437 "rw_mbytes_per_sec": 0, 00:32:17.437 "r_mbytes_per_sec": 0, 00:32:17.437 "w_mbytes_per_sec": 0 00:32:17.437 }, 00:32:17.437 "claimed": false, 00:32:17.437 "zoned": false, 00:32:17.437 "supported_io_types": { 00:32:17.437 "read": true, 00:32:17.437 "write": true, 00:32:17.437 "unmap": false, 00:32:17.437 "flush": false, 00:32:17.437 "reset": true, 00:32:17.437 "nvme_admin": false, 00:32:17.437 "nvme_io": false, 00:32:17.437 "nvme_io_md": false, 00:32:17.437 "write_zeroes": true, 00:32:17.437 "zcopy": false, 00:32:17.437 "get_zone_info": false, 00:32:17.437 "zone_management": false, 00:32:17.437 "zone_append": false, 00:32:17.437 "compare": false, 00:32:17.437 "compare_and_write": 
false, 00:32:17.437 "abort": false, 00:32:17.437 "seek_hole": false, 00:32:17.437 "seek_data": false, 00:32:17.437 "copy": false, 00:32:17.437 "nvme_iov_md": false 00:32:17.437 }, 00:32:17.437 "memory_domains": [ 00:32:17.437 { 00:32:17.437 "dma_device_id": "system", 00:32:17.437 "dma_device_type": 1 00:32:17.437 }, 00:32:17.437 { 00:32:17.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:17.437 "dma_device_type": 2 00:32:17.437 }, 00:32:17.437 { 00:32:17.437 "dma_device_id": "system", 00:32:17.437 "dma_device_type": 1 00:32:17.437 }, 00:32:17.437 { 00:32:17.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:17.437 "dma_device_type": 2 00:32:17.437 } 00:32:17.437 ], 00:32:17.437 "driver_specific": { 00:32:17.437 "raid": { 00:32:17.437 "uuid": "29889529-aa63-40e8-ab6a-5eb2f1696a29", 00:32:17.437 "strip_size_kb": 0, 00:32:17.437 "state": "online", 00:32:17.437 "raid_level": "raid1", 00:32:17.437 "superblock": true, 00:32:17.437 "num_base_bdevs": 2, 00:32:17.437 "num_base_bdevs_discovered": 2, 00:32:17.437 "num_base_bdevs_operational": 2, 00:32:17.437 "base_bdevs_list": [ 00:32:17.437 { 00:32:17.437 "name": "BaseBdev1", 00:32:17.437 "uuid": "dd8c9eeb-4e78-4b17-a6fa-d6849a59e538", 00:32:17.437 "is_configured": true, 00:32:17.437 "data_offset": 256, 00:32:17.437 "data_size": 7936 00:32:17.437 }, 00:32:17.437 { 00:32:17.437 "name": "BaseBdev2", 00:32:17.437 "uuid": "e9ecbff3-22e8-4120-87f4-aa4e263101a0", 00:32:17.437 "is_configured": true, 00:32:17.437 "data_offset": 256, 00:32:17.437 "data_size": 7936 00:32:17.437 } 00:32:17.437 ] 00:32:17.437 } 00:32:17.437 } 00:32:17.437 }' 00:32:17.437 11:14:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:17.437 11:14:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:32:17.437 BaseBdev2' 00:32:17.437 11:14:24 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:17.437 11:14:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:17.437 11:14:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:32:17.696 11:14:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:17.696 "name": "BaseBdev1", 00:32:17.696 "aliases": [ 00:32:17.696 "dd8c9eeb-4e78-4b17-a6fa-d6849a59e538" 00:32:17.696 ], 00:32:17.696 "product_name": "Malloc disk", 00:32:17.696 "block_size": 4096, 00:32:17.696 "num_blocks": 8192, 00:32:17.696 "uuid": "dd8c9eeb-4e78-4b17-a6fa-d6849a59e538", 00:32:17.696 "md_size": 32, 00:32:17.696 "md_interleave": false, 00:32:17.696 "dif_type": 0, 00:32:17.696 "assigned_rate_limits": { 00:32:17.696 "rw_ios_per_sec": 0, 00:32:17.696 "rw_mbytes_per_sec": 0, 00:32:17.696 "r_mbytes_per_sec": 0, 00:32:17.696 "w_mbytes_per_sec": 0 00:32:17.696 }, 00:32:17.696 "claimed": true, 00:32:17.696 "claim_type": "exclusive_write", 00:32:17.696 "zoned": false, 00:32:17.696 "supported_io_types": { 00:32:17.696 "read": true, 00:32:17.696 "write": true, 00:32:17.696 "unmap": true, 00:32:17.696 "flush": true, 00:32:17.696 "reset": true, 00:32:17.696 "nvme_admin": false, 00:32:17.696 "nvme_io": false, 00:32:17.696 "nvme_io_md": false, 00:32:17.696 "write_zeroes": true, 00:32:17.696 "zcopy": true, 00:32:17.696 "get_zone_info": false, 00:32:17.696 "zone_management": false, 00:32:17.696 "zone_append": false, 00:32:17.696 "compare": false, 00:32:17.696 "compare_and_write": false, 00:32:17.696 "abort": true, 00:32:17.696 "seek_hole": false, 00:32:17.696 "seek_data": false, 00:32:17.696 "copy": true, 00:32:17.696 "nvme_iov_md": false 00:32:17.696 }, 00:32:17.696 "memory_domains": [ 00:32:17.696 { 
00:32:17.696 "dma_device_id": "system", 00:32:17.696 "dma_device_type": 1 00:32:17.696 }, 00:32:17.696 { 00:32:17.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:17.696 "dma_device_type": 2 00:32:17.696 } 00:32:17.696 ], 00:32:17.696 "driver_specific": {} 00:32:17.696 }' 00:32:17.696 11:14:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:17.696 11:14:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:17.696 11:14:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:32:17.696 11:14:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:17.696 11:14:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:17.696 11:14:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:32:17.696 11:14:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:17.696 11:14:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:17.954 11:14:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:32:17.954 11:14:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:17.954 11:14:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:17.954 11:14:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:32:17.954 11:14:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:17.954 11:14:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:17.954 11:14:24 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:32:18.213 11:14:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:18.213 "name": "BaseBdev2", 00:32:18.213 "aliases": [ 00:32:18.213 "e9ecbff3-22e8-4120-87f4-aa4e263101a0" 00:32:18.213 ], 00:32:18.213 "product_name": "Malloc disk", 00:32:18.213 "block_size": 4096, 00:32:18.213 "num_blocks": 8192, 00:32:18.213 "uuid": "e9ecbff3-22e8-4120-87f4-aa4e263101a0", 00:32:18.213 "md_size": 32, 00:32:18.213 "md_interleave": false, 00:32:18.213 "dif_type": 0, 00:32:18.213 "assigned_rate_limits": { 00:32:18.213 "rw_ios_per_sec": 0, 00:32:18.213 "rw_mbytes_per_sec": 0, 00:32:18.213 "r_mbytes_per_sec": 0, 00:32:18.213 "w_mbytes_per_sec": 0 00:32:18.213 }, 00:32:18.213 "claimed": true, 00:32:18.213 "claim_type": "exclusive_write", 00:32:18.213 "zoned": false, 00:32:18.213 "supported_io_types": { 00:32:18.213 "read": true, 00:32:18.213 "write": true, 00:32:18.213 "unmap": true, 00:32:18.213 "flush": true, 00:32:18.213 "reset": true, 00:32:18.213 "nvme_admin": false, 00:32:18.213 "nvme_io": false, 00:32:18.213 "nvme_io_md": false, 00:32:18.213 "write_zeroes": true, 00:32:18.213 "zcopy": true, 00:32:18.213 "get_zone_info": false, 00:32:18.213 "zone_management": false, 00:32:18.213 "zone_append": false, 00:32:18.213 "compare": false, 00:32:18.213 "compare_and_write": false, 00:32:18.213 "abort": true, 00:32:18.213 "seek_hole": false, 00:32:18.213 "seek_data": false, 00:32:18.213 "copy": true, 00:32:18.213 "nvme_iov_md": false 00:32:18.213 }, 00:32:18.213 "memory_domains": [ 00:32:18.213 { 00:32:18.213 "dma_device_id": "system", 00:32:18.213 "dma_device_type": 1 00:32:18.213 }, 00:32:18.213 { 00:32:18.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:18.213 "dma_device_type": 2 00:32:18.213 } 00:32:18.213 ], 00:32:18.213 "driver_specific": {} 00:32:18.213 }' 00:32:18.213 11:14:25 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:18.213 11:14:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:18.213 11:14:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:32:18.213 11:14:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:18.213 11:14:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:18.213 11:14:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:32:18.213 11:14:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:18.471 11:14:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:18.471 11:14:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:32:18.471 11:14:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:18.471 11:14:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:18.471 11:14:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:32:18.471 11:14:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:32:18.730 [2024-07-25 11:14:25.684240] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:18.730 11:14:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # local expected_state 00:32:18.730 11:14:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:32:18.730 11:14:25 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:32:18.730 11:14:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:32:18.730 11:14:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:32:18.730 11:14:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:32:18.730 11:14:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:18.730 11:14:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:18.730 11:14:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:18.730 11:14:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:18.730 11:14:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:18.730 11:14:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:18.730 11:14:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:18.730 11:14:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:18.730 11:14:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:18.730 11:14:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:18.730 11:14:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:18.988 
11:14:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:18.988 "name": "Existed_Raid", 00:32:18.988 "uuid": "29889529-aa63-40e8-ab6a-5eb2f1696a29", 00:32:18.988 "strip_size_kb": 0, 00:32:18.988 "state": "online", 00:32:18.988 "raid_level": "raid1", 00:32:18.988 "superblock": true, 00:32:18.988 "num_base_bdevs": 2, 00:32:18.988 "num_base_bdevs_discovered": 1, 00:32:18.988 "num_base_bdevs_operational": 1, 00:32:18.988 "base_bdevs_list": [ 00:32:18.988 { 00:32:18.988 "name": null, 00:32:18.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:18.988 "is_configured": false, 00:32:18.988 "data_offset": 256, 00:32:18.988 "data_size": 7936 00:32:18.988 }, 00:32:18.988 { 00:32:18.988 "name": "BaseBdev2", 00:32:18.988 "uuid": "e9ecbff3-22e8-4120-87f4-aa4e263101a0", 00:32:18.988 "is_configured": true, 00:32:18.988 "data_offset": 256, 00:32:18.988 "data_size": 7936 00:32:18.988 } 00:32:18.988 ] 00:32:18.988 }' 00:32:18.988 11:14:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:18.988 11:14:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:19.555 11:14:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:32:19.555 11:14:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:32:19.555 11:14:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:32:19.555 11:14:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:19.813 11:14:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:32:19.813 11:14:26 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:19.813 11:14:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:32:20.071 [2024-07-25 11:14:27.102646] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:20.071 [2024-07-25 11:14:27.102763] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:20.330 [2024-07-25 11:14:27.249085] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:20.330 [2024-07-25 11:14:27.249135] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:20.330 [2024-07-25 11:14:27.249161] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:32:20.330 11:14:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:32:20.330 11:14:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:32:20.330 11:14:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:20.330 11:14:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:32:20.588 11:14:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:32:20.588 11:14:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:32:20.588 11:14:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:32:20.588 11:14:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@341 -- # 
killprocess 3748005 00:32:20.588 11:14:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 3748005 ']' 00:32:20.588 11:14:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 3748005 00:32:20.588 11:14:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:32:20.588 11:14:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:20.588 11:14:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3748005 00:32:20.588 11:14:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:20.588 11:14:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:20.588 11:14:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3748005' 00:32:20.588 killing process with pid 3748005 00:32:20.588 11:14:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 3748005 00:32:20.588 [2024-07-25 11:14:27.560184] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:20.588 11:14:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 3748005 00:32:20.588 [2024-07-25 11:14:27.584586] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:22.490 11:14:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@343 -- # return 0 00:32:22.490 00:32:22.490 real 0m12.086s 00:32:22.490 user 0m19.674s 00:32:22.490 sys 0m2.091s 00:32:22.490 11:14:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:22.490 11:14:29 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:32:22.490 ************************************ 00:32:22.490 END TEST raid_state_function_test_sb_md_separate 00:32:22.490 ************************************ 00:32:22.490 11:14:29 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:32:22.490 11:14:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:22.490 11:14:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:22.490 11:14:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:22.490 ************************************ 00:32:22.490 START TEST raid_superblock_test_md_separate 00:32:22.490 ************************************ 00:32:22.490 11:14:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:32:22.490 11:14:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:32:22.490 11:14:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:32:22.490 11:14:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:32:22.490 11:14:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:32:22.490 11:14:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:32:22.490 11:14:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:32:22.490 11:14:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:32:22.490 11:14:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:32:22.491 11:14:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:32:22.491 11:14:29 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@414 -- # local strip_size 00:32:22.491 11:14:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:32:22.491 11:14:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:32:22.491 11:14:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:32:22.491 11:14:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:32:22.491 11:14:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:32:22.491 11:14:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@427 -- # raid_pid=3750278 00:32:22.491 11:14:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@428 -- # waitforlisten 3750278 /var/tmp/spdk-raid.sock 00:32:22.491 11:14:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:32:22.491 11:14:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 3750278 ']' 00:32:22.491 11:14:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:22.491 11:14:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:22.491 11:14:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:22.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:32:22.491 11:14:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:22.491 11:14:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:22.491 [2024-07-25 11:14:29.413172] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:32:22.491 [2024-07-25 11:14:29.413295] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3750278 ] 00:32:22.491 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:22.491 EAL: Requested device 0000:3d:01.0 cannot be used 00:32:22.491 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:22.491 EAL: Requested device 0000:3d:01.1 cannot be used 00:32:22.491 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:22.491 EAL: Requested device 0000:3d:01.2 cannot be used 00:32:22.491 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:22.491 EAL: Requested device 0000:3d:01.3 cannot be used 00:32:22.491 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:22.491 EAL: Requested device 0000:3d:01.4 cannot be used 00:32:22.491 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:22.491 EAL: Requested device 0000:3d:01.5 cannot be used 00:32:22.491 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:22.491 EAL: Requested device 0000:3d:01.6 cannot be used 00:32:22.491 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:22.491 EAL: Requested device 0000:3d:01.7 cannot be used 00:32:22.491 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:22.491 EAL: Requested device 0000:3d:02.0 cannot be used 00:32:22.491 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:32:22.491 EAL: Requested device 0000:3d:02.1 cannot be used 00:32:22.491 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:22.491 EAL: Requested device 0000:3d:02.2 cannot be used 00:32:22.491 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:22.491 EAL: Requested device 0000:3d:02.3 cannot be used 00:32:22.491 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:22.491 EAL: Requested device 0000:3d:02.4 cannot be used 00:32:22.491 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:22.491 EAL: Requested device 0000:3d:02.5 cannot be used 00:32:22.491 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:22.491 EAL: Requested device 0000:3d:02.6 cannot be used 00:32:22.491 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:22.491 EAL: Requested device 0000:3d:02.7 cannot be used 00:32:22.491 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:22.491 EAL: Requested device 0000:3f:01.0 cannot be used 00:32:22.491 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:22.491 EAL: Requested device 0000:3f:01.1 cannot be used 00:32:22.491 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:22.491 EAL: Requested device 0000:3f:01.2 cannot be used 00:32:22.491 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:22.491 EAL: Requested device 0000:3f:01.3 cannot be used 00:32:22.491 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:22.491 EAL: Requested device 0000:3f:01.4 cannot be used 00:32:22.491 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:22.491 EAL: Requested device 0000:3f:01.5 cannot be used 00:32:22.491 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:22.491 EAL: Requested device 0000:3f:01.6 cannot be used 00:32:22.491 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:22.491 EAL: 
Requested device 0000:3f:01.7 cannot be used 00:32:22.491 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:22.491 EAL: Requested device 0000:3f:02.0 cannot be used 00:32:22.491 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:22.491 EAL: Requested device 0000:3f:02.1 cannot be used 00:32:22.491 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:22.491 EAL: Requested device 0000:3f:02.2 cannot be used 00:32:22.491 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:22.491 EAL: Requested device 0000:3f:02.3 cannot be used 00:32:22.491 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:22.491 EAL: Requested device 0000:3f:02.4 cannot be used 00:32:22.491 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:22.491 EAL: Requested device 0000:3f:02.5 cannot be used 00:32:22.491 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:22.491 EAL: Requested device 0000:3f:02.6 cannot be used 00:32:22.491 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:22.491 EAL: Requested device 0000:3f:02.7 cannot be used 00:32:22.749 [2024-07-25 11:14:29.640876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:23.008 [2024-07-25 11:14:29.899547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:23.266 [2024-07-25 11:14:30.221598] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:23.266 [2024-07-25 11:14:30.221633] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:23.524 11:14:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:23.524 11:14:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:32:23.524 11:14:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:32:23.524 11:14:30 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:32:23.524 11:14:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:32:23.524 11:14:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:32:23.524 11:14:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:32:23.524 11:14:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:23.524 11:14:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:32:23.524 11:14:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:23.524 11:14:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:32:23.781 malloc1 00:32:23.781 11:14:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:24.038 [2024-07-25 11:14:30.909613] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:24.038 [2024-07-25 11:14:30.909680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:24.038 [2024-07-25 11:14:30.909712] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:32:24.038 [2024-07-25 11:14:30.909728] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:24.038 [2024-07-25 11:14:30.912231] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:24.038 
[2024-07-25 11:14:30.912264] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:24.038 pt1 00:32:24.038 11:14:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:32:24.038 11:14:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:32:24.038 11:14:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:32:24.038 11:14:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:32:24.038 11:14:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:32:24.039 11:14:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:24.039 11:14:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:32:24.039 11:14:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:24.039 11:14:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:32:24.296 malloc2 00:32:24.296 11:14:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:24.296 [2024-07-25 11:14:31.415247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:24.554 [2024-07-25 11:14:31.415309] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:24.554 [2024-07-25 11:14:31.415341] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000040280 00:32:24.554 [2024-07-25 11:14:31.415357] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:24.554 [2024-07-25 11:14:31.417830] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:24.554 [2024-07-25 11:14:31.417862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:24.554 pt2 00:32:24.554 11:14:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:32:24.554 11:14:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:32:24.554 11:14:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:32:24.554 [2024-07-25 11:14:31.643866] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:24.554 [2024-07-25 11:14:31.646226] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:24.554 [2024-07-25 11:14:31.646449] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:32:24.554 [2024-07-25 11:14:31.646467] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:24.554 [2024-07-25 11:14:31.646584] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:32:24.554 [2024-07-25 11:14:31.646782] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:32:24.554 [2024-07-25 11:14:31.646801] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:32:24.554 [2024-07-25 11:14:31.646962] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:24.554 11:14:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 2 00:32:24.554 11:14:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:24.554 11:14:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:24.554 11:14:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:24.554 11:14:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:24.554 11:14:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:24.554 11:14:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:24.554 11:14:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:24.554 11:14:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:24.554 11:14:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:24.554 11:14:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:24.554 11:14:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:24.811 11:14:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:24.811 "name": "raid_bdev1", 00:32:24.811 "uuid": "70097ef2-2d58-4bac-b1bd-f5fba8196f8d", 00:32:24.811 "strip_size_kb": 0, 00:32:24.811 "state": "online", 00:32:24.811 "raid_level": "raid1", 00:32:24.811 "superblock": true, 00:32:24.811 "num_base_bdevs": 2, 00:32:24.811 "num_base_bdevs_discovered": 2, 00:32:24.811 "num_base_bdevs_operational": 2, 00:32:24.811 "base_bdevs_list": [ 00:32:24.811 { 00:32:24.811 "name": "pt1", 00:32:24.811 
"uuid": "00000000-0000-0000-0000-000000000001", 00:32:24.811 "is_configured": true, 00:32:24.811 "data_offset": 256, 00:32:24.811 "data_size": 7936 00:32:24.811 }, 00:32:24.811 { 00:32:24.811 "name": "pt2", 00:32:24.811 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:24.811 "is_configured": true, 00:32:24.811 "data_offset": 256, 00:32:24.811 "data_size": 7936 00:32:24.811 } 00:32:24.811 ] 00:32:24.811 }' 00:32:24.811 11:14:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:24.811 11:14:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:25.377 11:14:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:32:25.377 11:14:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:32:25.377 11:14:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:25.377 11:14:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:32:25.377 11:14:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:32:25.377 11:14:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:32:25.377 11:14:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:25.377 11:14:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:32:25.636 [2024-07-25 11:14:32.666928] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:25.636 11:14:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:25.636 "name": "raid_bdev1", 00:32:25.636 "aliases": [ 00:32:25.636 
"70097ef2-2d58-4bac-b1bd-f5fba8196f8d" 00:32:25.636 ], 00:32:25.636 "product_name": "Raid Volume", 00:32:25.636 "block_size": 4096, 00:32:25.636 "num_blocks": 7936, 00:32:25.636 "uuid": "70097ef2-2d58-4bac-b1bd-f5fba8196f8d", 00:32:25.636 "md_size": 32, 00:32:25.636 "md_interleave": false, 00:32:25.636 "dif_type": 0, 00:32:25.636 "assigned_rate_limits": { 00:32:25.636 "rw_ios_per_sec": 0, 00:32:25.636 "rw_mbytes_per_sec": 0, 00:32:25.636 "r_mbytes_per_sec": 0, 00:32:25.636 "w_mbytes_per_sec": 0 00:32:25.636 }, 00:32:25.636 "claimed": false, 00:32:25.636 "zoned": false, 00:32:25.636 "supported_io_types": { 00:32:25.636 "read": true, 00:32:25.636 "write": true, 00:32:25.636 "unmap": false, 00:32:25.636 "flush": false, 00:32:25.636 "reset": true, 00:32:25.636 "nvme_admin": false, 00:32:25.636 "nvme_io": false, 00:32:25.636 "nvme_io_md": false, 00:32:25.636 "write_zeroes": true, 00:32:25.636 "zcopy": false, 00:32:25.636 "get_zone_info": false, 00:32:25.636 "zone_management": false, 00:32:25.636 "zone_append": false, 00:32:25.636 "compare": false, 00:32:25.636 "compare_and_write": false, 00:32:25.636 "abort": false, 00:32:25.636 "seek_hole": false, 00:32:25.636 "seek_data": false, 00:32:25.636 "copy": false, 00:32:25.636 "nvme_iov_md": false 00:32:25.636 }, 00:32:25.636 "memory_domains": [ 00:32:25.636 { 00:32:25.636 "dma_device_id": "system", 00:32:25.636 "dma_device_type": 1 00:32:25.636 }, 00:32:25.636 { 00:32:25.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:25.636 "dma_device_type": 2 00:32:25.636 }, 00:32:25.636 { 00:32:25.636 "dma_device_id": "system", 00:32:25.636 "dma_device_type": 1 00:32:25.636 }, 00:32:25.636 { 00:32:25.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:25.636 "dma_device_type": 2 00:32:25.636 } 00:32:25.636 ], 00:32:25.636 "driver_specific": { 00:32:25.636 "raid": { 00:32:25.636 "uuid": "70097ef2-2d58-4bac-b1bd-f5fba8196f8d", 00:32:25.636 "strip_size_kb": 0, 00:32:25.636 "state": "online", 00:32:25.636 "raid_level": "raid1", 
00:32:25.636 "superblock": true, 00:32:25.636 "num_base_bdevs": 2, 00:32:25.636 "num_base_bdevs_discovered": 2, 00:32:25.636 "num_base_bdevs_operational": 2, 00:32:25.636 "base_bdevs_list": [ 00:32:25.636 { 00:32:25.636 "name": "pt1", 00:32:25.636 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:25.636 "is_configured": true, 00:32:25.636 "data_offset": 256, 00:32:25.636 "data_size": 7936 00:32:25.636 }, 00:32:25.636 { 00:32:25.636 "name": "pt2", 00:32:25.636 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:25.636 "is_configured": true, 00:32:25.636 "data_offset": 256, 00:32:25.636 "data_size": 7936 00:32:25.636 } 00:32:25.636 ] 00:32:25.636 } 00:32:25.636 } 00:32:25.636 }' 00:32:25.636 11:14:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:25.636 11:14:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:32:25.636 pt2' 00:32:25.636 11:14:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:25.636 11:14:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:32:25.636 11:14:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:25.896 11:14:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:25.896 "name": "pt1", 00:32:25.896 "aliases": [ 00:32:25.896 "00000000-0000-0000-0000-000000000001" 00:32:25.896 ], 00:32:25.896 "product_name": "passthru", 00:32:25.896 "block_size": 4096, 00:32:25.896 "num_blocks": 8192, 00:32:25.896 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:25.896 "md_size": 32, 00:32:25.896 "md_interleave": false, 00:32:25.896 "dif_type": 0, 00:32:25.896 "assigned_rate_limits": { 00:32:25.896 
"rw_ios_per_sec": 0, 00:32:25.896 "rw_mbytes_per_sec": 0, 00:32:25.896 "r_mbytes_per_sec": 0, 00:32:25.896 "w_mbytes_per_sec": 0 00:32:25.896 }, 00:32:25.896 "claimed": true, 00:32:25.896 "claim_type": "exclusive_write", 00:32:25.896 "zoned": false, 00:32:25.896 "supported_io_types": { 00:32:25.896 "read": true, 00:32:25.896 "write": true, 00:32:25.896 "unmap": true, 00:32:25.896 "flush": true, 00:32:25.896 "reset": true, 00:32:25.896 "nvme_admin": false, 00:32:25.896 "nvme_io": false, 00:32:25.896 "nvme_io_md": false, 00:32:25.896 "write_zeroes": true, 00:32:25.896 "zcopy": true, 00:32:25.896 "get_zone_info": false, 00:32:25.896 "zone_management": false, 00:32:25.896 "zone_append": false, 00:32:25.896 "compare": false, 00:32:25.896 "compare_and_write": false, 00:32:25.896 "abort": true, 00:32:25.896 "seek_hole": false, 00:32:25.896 "seek_data": false, 00:32:25.896 "copy": true, 00:32:25.896 "nvme_iov_md": false 00:32:25.896 }, 00:32:25.896 "memory_domains": [ 00:32:25.896 { 00:32:25.896 "dma_device_id": "system", 00:32:25.896 "dma_device_type": 1 00:32:25.896 }, 00:32:25.896 { 00:32:25.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:25.897 "dma_device_type": 2 00:32:25.897 } 00:32:25.897 ], 00:32:25.897 "driver_specific": { 00:32:25.897 "passthru": { 00:32:25.897 "name": "pt1", 00:32:25.897 "base_bdev_name": "malloc1" 00:32:25.897 } 00:32:25.897 } 00:32:25.897 }' 00:32:25.897 11:14:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:25.897 11:14:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:26.174 11:14:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:32:26.174 11:14:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:26.174 11:14:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:26.174 11:14:33 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:32:26.174 11:14:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:26.174 11:14:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:26.174 11:14:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:32:26.174 11:14:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:26.174 11:14:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:26.445 11:14:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:32:26.445 11:14:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:26.445 11:14:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:32:26.445 11:14:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:26.445 11:14:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:26.445 "name": "pt2", 00:32:26.445 "aliases": [ 00:32:26.445 "00000000-0000-0000-0000-000000000002" 00:32:26.445 ], 00:32:26.445 "product_name": "passthru", 00:32:26.445 "block_size": 4096, 00:32:26.445 "num_blocks": 8192, 00:32:26.445 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:26.445 "md_size": 32, 00:32:26.445 "md_interleave": false, 00:32:26.445 "dif_type": 0, 00:32:26.445 "assigned_rate_limits": { 00:32:26.445 "rw_ios_per_sec": 0, 00:32:26.445 "rw_mbytes_per_sec": 0, 00:32:26.445 "r_mbytes_per_sec": 0, 00:32:26.445 "w_mbytes_per_sec": 0 00:32:26.445 }, 00:32:26.445 "claimed": true, 00:32:26.445 "claim_type": "exclusive_write", 00:32:26.445 "zoned": false, 00:32:26.445 
"supported_io_types": { 00:32:26.445 "read": true, 00:32:26.445 "write": true, 00:32:26.446 "unmap": true, 00:32:26.446 "flush": true, 00:32:26.446 "reset": true, 00:32:26.446 "nvme_admin": false, 00:32:26.446 "nvme_io": false, 00:32:26.446 "nvme_io_md": false, 00:32:26.446 "write_zeroes": true, 00:32:26.446 "zcopy": true, 00:32:26.446 "get_zone_info": false, 00:32:26.446 "zone_management": false, 00:32:26.446 "zone_append": false, 00:32:26.446 "compare": false, 00:32:26.446 "compare_and_write": false, 00:32:26.446 "abort": true, 00:32:26.446 "seek_hole": false, 00:32:26.446 "seek_data": false, 00:32:26.446 "copy": true, 00:32:26.446 "nvme_iov_md": false 00:32:26.446 }, 00:32:26.446 "memory_domains": [ 00:32:26.446 { 00:32:26.446 "dma_device_id": "system", 00:32:26.446 "dma_device_type": 1 00:32:26.446 }, 00:32:26.446 { 00:32:26.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:26.446 "dma_device_type": 2 00:32:26.446 } 00:32:26.446 ], 00:32:26.446 "driver_specific": { 00:32:26.446 "passthru": { 00:32:26.446 "name": "pt2", 00:32:26.446 "base_bdev_name": "malloc2" 00:32:26.446 } 00:32:26.446 } 00:32:26.446 }' 00:32:26.446 11:14:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:26.703 11:14:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:26.703 11:14:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:32:26.703 11:14:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:26.703 11:14:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:26.703 11:14:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:32:26.703 11:14:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:26.703 11:14:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:32:26.703 11:14:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:32:26.703 11:14:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:26.961 11:14:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:26.961 11:14:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:32:26.961 11:14:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:32:26.961 11:14:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:27.219 [2024-07-25 11:14:34.118853] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:27.219 11:14:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=70097ef2-2d58-4bac-b1bd-f5fba8196f8d 00:32:27.219 11:14:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' -z 70097ef2-2d58-4bac-b1bd-f5fba8196f8d ']' 00:32:27.219 11:14:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:27.477 [2024-07-25 11:14:34.347131] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:27.477 [2024-07-25 11:14:34.347171] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:27.477 [2024-07-25 11:14:34.347265] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:27.477 [2024-07-25 11:14:34.347339] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:27.477 [2024-07-25 11:14:34.347364] bdev_raid.c: 
378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:32:27.477 11:14:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:27.477 11:14:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:32:27.477 11:14:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:32:27.477 11:14:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:32:27.477 11:14:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:32:27.477 11:14:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:32:27.735 11:14:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:32:27.735 11:14:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:32:27.993 11:14:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:32:27.993 11:14:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:32:28.265 11:14:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:32:28.265 11:14:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 
malloc2' -n raid_bdev1 00:32:28.265 11:14:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:32:28.265 11:14:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:32:28.265 11:14:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:32:28.265 11:14:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:28.265 11:14:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:32:28.265 11:14:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:28.265 11:14:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:32:28.265 11:14:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:28.265 11:14:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:32:28.265 11:14:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:32:28.265 11:14:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:32:28.526 [2024-07-25 11:14:35.490186] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:32:28.526 [2024-07-25 11:14:35.492499] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:32:28.526 [2024-07-25 11:14:35.492572] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:32:28.526 [2024-07-25 11:14:35.492631] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:32:28.526 [2024-07-25 11:14:35.492655] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:28.526 [2024-07-25 11:14:35.492671] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:32:28.526 request: 00:32:28.526 { 00:32:28.526 "name": "raid_bdev1", 00:32:28.526 "raid_level": "raid1", 00:32:28.526 "base_bdevs": [ 00:32:28.526 "malloc1", 00:32:28.526 "malloc2" 00:32:28.526 ], 00:32:28.526 "superblock": false, 00:32:28.526 "method": "bdev_raid_create", 00:32:28.526 "req_id": 1 00:32:28.526 } 00:32:28.526 Got JSON-RPC error response 00:32:28.526 response: 00:32:28.526 { 00:32:28.526 "code": -17, 00:32:28.526 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:32:28.526 } 00:32:28.526 11:14:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:32:28.526 11:14:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:28.526 11:14:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:28.526 11:14:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:28.526 11:14:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:32:28.526 11:14:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:32:28.784 11:14:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:32:28.784 11:14:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:32:28.784 11:14:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:29.042 [2024-07-25 11:14:35.935326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:29.042 [2024-07-25 11:14:35.935389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:29.042 [2024-07-25 11:14:35.935412] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:32:29.042 [2024-07-25 11:14:35.935430] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:29.042 [2024-07-25 11:14:35.937952] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:29.042 [2024-07-25 11:14:35.937989] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:29.042 [2024-07-25 11:14:35.938047] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:32:29.042 [2024-07-25 11:14:35.938116] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:29.042 pt1 00:32:29.042 11:14:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:32:29.042 11:14:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:29.042 11:14:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:29.042 11:14:35 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:29.042 11:14:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:29.042 11:14:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:29.042 11:14:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:29.042 11:14:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:29.042 11:14:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:29.042 11:14:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:29.043 11:14:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:29.043 11:14:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:29.301 11:14:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:29.301 "name": "raid_bdev1", 00:32:29.301 "uuid": "70097ef2-2d58-4bac-b1bd-f5fba8196f8d", 00:32:29.301 "strip_size_kb": 0, 00:32:29.301 "state": "configuring", 00:32:29.301 "raid_level": "raid1", 00:32:29.301 "superblock": true, 00:32:29.301 "num_base_bdevs": 2, 00:32:29.301 "num_base_bdevs_discovered": 1, 00:32:29.301 "num_base_bdevs_operational": 2, 00:32:29.301 "base_bdevs_list": [ 00:32:29.301 { 00:32:29.301 "name": "pt1", 00:32:29.301 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:29.301 "is_configured": true, 00:32:29.301 "data_offset": 256, 00:32:29.301 "data_size": 7936 00:32:29.301 }, 00:32:29.301 { 00:32:29.301 "name": null, 00:32:29.301 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:29.301 
"is_configured": false, 00:32:29.301 "data_offset": 256, 00:32:29.301 "data_size": 7936 00:32:29.301 } 00:32:29.301 ] 00:32:29.301 }' 00:32:29.301 11:14:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:29.301 11:14:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:29.868 11:14:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:32:29.868 11:14:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:32:29.868 11:14:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:32:29.868 11:14:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:29.868 [2024-07-25 11:14:36.934022] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:29.868 [2024-07-25 11:14:36.934088] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:29.868 [2024-07-25 11:14:36.934114] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:32:29.868 [2024-07-25 11:14:36.934148] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:29.868 [2024-07-25 11:14:36.934456] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:29.868 [2024-07-25 11:14:36.934479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:29.868 [2024-07-25 11:14:36.934535] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:32:29.868 [2024-07-25 11:14:36.934563] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:29.868 [2024-07-25 11:14:36.934737] 
bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:32:29.868 [2024-07-25 11:14:36.934756] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:29.868 [2024-07-25 11:14:36.934836] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:32:29.868 [2024-07-25 11:14:36.935001] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:32:29.868 [2024-07-25 11:14:36.935015] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:32:29.868 [2024-07-25 11:14:36.935162] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:29.868 pt2 00:32:29.868 11:14:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:32:29.868 11:14:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:32:29.868 11:14:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:29.868 11:14:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:29.868 11:14:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:29.868 11:14:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:29.868 11:14:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:29.868 11:14:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:29.868 11:14:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:29.868 11:14:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:29.868 11:14:36 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:29.868 11:14:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:29.868 11:14:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:29.868 11:14:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:30.127 11:14:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:30.127 "name": "raid_bdev1", 00:32:30.127 "uuid": "70097ef2-2d58-4bac-b1bd-f5fba8196f8d", 00:32:30.127 "strip_size_kb": 0, 00:32:30.127 "state": "online", 00:32:30.127 "raid_level": "raid1", 00:32:30.127 "superblock": true, 00:32:30.127 "num_base_bdevs": 2, 00:32:30.127 "num_base_bdevs_discovered": 2, 00:32:30.127 "num_base_bdevs_operational": 2, 00:32:30.127 "base_bdevs_list": [ 00:32:30.127 { 00:32:30.127 "name": "pt1", 00:32:30.127 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:30.127 "is_configured": true, 00:32:30.127 "data_offset": 256, 00:32:30.127 "data_size": 7936 00:32:30.127 }, 00:32:30.127 { 00:32:30.127 "name": "pt2", 00:32:30.127 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:30.127 "is_configured": true, 00:32:30.127 "data_offset": 256, 00:32:30.127 "data_size": 7936 00:32:30.127 } 00:32:30.127 ] 00:32:30.127 }' 00:32:30.127 11:14:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:30.127 11:14:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:30.693 11:14:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:32:30.693 11:14:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local 
raid_bdev_name=raid_bdev1 00:32:30.693 11:14:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:30.693 11:14:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:32:30.693 11:14:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:32:30.693 11:14:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:32:30.693 11:14:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:30.693 11:14:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:32:30.951 [2024-07-25 11:14:37.945340] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:30.951 11:14:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:30.951 "name": "raid_bdev1", 00:32:30.951 "aliases": [ 00:32:30.951 "70097ef2-2d58-4bac-b1bd-f5fba8196f8d" 00:32:30.951 ], 00:32:30.951 "product_name": "Raid Volume", 00:32:30.951 "block_size": 4096, 00:32:30.952 "num_blocks": 7936, 00:32:30.952 "uuid": "70097ef2-2d58-4bac-b1bd-f5fba8196f8d", 00:32:30.952 "md_size": 32, 00:32:30.952 "md_interleave": false, 00:32:30.952 "dif_type": 0, 00:32:30.952 "assigned_rate_limits": { 00:32:30.952 "rw_ios_per_sec": 0, 00:32:30.952 "rw_mbytes_per_sec": 0, 00:32:30.952 "r_mbytes_per_sec": 0, 00:32:30.952 "w_mbytes_per_sec": 0 00:32:30.952 }, 00:32:30.952 "claimed": false, 00:32:30.952 "zoned": false, 00:32:30.952 "supported_io_types": { 00:32:30.952 "read": true, 00:32:30.952 "write": true, 00:32:30.952 "unmap": false, 00:32:30.952 "flush": false, 00:32:30.952 "reset": true, 00:32:30.952 "nvme_admin": false, 00:32:30.952 "nvme_io": false, 00:32:30.952 "nvme_io_md": false, 00:32:30.952 "write_zeroes": true, 
00:32:30.952 "zcopy": false, 00:32:30.952 "get_zone_info": false, 00:32:30.952 "zone_management": false, 00:32:30.952 "zone_append": false, 00:32:30.952 "compare": false, 00:32:30.952 "compare_and_write": false, 00:32:30.952 "abort": false, 00:32:30.952 "seek_hole": false, 00:32:30.952 "seek_data": false, 00:32:30.952 "copy": false, 00:32:30.952 "nvme_iov_md": false 00:32:30.952 }, 00:32:30.952 "memory_domains": [ 00:32:30.952 { 00:32:30.952 "dma_device_id": "system", 00:32:30.952 "dma_device_type": 1 00:32:30.952 }, 00:32:30.952 { 00:32:30.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:30.952 "dma_device_type": 2 00:32:30.952 }, 00:32:30.952 { 00:32:30.952 "dma_device_id": "system", 00:32:30.952 "dma_device_type": 1 00:32:30.952 }, 00:32:30.952 { 00:32:30.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:30.952 "dma_device_type": 2 00:32:30.952 } 00:32:30.952 ], 00:32:30.952 "driver_specific": { 00:32:30.952 "raid": { 00:32:30.952 "uuid": "70097ef2-2d58-4bac-b1bd-f5fba8196f8d", 00:32:30.952 "strip_size_kb": 0, 00:32:30.952 "state": "online", 00:32:30.952 "raid_level": "raid1", 00:32:30.952 "superblock": true, 00:32:30.952 "num_base_bdevs": 2, 00:32:30.952 "num_base_bdevs_discovered": 2, 00:32:30.952 "num_base_bdevs_operational": 2, 00:32:30.952 "base_bdevs_list": [ 00:32:30.952 { 00:32:30.952 "name": "pt1", 00:32:30.952 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:30.952 "is_configured": true, 00:32:30.952 "data_offset": 256, 00:32:30.952 "data_size": 7936 00:32:30.952 }, 00:32:30.952 { 00:32:30.952 "name": "pt2", 00:32:30.952 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:30.952 "is_configured": true, 00:32:30.952 "data_offset": 256, 00:32:30.952 "data_size": 7936 00:32:30.952 } 00:32:30.952 ] 00:32:30.952 } 00:32:30.952 } 00:32:30.952 }' 00:32:30.952 11:14:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:30.952 11:14:38 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:32:30.952 pt2' 00:32:30.952 11:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:30.952 11:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:32:30.952 11:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:31.210 11:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:31.210 "name": "pt1", 00:32:31.210 "aliases": [ 00:32:31.210 "00000000-0000-0000-0000-000000000001" 00:32:31.210 ], 00:32:31.210 "product_name": "passthru", 00:32:31.210 "block_size": 4096, 00:32:31.210 "num_blocks": 8192, 00:32:31.210 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:31.210 "md_size": 32, 00:32:31.210 "md_interleave": false, 00:32:31.210 "dif_type": 0, 00:32:31.210 "assigned_rate_limits": { 00:32:31.210 "rw_ios_per_sec": 0, 00:32:31.210 "rw_mbytes_per_sec": 0, 00:32:31.210 "r_mbytes_per_sec": 0, 00:32:31.210 "w_mbytes_per_sec": 0 00:32:31.210 }, 00:32:31.210 "claimed": true, 00:32:31.210 "claim_type": "exclusive_write", 00:32:31.210 "zoned": false, 00:32:31.210 "supported_io_types": { 00:32:31.210 "read": true, 00:32:31.210 "write": true, 00:32:31.210 "unmap": true, 00:32:31.210 "flush": true, 00:32:31.210 "reset": true, 00:32:31.210 "nvme_admin": false, 00:32:31.210 "nvme_io": false, 00:32:31.210 "nvme_io_md": false, 00:32:31.210 "write_zeroes": true, 00:32:31.210 "zcopy": true, 00:32:31.210 "get_zone_info": false, 00:32:31.210 "zone_management": false, 00:32:31.210 "zone_append": false, 00:32:31.210 "compare": false, 00:32:31.210 "compare_and_write": false, 00:32:31.210 "abort": true, 00:32:31.210 "seek_hole": false, 00:32:31.210 "seek_data": false, 00:32:31.210 "copy": true, 00:32:31.210 
"nvme_iov_md": false 00:32:31.210 }, 00:32:31.210 "memory_domains": [ 00:32:31.210 { 00:32:31.210 "dma_device_id": "system", 00:32:31.210 "dma_device_type": 1 00:32:31.210 }, 00:32:31.210 { 00:32:31.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:31.210 "dma_device_type": 2 00:32:31.210 } 00:32:31.210 ], 00:32:31.210 "driver_specific": { 00:32:31.211 "passthru": { 00:32:31.211 "name": "pt1", 00:32:31.211 "base_bdev_name": "malloc1" 00:32:31.211 } 00:32:31.211 } 00:32:31.211 }' 00:32:31.211 11:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:31.211 11:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:31.211 11:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:32:31.469 11:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:31.469 11:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:31.469 11:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:32:31.469 11:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:31.469 11:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:31.469 11:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:32:31.469 11:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:31.469 11:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:31.469 11:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:32:31.469 11:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:31.727 11:14:38 bdev_raid.raid_superblock_test_md_separate 
-- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:32:31.727 11:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:31.727 11:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:31.727 "name": "pt2", 00:32:31.727 "aliases": [ 00:32:31.727 "00000000-0000-0000-0000-000000000002" 00:32:31.727 ], 00:32:31.727 "product_name": "passthru", 00:32:31.727 "block_size": 4096, 00:32:31.727 "num_blocks": 8192, 00:32:31.727 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:31.727 "md_size": 32, 00:32:31.727 "md_interleave": false, 00:32:31.727 "dif_type": 0, 00:32:31.727 "assigned_rate_limits": { 00:32:31.727 "rw_ios_per_sec": 0, 00:32:31.727 "rw_mbytes_per_sec": 0, 00:32:31.727 "r_mbytes_per_sec": 0, 00:32:31.727 "w_mbytes_per_sec": 0 00:32:31.727 }, 00:32:31.727 "claimed": true, 00:32:31.727 "claim_type": "exclusive_write", 00:32:31.727 "zoned": false, 00:32:31.727 "supported_io_types": { 00:32:31.727 "read": true, 00:32:31.727 "write": true, 00:32:31.727 "unmap": true, 00:32:31.727 "flush": true, 00:32:31.727 "reset": true, 00:32:31.727 "nvme_admin": false, 00:32:31.727 "nvme_io": false, 00:32:31.727 "nvme_io_md": false, 00:32:31.727 "write_zeroes": true, 00:32:31.727 "zcopy": true, 00:32:31.727 "get_zone_info": false, 00:32:31.727 "zone_management": false, 00:32:31.727 "zone_append": false, 00:32:31.727 "compare": false, 00:32:31.727 "compare_and_write": false, 00:32:31.727 "abort": true, 00:32:31.727 "seek_hole": false, 00:32:31.727 "seek_data": false, 00:32:31.727 "copy": true, 00:32:31.727 "nvme_iov_md": false 00:32:31.727 }, 00:32:31.727 "memory_domains": [ 00:32:31.727 { 00:32:31.727 "dma_device_id": "system", 00:32:31.727 "dma_device_type": 1 00:32:31.727 }, 00:32:31.727 { 00:32:31.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:31.727 "dma_device_type": 2 00:32:31.727 } 
00:32:31.727 ], 00:32:31.727 "driver_specific": { 00:32:31.727 "passthru": { 00:32:31.727 "name": "pt2", 00:32:31.727 "base_bdev_name": "malloc2" 00:32:31.727 } 00:32:31.727 } 00:32:31.727 }' 00:32:31.727 11:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:31.985 11:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:31.985 11:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:32:31.985 11:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:31.985 11:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:31.985 11:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:32:31.985 11:14:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:31.985 11:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:31.985 11:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:32:31.985 11:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:32.244 11:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:32.244 11:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:32:32.244 11:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:32.244 11:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:32:32.502 [2024-07-25 11:14:39.365210] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:32.502 11:14:39 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@502 -- # '[' 70097ef2-2d58-4bac-b1bd-f5fba8196f8d '!=' 70097ef2-2d58-4bac-b1bd-f5fba8196f8d ']' 00:32:32.502 11:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:32:32.502 11:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:32:32.502 11:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:32:32.502 11:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@508 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:32:32.502 [2024-07-25 11:14:39.593476] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:32:32.502 11:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:32.502 11:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:32.502 11:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:32.502 11:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:32.502 11:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:32.502 11:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:32.502 11:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:32.502 11:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:32.502 11:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:32.502 11:14:39 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:32:32.502 11:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:32.502 11:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:32.760 11:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:32.760 "name": "raid_bdev1", 00:32:32.760 "uuid": "70097ef2-2d58-4bac-b1bd-f5fba8196f8d", 00:32:32.760 "strip_size_kb": 0, 00:32:32.760 "state": "online", 00:32:32.760 "raid_level": "raid1", 00:32:32.760 "superblock": true, 00:32:32.760 "num_base_bdevs": 2, 00:32:32.760 "num_base_bdevs_discovered": 1, 00:32:32.760 "num_base_bdevs_operational": 1, 00:32:32.760 "base_bdevs_list": [ 00:32:32.760 { 00:32:32.760 "name": null, 00:32:32.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:32.760 "is_configured": false, 00:32:32.760 "data_offset": 256, 00:32:32.760 "data_size": 7936 00:32:32.760 }, 00:32:32.760 { 00:32:32.760 "name": "pt2", 00:32:32.760 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:32.760 "is_configured": true, 00:32:32.760 "data_offset": 256, 00:32:32.760 "data_size": 7936 00:32:32.760 } 00:32:32.760 ] 00:32:32.760 }' 00:32:32.760 11:14:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:32.760 11:14:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:33.326 11:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@514 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:33.584 [2024-07-25 11:14:40.556047] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:33.584 [2024-07-25 11:14:40.556080] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:32:33.584 [2024-07-25 11:14:40.556174] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:33.584 [2024-07-25 11:14:40.556232] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:33.584 [2024-07-25 11:14:40.556251] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:32:33.584 11:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@515 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:33.584 11:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:32:33.842 11:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:32:33.842 11:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:32:33.842 11:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:32:33.842 11:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:32:33.842 11:14:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:32:34.101 11:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:32:34.101 11:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:32:34.101 11:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:32:34.101 11:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:32:34.101 11:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@534 -- # i=1 00:32:34.101 
11:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@535 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:34.101 [2024-07-25 11:14:41.185721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:34.101 [2024-07-25 11:14:41.185785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:34.101 [2024-07-25 11:14:41.185808] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:32:34.101 [2024-07-25 11:14:41.185825] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:34.101 [2024-07-25 11:14:41.188287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:34.101 [2024-07-25 11:14:41.188321] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:34.101 [2024-07-25 11:14:41.188376] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:32:34.101 [2024-07-25 11:14:41.188436] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:34.101 [2024-07-25 11:14:41.188591] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:32:34.101 [2024-07-25 11:14:41.188608] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:34.101 [2024-07-25 11:14:41.188694] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:32:34.101 [2024-07-25 11:14:41.188902] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:32:34.101 [2024-07-25 11:14:41.188916] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:32:34.101 [2024-07-25 11:14:41.189063] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:32:34.101 pt2 00:32:34.101 11:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:34.101 11:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:34.101 11:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:34.101 11:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:34.101 11:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:34.101 11:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:34.101 11:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:34.101 11:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:34.101 11:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:34.101 11:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:34.101 11:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:34.101 11:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:34.359 11:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:34.359 "name": "raid_bdev1", 00:32:34.359 "uuid": "70097ef2-2d58-4bac-b1bd-f5fba8196f8d", 00:32:34.359 "strip_size_kb": 0, 00:32:34.359 "state": "online", 00:32:34.359 "raid_level": "raid1", 00:32:34.359 "superblock": true, 00:32:34.359 "num_base_bdevs": 2, 00:32:34.359 
"num_base_bdevs_discovered": 1, 00:32:34.359 "num_base_bdevs_operational": 1, 00:32:34.359 "base_bdevs_list": [ 00:32:34.359 { 00:32:34.359 "name": null, 00:32:34.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:34.359 "is_configured": false, 00:32:34.359 "data_offset": 256, 00:32:34.359 "data_size": 7936 00:32:34.359 }, 00:32:34.359 { 00:32:34.359 "name": "pt2", 00:32:34.359 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:34.359 "is_configured": true, 00:32:34.359 "data_offset": 256, 00:32:34.359 "data_size": 7936 00:32:34.359 } 00:32:34.359 ] 00:32:34.359 }' 00:32:34.359 11:14:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:34.359 11:14:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:34.925 11:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@541 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:35.183 [2024-07-25 11:14:42.228581] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:35.183 [2024-07-25 11:14:42.228616] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:35.183 [2024-07-25 11:14:42.228689] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:35.183 [2024-07-25 11:14:42.228752] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:35.183 [2024-07-25 11:14:42.228768] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:32:35.183 11:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:35.183 11:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # jq -r 
'.[]' 00:32:35.441 11:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:32:35.441 11:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:32:35.441 11:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@547 -- # '[' 2 -gt 2 ']' 00:32:35.441 11:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:35.699 [2024-07-25 11:14:42.689783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:35.699 [2024-07-25 11:14:42.689847] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:35.699 [2024-07-25 11:14:42.689874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041d80 00:32:35.699 [2024-07-25 11:14:42.689889] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:35.699 [2024-07-25 11:14:42.692465] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:35.699 [2024-07-25 11:14:42.692499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:35.700 [2024-07-25 11:14:42.692565] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:32:35.700 [2024-07-25 11:14:42.692627] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:35.700 [2024-07-25 11:14:42.692836] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:32:35.700 [2024-07-25 11:14:42.692854] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:35.700 [2024-07-25 11:14:42.692880] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, 
state configuring 00:32:35.700 [2024-07-25 11:14:42.692970] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:35.700 [2024-07-25 11:14:42.693050] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:32:35.700 [2024-07-25 11:14:42.693064] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:35.700 [2024-07-25 11:14:42.693149] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:32:35.700 [2024-07-25 11:14:42.693339] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:32:35.700 [2024-07-25 11:14:42.693356] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:32:35.700 [2024-07-25 11:14:42.693501] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:35.700 pt1 00:32:35.700 11:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # '[' 2 -gt 2 ']' 00:32:35.700 11:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:35.700 11:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:35.700 11:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:35.700 11:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:35.700 11:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:35.700 11:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:35.700 11:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:35.700 11:14:42 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:35.700 11:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:35.700 11:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:35.700 11:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:35.700 11:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:35.959 11:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:35.959 "name": "raid_bdev1", 00:32:35.959 "uuid": "70097ef2-2d58-4bac-b1bd-f5fba8196f8d", 00:32:35.959 "strip_size_kb": 0, 00:32:35.959 "state": "online", 00:32:35.959 "raid_level": "raid1", 00:32:35.959 "superblock": true, 00:32:35.959 "num_base_bdevs": 2, 00:32:35.959 "num_base_bdevs_discovered": 1, 00:32:35.959 "num_base_bdevs_operational": 1, 00:32:35.959 "base_bdevs_list": [ 00:32:35.959 { 00:32:35.959 "name": null, 00:32:35.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:35.959 "is_configured": false, 00:32:35.959 "data_offset": 256, 00:32:35.959 "data_size": 7936 00:32:35.959 }, 00:32:35.959 { 00:32:35.959 "name": "pt2", 00:32:35.959 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:35.959 "is_configured": true, 00:32:35.959 "data_offset": 256, 00:32:35.959 "data_size": 7936 00:32:35.959 } 00:32:35.959 ] 00:32:35.959 }' 00:32:35.959 11:14:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:35.959 11:14:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:36.525 11:14:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@570 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:32:36.525 11:14:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:32:36.783 11:14:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:32:36.783 11:14:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@573 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:36.783 11:14:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:32:37.042 [2024-07-25 11:14:43.949496] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:37.042 11:14:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@573 -- # '[' 70097ef2-2d58-4bac-b1bd-f5fba8196f8d '!=' 70097ef2-2d58-4bac-b1bd-f5fba8196f8d ']' 00:32:37.042 11:14:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@578 -- # killprocess 3750278 00:32:37.042 11:14:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 3750278 ']' 00:32:37.042 11:14:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 3750278 00:32:37.042 11:14:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:32:37.042 11:14:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:37.042 11:14:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3750278 00:32:37.042 11:14:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:37.042 11:14:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:37.042 11:14:44 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 3750278' 00:32:37.042 killing process with pid 3750278 00:32:37.042 11:14:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 3750278 00:32:37.042 [2024-07-25 11:14:44.024246] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:37.042 [2024-07-25 11:14:44.024341] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:37.042 [2024-07-25 11:14:44.024398] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:37.042 11:14:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 3750278 00:32:37.042 [2024-07-25 11:14:44.024418] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:32:37.308 [2024-07-25 11:14:44.336350] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:39.212 11:14:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@580 -- # return 0 00:32:39.212 00:32:39.212 real 0m16.723s 00:32:39.212 user 0m28.458s 00:32:39.212 sys 0m2.920s 00:32:39.212 11:14:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:39.212 11:14:46 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:39.212 ************************************ 00:32:39.212 END TEST raid_superblock_test_md_separate 00:32:39.212 ************************************ 00:32:39.212 11:14:46 bdev_raid -- bdev/bdev_raid.sh@987 -- # '[' true = true ']' 00:32:39.212 11:14:46 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:32:39.212 11:14:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:32:39.212 11:14:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:39.212 11:14:46 bdev_raid 
-- common/autotest_common.sh@10 -- # set +x 00:32:39.212 ************************************ 00:32:39.212 START TEST raid_rebuild_test_sb_md_separate 00:32:39.212 ************************************ 00:32:39.212 11:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:32:39.212 11:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:32:39.212 11:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:32:39.212 11:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:32:39.212 11:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:32:39.212 11:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@588 -- # local verify=true 00:32:39.212 11:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:32:39.212 11:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:32:39.212 11:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:32:39.212 11:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:32:39.212 11:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:32:39.212 11:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:32:39.212 11:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:32:39.212 11:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:32:39.212 11:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:32:39.212 11:14:46 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:32:39.212 11:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:32:39.212 11:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # local strip_size 00:32:39.212 11:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # local create_arg 00:32:39.212 11:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:32:39.212 11:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@594 -- # local data_offset 00:32:39.212 11:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:32:39.212 11:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:32:39.212 11:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:32:39.212 11:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:32:39.212 11:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # raid_pid=3753240 00:32:39.213 11:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # waitforlisten 3753240 /var/tmp/spdk-raid.sock 00:32:39.213 11:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:32:39.213 11:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 3753240 ']' 00:32:39.213 11:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:39.213 11:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:32:39.213 11:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:39.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:39.213 11:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:39.213 11:14:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:39.213 [2024-07-25 11:14:46.234834] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:32:39.213 [2024-07-25 11:14:46.234957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3753240 ] 00:32:39.213 I/O size of 3145728 is greater than zero copy threshold (65536). 00:32:39.213 Zero copy mechanism will not be used. 
00:32:39.498 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.498 EAL: Requested device 0000:3d:01.0 cannot be used 00:32:39.498 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.498 EAL: Requested device 0000:3d:01.1 cannot be used 00:32:39.498 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.498 EAL: Requested device 0000:3d:01.2 cannot be used 00:32:39.498 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.498 EAL: Requested device 0000:3d:01.3 cannot be used 00:32:39.498 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.498 EAL: Requested device 0000:3d:01.4 cannot be used 00:32:39.498 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.499 EAL: Requested device 0000:3d:01.5 cannot be used 00:32:39.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.499 EAL: Requested device 0000:3d:01.6 cannot be used 00:32:39.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.499 EAL: Requested device 0000:3d:01.7 cannot be used 00:32:39.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.499 EAL: Requested device 0000:3d:02.0 cannot be used 00:32:39.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.499 EAL: Requested device 0000:3d:02.1 cannot be used 00:32:39.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.499 EAL: Requested device 0000:3d:02.2 cannot be used 00:32:39.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.499 EAL: Requested device 0000:3d:02.3 cannot be used 00:32:39.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.499 EAL: Requested device 0000:3d:02.4 cannot be used 00:32:39.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.499 EAL: Requested device 0000:3d:02.5 cannot be used 00:32:39.499 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.499 EAL: Requested device 0000:3d:02.6 cannot be used 00:32:39.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.499 EAL: Requested device 0000:3d:02.7 cannot be used 00:32:39.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.499 EAL: Requested device 0000:3f:01.0 cannot be used 00:32:39.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.499 EAL: Requested device 0000:3f:01.1 cannot be used 00:32:39.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.499 EAL: Requested device 0000:3f:01.2 cannot be used 00:32:39.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.499 EAL: Requested device 0000:3f:01.3 cannot be used 00:32:39.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.499 EAL: Requested device 0000:3f:01.4 cannot be used 00:32:39.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.499 EAL: Requested device 0000:3f:01.5 cannot be used 00:32:39.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.499 EAL: Requested device 0000:3f:01.6 cannot be used 00:32:39.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.499 EAL: Requested device 0000:3f:01.7 cannot be used 00:32:39.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.499 EAL: Requested device 0000:3f:02.0 cannot be used 00:32:39.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.499 EAL: Requested device 0000:3f:02.1 cannot be used 00:32:39.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.499 EAL: Requested device 0000:3f:02.2 cannot be used 00:32:39.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.499 EAL: Requested device 0000:3f:02.3 cannot be used 00:32:39.499 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.499 EAL: Requested device 0000:3f:02.4 cannot be used 00:32:39.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.499 EAL: Requested device 0000:3f:02.5 cannot be used 00:32:39.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.499 EAL: Requested device 0000:3f:02.6 cannot be used 00:32:39.499 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:32:39.499 EAL: Requested device 0000:3f:02.7 cannot be used 00:32:39.499 [2024-07-25 11:14:46.464358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:39.758 [2024-07-25 11:14:46.750247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:40.016 [2024-07-25 11:14:47.095551] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:40.016 [2024-07-25 11:14:47.095586] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:40.274 11:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:40.274 11:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:32:40.274 11:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:32:40.274 11:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:32:40.531 BaseBdev1_malloc 00:32:40.531 11:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:40.789 [2024-07-25 11:14:47.767705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:40.789 
[2024-07-25 11:14:47.767774] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:40.789 [2024-07-25 11:14:47.767805] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:32:40.789 [2024-07-25 11:14:47.767825] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:40.789 [2024-07-25 11:14:47.770342] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:40.789 [2024-07-25 11:14:47.770379] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:40.789 BaseBdev1 00:32:40.789 11:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:32:40.789 11:14:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:32:41.047 BaseBdev2_malloc 00:32:41.047 11:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:32:41.306 [2024-07-25 11:14:48.277419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:32:41.306 [2024-07-25 11:14:48.277481] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:41.306 [2024-07-25 11:14:48.277508] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:32:41.306 [2024-07-25 11:14:48.277529] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:41.306 [2024-07-25 11:14:48.280030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:41.306 [2024-07-25 11:14:48.280066] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 
00:32:41.306 BaseBdev2 00:32:41.306 11:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:32:41.564 spare_malloc 00:32:41.564 11:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:32:41.822 spare_delay 00:32:41.822 11:14:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:42.079 [2024-07-25 11:14:49.012424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:42.079 [2024-07-25 11:14:49.012483] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:42.079 [2024-07-25 11:14:49.012514] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:32:42.079 [2024-07-25 11:14:49.012533] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:42.079 [2024-07-25 11:14:49.015023] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:42.079 [2024-07-25 11:14:49.015060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:42.079 spare 00:32:42.079 11:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:32:42.337 [2024-07-25 11:14:49.237072] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:42.337 [2024-07-25 11:14:49.239421] 
bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:42.337 [2024-07-25 11:14:49.239639] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:32:42.337 [2024-07-25 11:14:49.239661] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:42.337 [2024-07-25 11:14:49.239769] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:32:42.337 [2024-07-25 11:14:49.239978] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:32:42.337 [2024-07-25 11:14:49.239995] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:32:42.338 [2024-07-25 11:14:49.240178] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:42.338 11:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:42.338 11:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:42.338 11:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:42.338 11:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:42.338 11:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:42.338 11:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:42.338 11:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:42.338 11:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:42.338 11:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:42.338 11:14:49 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:42.338 11:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:42.338 11:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:42.596 11:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:42.596 "name": "raid_bdev1", 00:32:42.596 "uuid": "97c8891e-2ed4-4a72-b86e-1942973354cc", 00:32:42.596 "strip_size_kb": 0, 00:32:42.596 "state": "online", 00:32:42.596 "raid_level": "raid1", 00:32:42.596 "superblock": true, 00:32:42.596 "num_base_bdevs": 2, 00:32:42.596 "num_base_bdevs_discovered": 2, 00:32:42.596 "num_base_bdevs_operational": 2, 00:32:42.596 "base_bdevs_list": [ 00:32:42.596 { 00:32:42.596 "name": "BaseBdev1", 00:32:42.596 "uuid": "d2e28829-bd6d-58f0-a1d3-4c073f938534", 00:32:42.596 "is_configured": true, 00:32:42.596 "data_offset": 256, 00:32:42.596 "data_size": 7936 00:32:42.596 }, 00:32:42.596 { 00:32:42.596 "name": "BaseBdev2", 00:32:42.596 "uuid": "bebeb9c1-de09-5fa8-ac37-ac78e3b0a65b", 00:32:42.596 "is_configured": true, 00:32:42.596 "data_offset": 256, 00:32:42.596 "data_size": 7936 00:32:42.596 } 00:32:42.596 ] 00:32:42.596 }' 00:32:42.596 11:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:42.596 11:14:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:43.161 11:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:43.161 11:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:32:43.161 [2024-07-25 
11:14:50.268199] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:43.419 11:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=7936 00:32:43.419 11:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:43.419 11:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:32:43.419 11:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # data_offset=256 00:32:43.419 11:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:32:43.419 11:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@639 -- # '[' true = true ']' 00:32:43.419 11:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # local write_unit_size 00:32:43.419 11:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:32:43.419 11:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:43.419 11:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:32:43.419 11:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:43.419 11:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:32:43.419 11:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:43.419 11:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:32:43.419 11:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:43.419 11:14:50 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:43.419 11:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:32:43.677 [2024-07-25 11:14:50.725113] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:32:43.677 /dev/nbd0 00:32:43.677 11:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:43.677 11:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:43.677 11:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:32:43.677 11:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:32:43.677 11:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:43.677 11:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:43.677 11:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:32:43.677 11:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:32:43.677 11:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:43.677 11:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:43.677 11:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:43.677 1+0 records in 00:32:43.677 1+0 records out 00:32:43.677 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257352 s, 15.9 MB/s 00:32:43.677 11:14:50 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:43.677 11:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:32:43.677 11:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:43.677 11:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:43.677 11:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:32:43.677 11:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:43.677 11:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:43.677 11:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid5f ']' 00:32:43.677 11:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@648 -- # write_unit_size=1 00:32:43.677 11:14:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:32:44.611 7936+0 records in 00:32:44.611 7936+0 records out 00:32:44.611 32505856 bytes (33 MB, 31 MiB) copied, 0.808451 s, 40.2 MB/s 00:32:44.611 11:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@651 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:32:44.611 11:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:44.611 11:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:44.611 11:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:44.611 11:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@51 -- # local i 00:32:44.611 11:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:44.611 11:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:32:44.869 11:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:44.869 [2024-07-25 11:14:51.843238] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:44.869 11:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:44.869 11:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:44.869 11:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:44.869 11:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:44.869 11:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:44.869 11:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:32:44.869 11:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:32:44.869 11:14:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:32:45.127 [2024-07-25 11:14:52.063946] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:45.127 11:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:45.127 11:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:45.127 11:14:52 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:45.127 11:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:45.127 11:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:45.127 11:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:45.127 11:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:45.127 11:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:45.127 11:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:45.127 11:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:45.127 11:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:45.127 11:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:45.385 11:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:45.385 "name": "raid_bdev1", 00:32:45.385 "uuid": "97c8891e-2ed4-4a72-b86e-1942973354cc", 00:32:45.385 "strip_size_kb": 0, 00:32:45.385 "state": "online", 00:32:45.385 "raid_level": "raid1", 00:32:45.385 "superblock": true, 00:32:45.385 "num_base_bdevs": 2, 00:32:45.385 "num_base_bdevs_discovered": 1, 00:32:45.385 "num_base_bdevs_operational": 1, 00:32:45.385 "base_bdevs_list": [ 00:32:45.385 { 00:32:45.385 "name": null, 00:32:45.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:45.385 "is_configured": false, 00:32:45.385 "data_offset": 256, 00:32:45.385 "data_size": 7936 00:32:45.385 }, 
00:32:45.385 { 00:32:45.385 "name": "BaseBdev2", 00:32:45.385 "uuid": "bebeb9c1-de09-5fa8-ac37-ac78e3b0a65b", 00:32:45.385 "is_configured": true, 00:32:45.385 "data_offset": 256, 00:32:45.385 "data_size": 7936 00:32:45.385 } 00:32:45.385 ] 00:32:45.385 }' 00:32:45.385 11:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:45.385 11:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:45.951 11:14:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:45.951 [2024-07-25 11:14:53.026629] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:45.951 [2024-07-25 11:14:53.049019] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001a4410 00:32:45.951 [2024-07-25 11:14:53.051317] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:45.951 11:14:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # sleep 1 00:32:47.325 11:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:47.325 11:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:47.325 11:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:47.325 11:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:47.325 11:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:47.325 11:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:32:47.325 11:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:47.325 11:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:47.325 "name": "raid_bdev1", 00:32:47.325 "uuid": "97c8891e-2ed4-4a72-b86e-1942973354cc", 00:32:47.325 "strip_size_kb": 0, 00:32:47.325 "state": "online", 00:32:47.325 "raid_level": "raid1", 00:32:47.325 "superblock": true, 00:32:47.325 "num_base_bdevs": 2, 00:32:47.325 "num_base_bdevs_discovered": 2, 00:32:47.325 "num_base_bdevs_operational": 2, 00:32:47.325 "process": { 00:32:47.325 "type": "rebuild", 00:32:47.325 "target": "spare", 00:32:47.325 "progress": { 00:32:47.325 "blocks": 2816, 00:32:47.325 "percent": 35 00:32:47.325 } 00:32:47.325 }, 00:32:47.325 "base_bdevs_list": [ 00:32:47.325 { 00:32:47.325 "name": "spare", 00:32:47.325 "uuid": "c4c3948b-84e8-577b-abd8-9566b330234d", 00:32:47.325 "is_configured": true, 00:32:47.325 "data_offset": 256, 00:32:47.325 "data_size": 7936 00:32:47.325 }, 00:32:47.325 { 00:32:47.325 "name": "BaseBdev2", 00:32:47.325 "uuid": "bebeb9c1-de09-5fa8-ac37-ac78e3b0a65b", 00:32:47.325 "is_configured": true, 00:32:47.325 "data_offset": 256, 00:32:47.325 "data_size": 7936 00:32:47.325 } 00:32:47.325 ] 00:32:47.325 }' 00:32:47.325 11:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:47.325 11:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:47.325 11:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:47.325 11:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:47.325 11:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@668 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:32:47.583 [2024-07-25 11:14:54.553068] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:47.583 [2024-07-25 11:14:54.563552] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:47.584 [2024-07-25 11:14:54.563615] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:47.584 [2024-07-25 11:14:54.563636] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:47.584 [2024-07-25 11:14:54.563651] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:47.584 11:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:47.584 11:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:47.584 11:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:47.584 11:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:47.584 11:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:47.584 11:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:47.584 11:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:47.584 11:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:47.584 11:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:47.584 11:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:47.584 11:14:54 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:47.584 11:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:47.842 11:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:47.842 "name": "raid_bdev1", 00:32:47.842 "uuid": "97c8891e-2ed4-4a72-b86e-1942973354cc", 00:32:47.842 "strip_size_kb": 0, 00:32:47.842 "state": "online", 00:32:47.842 "raid_level": "raid1", 00:32:47.842 "superblock": true, 00:32:47.842 "num_base_bdevs": 2, 00:32:47.842 "num_base_bdevs_discovered": 1, 00:32:47.842 "num_base_bdevs_operational": 1, 00:32:47.842 "base_bdevs_list": [ 00:32:47.842 { 00:32:47.842 "name": null, 00:32:47.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:47.842 "is_configured": false, 00:32:47.842 "data_offset": 256, 00:32:47.842 "data_size": 7936 00:32:47.842 }, 00:32:47.842 { 00:32:47.842 "name": "BaseBdev2", 00:32:47.842 "uuid": "bebeb9c1-de09-5fa8-ac37-ac78e3b0a65b", 00:32:47.842 "is_configured": true, 00:32:47.842 "data_offset": 256, 00:32:47.842 "data_size": 7936 00:32:47.842 } 00:32:47.842 ] 00:32:47.842 }' 00:32:47.842 11:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:47.842 11:14:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:48.408 11:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:48.408 11:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:48.408 11:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:48.408 11:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 
-- # local target=none 00:32:48.408 11:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:48.408 11:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:48.408 11:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:48.665 11:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:48.665 "name": "raid_bdev1", 00:32:48.665 "uuid": "97c8891e-2ed4-4a72-b86e-1942973354cc", 00:32:48.665 "strip_size_kb": 0, 00:32:48.665 "state": "online", 00:32:48.665 "raid_level": "raid1", 00:32:48.665 "superblock": true, 00:32:48.665 "num_base_bdevs": 2, 00:32:48.665 "num_base_bdevs_discovered": 1, 00:32:48.665 "num_base_bdevs_operational": 1, 00:32:48.666 "base_bdevs_list": [ 00:32:48.666 { 00:32:48.666 "name": null, 00:32:48.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:48.666 "is_configured": false, 00:32:48.666 "data_offset": 256, 00:32:48.666 "data_size": 7936 00:32:48.666 }, 00:32:48.666 { 00:32:48.666 "name": "BaseBdev2", 00:32:48.666 "uuid": "bebeb9c1-de09-5fa8-ac37-ac78e3b0a65b", 00:32:48.666 "is_configured": true, 00:32:48.666 "data_offset": 256, 00:32:48.666 "data_size": 7936 00:32:48.666 } 00:32:48.666 ] 00:32:48.666 }' 00:32:48.666 11:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:48.666 11:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:48.666 11:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:48.666 11:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:48.666 11:14:55 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:48.924 [2024-07-25 11:14:55.951655] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:48.924 [2024-07-25 11:14:55.974605] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001a44e0 00:32:48.924 [2024-07-25 11:14:55.976936] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:48.924 11:14:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@678 -- # sleep 1 00:32:50.299 11:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:50.299 11:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:50.299 11:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:50.299 11:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:50.299 11:14:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:50.299 11:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:50.299 11:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:50.299 11:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:50.299 "name": "raid_bdev1", 00:32:50.299 "uuid": "97c8891e-2ed4-4a72-b86e-1942973354cc", 00:32:50.299 "strip_size_kb": 0, 00:32:50.299 "state": "online", 00:32:50.299 "raid_level": "raid1", 00:32:50.299 
"superblock": true, 00:32:50.299 "num_base_bdevs": 2, 00:32:50.299 "num_base_bdevs_discovered": 2, 00:32:50.299 "num_base_bdevs_operational": 2, 00:32:50.299 "process": { 00:32:50.299 "type": "rebuild", 00:32:50.299 "target": "spare", 00:32:50.299 "progress": { 00:32:50.299 "blocks": 3072, 00:32:50.299 "percent": 38 00:32:50.299 } 00:32:50.299 }, 00:32:50.299 "base_bdevs_list": [ 00:32:50.299 { 00:32:50.299 "name": "spare", 00:32:50.299 "uuid": "c4c3948b-84e8-577b-abd8-9566b330234d", 00:32:50.299 "is_configured": true, 00:32:50.299 "data_offset": 256, 00:32:50.299 "data_size": 7936 00:32:50.299 }, 00:32:50.299 { 00:32:50.299 "name": "BaseBdev2", 00:32:50.299 "uuid": "bebeb9c1-de09-5fa8-ac37-ac78e3b0a65b", 00:32:50.299 "is_configured": true, 00:32:50.299 "data_offset": 256, 00:32:50.299 "data_size": 7936 00:32:50.299 } 00:32:50.299 ] 00:32:50.299 }' 00:32:50.299 11:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:50.299 11:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:50.299 11:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:50.299 11:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:50.299 11:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:32:50.299 11:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:32:50.299 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:32:50.299 11:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:32:50.299 11:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:32:50.299 11:14:57 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:32:50.299 11:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@721 -- # local timeout=1181 00:32:50.299 11:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:32:50.299 11:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:50.299 11:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:50.299 11:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:50.299 11:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:50.299 11:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:50.299 11:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:50.299 11:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:50.557 11:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:50.557 "name": "raid_bdev1", 00:32:50.557 "uuid": "97c8891e-2ed4-4a72-b86e-1942973354cc", 00:32:50.557 "strip_size_kb": 0, 00:32:50.557 "state": "online", 00:32:50.557 "raid_level": "raid1", 00:32:50.557 "superblock": true, 00:32:50.557 "num_base_bdevs": 2, 00:32:50.557 "num_base_bdevs_discovered": 2, 00:32:50.557 "num_base_bdevs_operational": 2, 00:32:50.557 "process": { 00:32:50.557 "type": "rebuild", 00:32:50.557 "target": "spare", 00:32:50.557 "progress": { 00:32:50.557 "blocks": 3840, 00:32:50.557 "percent": 48 00:32:50.557 } 00:32:50.557 }, 00:32:50.557 "base_bdevs_list": [ 
00:32:50.557 { 00:32:50.557 "name": "spare", 00:32:50.557 "uuid": "c4c3948b-84e8-577b-abd8-9566b330234d", 00:32:50.557 "is_configured": true, 00:32:50.557 "data_offset": 256, 00:32:50.557 "data_size": 7936 00:32:50.557 }, 00:32:50.557 { 00:32:50.557 "name": "BaseBdev2", 00:32:50.557 "uuid": "bebeb9c1-de09-5fa8-ac37-ac78e3b0a65b", 00:32:50.557 "is_configured": true, 00:32:50.557 "data_offset": 256, 00:32:50.557 "data_size": 7936 00:32:50.557 } 00:32:50.557 ] 00:32:50.557 }' 00:32:50.557 11:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:50.557 11:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:50.557 11:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:50.557 11:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:50.557 11:14:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@726 -- # sleep 1 00:32:51.930 11:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:32:51.930 11:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:51.930 11:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:51.930 11:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:51.930 11:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:51.930 11:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:51.930 11:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:51.930 11:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:51.930 11:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:51.930 "name": "raid_bdev1", 00:32:51.930 "uuid": "97c8891e-2ed4-4a72-b86e-1942973354cc", 00:32:51.930 "strip_size_kb": 0, 00:32:51.931 "state": "online", 00:32:51.931 "raid_level": "raid1", 00:32:51.931 "superblock": true, 00:32:51.931 "num_base_bdevs": 2, 00:32:51.931 "num_base_bdevs_discovered": 2, 00:32:51.931 "num_base_bdevs_operational": 2, 00:32:51.931 "process": { 00:32:51.931 "type": "rebuild", 00:32:51.931 "target": "spare", 00:32:51.931 "progress": { 00:32:51.931 "blocks": 7168, 00:32:51.931 "percent": 90 00:32:51.931 } 00:32:51.931 }, 00:32:51.931 "base_bdevs_list": [ 00:32:51.931 { 00:32:51.931 "name": "spare", 00:32:51.931 "uuid": "c4c3948b-84e8-577b-abd8-9566b330234d", 00:32:51.931 "is_configured": true, 00:32:51.931 "data_offset": 256, 00:32:51.931 "data_size": 7936 00:32:51.931 }, 00:32:51.931 { 00:32:51.931 "name": "BaseBdev2", 00:32:51.931 "uuid": "bebeb9c1-de09-5fa8-ac37-ac78e3b0a65b", 00:32:51.931 "is_configured": true, 00:32:51.931 "data_offset": 256, 00:32:51.931 "data_size": 7936 00:32:51.931 } 00:32:51.931 ] 00:32:51.931 }' 00:32:51.931 11:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:51.931 11:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:51.931 11:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:51.931 11:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:51.931 11:14:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@726 -- # sleep 1 00:32:52.189 
[2024-07-25 11:14:59.102160] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:32:52.189 [2024-07-25 11:14:59.102233] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:32:52.189 [2024-07-25 11:14:59.102330] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:53.123 11:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:32:53.123 11:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:53.123 11:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:53.123 11:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:53.123 11:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:53.123 11:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:53.123 11:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:53.123 11:14:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:53.123 11:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:53.123 "name": "raid_bdev1", 00:32:53.123 "uuid": "97c8891e-2ed4-4a72-b86e-1942973354cc", 00:32:53.123 "strip_size_kb": 0, 00:32:53.123 "state": "online", 00:32:53.123 "raid_level": "raid1", 00:32:53.123 "superblock": true, 00:32:53.123 "num_base_bdevs": 2, 00:32:53.123 "num_base_bdevs_discovered": 2, 00:32:53.123 "num_base_bdevs_operational": 2, 00:32:53.123 "base_bdevs_list": [ 00:32:53.123 { 00:32:53.123 "name": 
"spare", 00:32:53.123 "uuid": "c4c3948b-84e8-577b-abd8-9566b330234d", 00:32:53.123 "is_configured": true, 00:32:53.123 "data_offset": 256, 00:32:53.123 "data_size": 7936 00:32:53.123 }, 00:32:53.123 { 00:32:53.123 "name": "BaseBdev2", 00:32:53.123 "uuid": "bebeb9c1-de09-5fa8-ac37-ac78e3b0a65b", 00:32:53.123 "is_configured": true, 00:32:53.123 "data_offset": 256, 00:32:53.123 "data_size": 7936 00:32:53.123 } 00:32:53.123 ] 00:32:53.123 }' 00:32:53.123 11:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:53.382 11:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:32:53.382 11:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:53.382 11:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:32:53.382 11:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@724 -- # break 00:32:53.382 11:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:53.382 11:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:53.382 11:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:53.382 11:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:53.382 11:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:53.382 11:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:53.382 11:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:32:53.680 11:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:53.680 "name": "raid_bdev1", 00:32:53.680 "uuid": "97c8891e-2ed4-4a72-b86e-1942973354cc", 00:32:53.680 "strip_size_kb": 0, 00:32:53.680 "state": "online", 00:32:53.680 "raid_level": "raid1", 00:32:53.680 "superblock": true, 00:32:53.680 "num_base_bdevs": 2, 00:32:53.680 "num_base_bdevs_discovered": 2, 00:32:53.680 "num_base_bdevs_operational": 2, 00:32:53.680 "base_bdevs_list": [ 00:32:53.680 { 00:32:53.680 "name": "spare", 00:32:53.680 "uuid": "c4c3948b-84e8-577b-abd8-9566b330234d", 00:32:53.680 "is_configured": true, 00:32:53.680 "data_offset": 256, 00:32:53.680 "data_size": 7936 00:32:53.680 }, 00:32:53.680 { 00:32:53.680 "name": "BaseBdev2", 00:32:53.680 "uuid": "bebeb9c1-de09-5fa8-ac37-ac78e3b0a65b", 00:32:53.680 "is_configured": true, 00:32:53.680 "data_offset": 256, 00:32:53.680 "data_size": 7936 00:32:53.680 } 00:32:53.680 ] 00:32:53.680 }' 00:32:53.680 11:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:53.680 11:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:53.680 11:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:53.680 11:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:53.680 11:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:53.680 11:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:53.680 11:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:53.680 11:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:32:53.680 11:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:53.680 11:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:53.680 11:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:53.680 11:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:53.680 11:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:53.680 11:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:53.680 11:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:53.680 11:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:53.939 11:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:53.939 "name": "raid_bdev1", 00:32:53.939 "uuid": "97c8891e-2ed4-4a72-b86e-1942973354cc", 00:32:53.939 "strip_size_kb": 0, 00:32:53.939 "state": "online", 00:32:53.939 "raid_level": "raid1", 00:32:53.939 "superblock": true, 00:32:53.939 "num_base_bdevs": 2, 00:32:53.939 "num_base_bdevs_discovered": 2, 00:32:53.939 "num_base_bdevs_operational": 2, 00:32:53.939 "base_bdevs_list": [ 00:32:53.939 { 00:32:53.939 "name": "spare", 00:32:53.939 "uuid": "c4c3948b-84e8-577b-abd8-9566b330234d", 00:32:53.939 "is_configured": true, 00:32:53.939 "data_offset": 256, 00:32:53.939 "data_size": 7936 00:32:53.939 }, 00:32:53.939 { 00:32:53.939 "name": "BaseBdev2", 00:32:53.939 "uuid": "bebeb9c1-de09-5fa8-ac37-ac78e3b0a65b", 00:32:53.939 "is_configured": true, 00:32:53.939 "data_offset": 256, 00:32:53.939 
"data_size": 7936 00:32:53.939 } 00:32:53.939 ] 00:32:53.939 }' 00:32:53.939 11:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:53.939 11:15:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:54.506 11:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:54.506 [2024-07-25 11:15:01.620526] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:54.506 [2024-07-25 11:15:01.620562] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:54.506 [2024-07-25 11:15:01.620652] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:54.506 [2024-07-25 11:15:01.620734] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:54.506 [2024-07-25 11:15:01.620751] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:32:54.764 11:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:54.764 11:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@735 -- # jq length 00:32:54.764 11:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:32:54.764 11:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # '[' true = true ']' 00:32:54.764 11:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # '[' false = true ']' 00:32:54.764 11:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@752 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 
00:32:54.764 11:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:54.764 11:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:32:54.764 11:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:54.764 11:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:54.764 11:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:54.764 11:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:32:54.764 11:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:54.764 11:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:54.764 11:15:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:32:55.023 /dev/nbd0 00:32:55.023 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:55.023 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:55.023 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:32:55.023 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:32:55.023 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:55.023 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:55.023 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 
00:32:55.023 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:32:55.023 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:55.023 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:55.023 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:55.023 1+0 records in 00:32:55.023 1+0 records out 00:32:55.023 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255105 s, 16.1 MB/s 00:32:55.023 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:55.023 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:32:55.023 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:55.023 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:55.023 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:32:55.023 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:55.023 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:55.023 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:32:55.282 /dev/nbd1 00:32:55.282 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:32:55.282 11:15:02 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:32:55.282 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:32:55.282 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:32:55.282 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:32:55.282 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:32:55.282 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:32:55.282 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:32:55.282 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:32:55.282 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:32:55.282 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:55.282 1+0 records in 00:32:55.282 1+0 records out 00:32:55.282 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325062 s, 12.6 MB/s 00:32:55.282 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:55.282 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:32:55.282 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:32:55.540 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:32:55.540 11:15:02 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:32:55.540 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:55.541 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:55.541 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@753 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:32:55.541 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:32:55.541 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:55.541 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:55.541 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:55.541 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:32:55.541 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:55.541 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:32:55.798 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:55.798 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:55.798 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:55.798 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:55.798 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:55.798 11:15:02 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:55.798 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:32:55.798 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:32:55.798 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:55.798 11:15:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:32:56.057 11:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:56.057 11:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:56.057 11:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:56.057 11:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:56.057 11:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:56.057 11:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:56.057 11:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:32:56.057 11:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:32:56.057 11:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:32:56.057 11:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@760 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:32:56.317 11:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:56.576 [2024-07-25 11:15:03.548900] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:56.576 [2024-07-25 11:15:03.548959] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:56.576 [2024-07-25 11:15:03.548989] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042f80 00:32:56.576 [2024-07-25 11:15:03.549005] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:56.576 [2024-07-25 11:15:03.551566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:56.576 [2024-07-25 11:15:03.551598] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:56.576 [2024-07-25 11:15:03.551672] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:32:56.576 [2024-07-25 11:15:03.551731] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:56.576 [2024-07-25 11:15:03.551933] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:56.576 spare 00:32:56.576 11:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:56.576 11:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:56.576 11:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:56.576 11:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:56.576 11:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:56.576 11:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:32:56.576 11:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:56.576 11:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:56.576 11:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:56.576 11:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:56.576 11:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:56.576 11:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:56.576 [2024-07-25 11:15:03.652302] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:32:56.576 [2024-07-25 11:15:03.652334] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:56.576 [2024-07-25 11:15:03.652454] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c9390 00:32:56.576 [2024-07-25 11:15:03.652691] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:32:56.576 [2024-07-25 11:15:03.652706] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:32:56.576 [2024-07-25 11:15:03.652885] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:56.835 11:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:56.835 "name": "raid_bdev1", 00:32:56.835 "uuid": "97c8891e-2ed4-4a72-b86e-1942973354cc", 00:32:56.835 "strip_size_kb": 0, 00:32:56.835 "state": "online", 00:32:56.835 "raid_level": "raid1", 00:32:56.835 "superblock": true, 00:32:56.835 "num_base_bdevs": 2, 
00:32:56.835 "num_base_bdevs_discovered": 2, 00:32:56.835 "num_base_bdevs_operational": 2, 00:32:56.835 "base_bdevs_list": [ 00:32:56.835 { 00:32:56.835 "name": "spare", 00:32:56.835 "uuid": "c4c3948b-84e8-577b-abd8-9566b330234d", 00:32:56.835 "is_configured": true, 00:32:56.835 "data_offset": 256, 00:32:56.835 "data_size": 7936 00:32:56.835 }, 00:32:56.835 { 00:32:56.835 "name": "BaseBdev2", 00:32:56.835 "uuid": "bebeb9c1-de09-5fa8-ac37-ac78e3b0a65b", 00:32:56.835 "is_configured": true, 00:32:56.835 "data_offset": 256, 00:32:56.835 "data_size": 7936 00:32:56.835 } 00:32:56.835 ] 00:32:56.835 }' 00:32:56.835 11:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:56.835 11:15:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:57.403 11:15:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:57.403 11:15:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:57.403 11:15:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:57.403 11:15:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:57.403 11:15:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:57.403 11:15:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:57.403 11:15:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:57.662 11:15:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:57.662 "name": "raid_bdev1", 00:32:57.662 "uuid": "97c8891e-2ed4-4a72-b86e-1942973354cc", 
00:32:57.662 "strip_size_kb": 0, 00:32:57.662 "state": "online", 00:32:57.662 "raid_level": "raid1", 00:32:57.662 "superblock": true, 00:32:57.662 "num_base_bdevs": 2, 00:32:57.662 "num_base_bdevs_discovered": 2, 00:32:57.662 "num_base_bdevs_operational": 2, 00:32:57.662 "base_bdevs_list": [ 00:32:57.662 { 00:32:57.662 "name": "spare", 00:32:57.662 "uuid": "c4c3948b-84e8-577b-abd8-9566b330234d", 00:32:57.662 "is_configured": true, 00:32:57.662 "data_offset": 256, 00:32:57.662 "data_size": 7936 00:32:57.662 }, 00:32:57.662 { 00:32:57.662 "name": "BaseBdev2", 00:32:57.662 "uuid": "bebeb9c1-de09-5fa8-ac37-ac78e3b0a65b", 00:32:57.662 "is_configured": true, 00:32:57.662 "data_offset": 256, 00:32:57.662 "data_size": 7936 00:32:57.662 } 00:32:57.662 ] 00:32:57.662 }' 00:32:57.662 11:15:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:57.662 11:15:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:57.662 11:15:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:57.662 11:15:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:57.662 11:15:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:57.662 11:15:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:32:57.921 11:15:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # [[ spare == \s\p\a\r\e ]] 00:32:57.921 11:15:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:32:58.490 [2024-07-25 11:15:05.414329] 
bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:58.490 11:15:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:58.490 11:15:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:58.490 11:15:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:58.490 11:15:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:58.490 11:15:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:58.490 11:15:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:58.490 11:15:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:58.490 11:15:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:58.490 11:15:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:58.490 11:15:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:58.490 11:15:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:58.490 11:15:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:58.750 11:15:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:58.750 "name": "raid_bdev1", 00:32:58.750 "uuid": "97c8891e-2ed4-4a72-b86e-1942973354cc", 00:32:58.750 "strip_size_kb": 0, 00:32:58.750 "state": "online", 00:32:58.750 "raid_level": "raid1", 00:32:58.750 "superblock": true, 00:32:58.750 
"num_base_bdevs": 2, 00:32:58.750 "num_base_bdevs_discovered": 1, 00:32:58.750 "num_base_bdevs_operational": 1, 00:32:58.750 "base_bdevs_list": [ 00:32:58.750 { 00:32:58.750 "name": null, 00:32:58.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:58.750 "is_configured": false, 00:32:58.750 "data_offset": 256, 00:32:58.750 "data_size": 7936 00:32:58.750 }, 00:32:58.750 { 00:32:58.750 "name": "BaseBdev2", 00:32:58.750 "uuid": "bebeb9c1-de09-5fa8-ac37-ac78e3b0a65b", 00:32:58.750 "is_configured": true, 00:32:58.750 "data_offset": 256, 00:32:58.750 "data_size": 7936 00:32:58.750 } 00:32:58.750 ] 00:32:58.750 }' 00:32:58.750 11:15:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:58.750 11:15:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:59.318 11:15:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:59.578 [2024-07-25 11:15:06.453321] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:59.578 [2024-07-25 11:15:06.453520] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:32:59.578 [2024-07-25 11:15:06.453545] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:32:59.578 [2024-07-25 11:15:06.453582] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:59.578 [2024-07-25 11:15:06.475711] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c9460 00:32:59.578 [2024-07-25 11:15:06.478029] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:59.578 11:15:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@771 -- # sleep 1 00:33:00.515 11:15:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:00.515 11:15:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:00.515 11:15:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:00.515 11:15:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:00.515 11:15:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:00.515 11:15:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:00.515 11:15:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:00.775 11:15:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:00.775 "name": "raid_bdev1", 00:33:00.775 "uuid": "97c8891e-2ed4-4a72-b86e-1942973354cc", 00:33:00.775 "strip_size_kb": 0, 00:33:00.775 "state": "online", 00:33:00.775 "raid_level": "raid1", 00:33:00.775 "superblock": true, 00:33:00.775 "num_base_bdevs": 2, 00:33:00.775 "num_base_bdevs_discovered": 2, 00:33:00.775 "num_base_bdevs_operational": 2, 00:33:00.775 "process": { 00:33:00.775 "type": "rebuild", 00:33:00.775 
"target": "spare", 00:33:00.775 "progress": { 00:33:00.775 "blocks": 3072, 00:33:00.775 "percent": 38 00:33:00.775 } 00:33:00.775 }, 00:33:00.775 "base_bdevs_list": [ 00:33:00.775 { 00:33:00.775 "name": "spare", 00:33:00.775 "uuid": "c4c3948b-84e8-577b-abd8-9566b330234d", 00:33:00.775 "is_configured": true, 00:33:00.775 "data_offset": 256, 00:33:00.775 "data_size": 7936 00:33:00.775 }, 00:33:00.775 { 00:33:00.775 "name": "BaseBdev2", 00:33:00.775 "uuid": "bebeb9c1-de09-5fa8-ac37-ac78e3b0a65b", 00:33:00.775 "is_configured": true, 00:33:00.775 "data_offset": 256, 00:33:00.775 "data_size": 7936 00:33:00.775 } 00:33:00.775 ] 00:33:00.775 }' 00:33:00.775 11:15:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:00.775 11:15:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:00.775 11:15:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:00.775 11:15:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:00.775 11:15:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:33:01.034 [2024-07-25 11:15:08.031670] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:01.034 [2024-07-25 11:15:08.091218] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:01.034 [2024-07-25 11:15:08.091280] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:01.034 [2024-07-25 11:15:08.091302] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:01.034 [2024-07-25 11:15:08.091316] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 
00:33:01.034 11:15:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:01.034 11:15:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:01.034 11:15:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:01.034 11:15:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:01.034 11:15:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:01.034 11:15:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:01.034 11:15:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:01.034 11:15:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:01.034 11:15:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:01.034 11:15:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:01.034 11:15:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:01.035 11:15:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:01.294 11:15:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:01.294 "name": "raid_bdev1", 00:33:01.294 "uuid": "97c8891e-2ed4-4a72-b86e-1942973354cc", 00:33:01.294 "strip_size_kb": 0, 00:33:01.294 "state": "online", 00:33:01.294 "raid_level": "raid1", 00:33:01.294 "superblock": true, 00:33:01.294 "num_base_bdevs": 2, 00:33:01.294 "num_base_bdevs_discovered": 1, 
00:33:01.294 "num_base_bdevs_operational": 1, 00:33:01.294 "base_bdevs_list": [ 00:33:01.294 { 00:33:01.294 "name": null, 00:33:01.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:01.294 "is_configured": false, 00:33:01.294 "data_offset": 256, 00:33:01.294 "data_size": 7936 00:33:01.294 }, 00:33:01.294 { 00:33:01.294 "name": "BaseBdev2", 00:33:01.294 "uuid": "bebeb9c1-de09-5fa8-ac37-ac78e3b0a65b", 00:33:01.294 "is_configured": true, 00:33:01.294 "data_offset": 256, 00:33:01.294 "data_size": 7936 00:33:01.294 } 00:33:01.294 ] 00:33:01.294 }' 00:33:01.294 11:15:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:01.294 11:15:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:01.862 11:15:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:33:02.121 [2024-07-25 11:15:09.153741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:02.121 [2024-07-25 11:15:09.153812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:02.121 [2024-07-25 11:15:09.153844] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043880 00:33:02.121 [2024-07-25 11:15:09.153863] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:02.121 [2024-07-25 11:15:09.154195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:02.121 [2024-07-25 11:15:09.154221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:02.121 [2024-07-25 11:15:09.154292] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:33:02.121 [2024-07-25 11:15:09.154311] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) 
smaller than existing raid bdev raid_bdev1 (5) 00:33:02.121 [2024-07-25 11:15:09.154333] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:33:02.121 [2024-07-25 11:15:09.154367] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:02.121 [2024-07-25 11:15:09.178143] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c9530 00:33:02.121 spare 00:33:02.121 [2024-07-25 11:15:09.180447] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:02.121 11:15:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # sleep 1 00:33:03.499 11:15:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:03.499 11:15:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:03.499 11:15:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:03.499 11:15:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:03.499 11:15:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:03.499 11:15:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:03.499 11:15:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:03.499 11:15:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:03.499 "name": "raid_bdev1", 00:33:03.499 "uuid": "97c8891e-2ed4-4a72-b86e-1942973354cc", 00:33:03.499 "strip_size_kb": 0, 00:33:03.499 "state": "online", 00:33:03.499 "raid_level": "raid1", 00:33:03.499 
"superblock": true, 00:33:03.499 "num_base_bdevs": 2, 00:33:03.499 "num_base_bdevs_discovered": 2, 00:33:03.499 "num_base_bdevs_operational": 2, 00:33:03.499 "process": { 00:33:03.499 "type": "rebuild", 00:33:03.499 "target": "spare", 00:33:03.499 "progress": { 00:33:03.499 "blocks": 3072, 00:33:03.499 "percent": 38 00:33:03.499 } 00:33:03.499 }, 00:33:03.500 "base_bdevs_list": [ 00:33:03.500 { 00:33:03.500 "name": "spare", 00:33:03.500 "uuid": "c4c3948b-84e8-577b-abd8-9566b330234d", 00:33:03.500 "is_configured": true, 00:33:03.500 "data_offset": 256, 00:33:03.500 "data_size": 7936 00:33:03.500 }, 00:33:03.500 { 00:33:03.500 "name": "BaseBdev2", 00:33:03.500 "uuid": "bebeb9c1-de09-5fa8-ac37-ac78e3b0a65b", 00:33:03.500 "is_configured": true, 00:33:03.500 "data_offset": 256, 00:33:03.500 "data_size": 7936 00:33:03.500 } 00:33:03.500 ] 00:33:03.500 }' 00:33:03.500 11:15:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:03.500 11:15:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:03.500 11:15:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:03.500 11:15:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:03.500 11:15:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@782 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:33:03.759 [2024-07-25 11:15:10.725895] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:03.759 [2024-07-25 11:15:10.793511] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:03.759 [2024-07-25 11:15:10.793574] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:03.759 [2024-07-25 11:15:10.793598] 
bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:03.759 [2024-07-25 11:15:10.793610] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:03.759 11:15:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:03.759 11:15:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:03.759 11:15:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:03.759 11:15:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:03.759 11:15:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:03.759 11:15:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:03.759 11:15:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:03.759 11:15:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:03.759 11:15:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:03.759 11:15:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:03.759 11:15:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:03.759 11:15:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:04.019 11:15:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:04.019 "name": "raid_bdev1", 00:33:04.019 "uuid": "97c8891e-2ed4-4a72-b86e-1942973354cc", 
00:33:04.019 "strip_size_kb": 0, 00:33:04.019 "state": "online", 00:33:04.019 "raid_level": "raid1", 00:33:04.019 "superblock": true, 00:33:04.019 "num_base_bdevs": 2, 00:33:04.019 "num_base_bdevs_discovered": 1, 00:33:04.019 "num_base_bdevs_operational": 1, 00:33:04.019 "base_bdevs_list": [ 00:33:04.019 { 00:33:04.019 "name": null, 00:33:04.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:04.019 "is_configured": false, 00:33:04.019 "data_offset": 256, 00:33:04.019 "data_size": 7936 00:33:04.019 }, 00:33:04.019 { 00:33:04.019 "name": "BaseBdev2", 00:33:04.019 "uuid": "bebeb9c1-de09-5fa8-ac37-ac78e3b0a65b", 00:33:04.019 "is_configured": true, 00:33:04.019 "data_offset": 256, 00:33:04.019 "data_size": 7936 00:33:04.019 } 00:33:04.019 ] 00:33:04.019 }' 00:33:04.019 11:15:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:04.019 11:15:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:04.587 11:15:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:04.587 11:15:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:04.587 11:15:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:04.587 11:15:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:04.587 11:15:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:04.587 11:15:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:04.587 11:15:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:04.846 11:15:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:04.846 "name": "raid_bdev1", 00:33:04.846 "uuid": "97c8891e-2ed4-4a72-b86e-1942973354cc", 00:33:04.846 "strip_size_kb": 0, 00:33:04.846 "state": "online", 00:33:04.846 "raid_level": "raid1", 00:33:04.846 "superblock": true, 00:33:04.846 "num_base_bdevs": 2, 00:33:04.846 "num_base_bdevs_discovered": 1, 00:33:04.846 "num_base_bdevs_operational": 1, 00:33:04.846 "base_bdevs_list": [ 00:33:04.846 { 00:33:04.846 "name": null, 00:33:04.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:04.846 "is_configured": false, 00:33:04.846 "data_offset": 256, 00:33:04.846 "data_size": 7936 00:33:04.846 }, 00:33:04.846 { 00:33:04.846 "name": "BaseBdev2", 00:33:04.846 "uuid": "bebeb9c1-de09-5fa8-ac37-ac78e3b0a65b", 00:33:04.846 "is_configured": true, 00:33:04.846 "data_offset": 256, 00:33:04.846 "data_size": 7936 00:33:04.846 } 00:33:04.846 ] 00:33:04.846 }' 00:33:04.846 11:15:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:04.846 11:15:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:04.846 11:15:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:04.846 11:15:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:04.846 11:15:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@787 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:33:05.105 11:15:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@788 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:05.363 [2024-07-25 11:15:12.378329] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev1_malloc 00:33:05.363 [2024-07-25 11:15:12.378392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:05.363 [2024-07-25 11:15:12.378423] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043e80 00:33:05.363 [2024-07-25 11:15:12.378439] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:05.363 [2024-07-25 11:15:12.378753] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:05.363 [2024-07-25 11:15:12.378773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:05.363 [2024-07-25 11:15:12.378839] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:33:05.363 [2024-07-25 11:15:12.378857] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:33:05.363 [2024-07-25 11:15:12.378877] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:33:05.363 BaseBdev1 00:33:05.363 11:15:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@789 -- # sleep 1 00:33:06.300 11:15:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:06.300 11:15:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:06.300 11:15:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:06.300 11:15:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:06.300 11:15:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:06.300 11:15:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 
00:33:06.300 11:15:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:06.300 11:15:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:06.300 11:15:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:06.300 11:15:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:06.300 11:15:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:06.300 11:15:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:06.558 11:15:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:06.558 "name": "raid_bdev1", 00:33:06.558 "uuid": "97c8891e-2ed4-4a72-b86e-1942973354cc", 00:33:06.558 "strip_size_kb": 0, 00:33:06.558 "state": "online", 00:33:06.558 "raid_level": "raid1", 00:33:06.558 "superblock": true, 00:33:06.558 "num_base_bdevs": 2, 00:33:06.558 "num_base_bdevs_discovered": 1, 00:33:06.558 "num_base_bdevs_operational": 1, 00:33:06.558 "base_bdevs_list": [ 00:33:06.558 { 00:33:06.558 "name": null, 00:33:06.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:06.558 "is_configured": false, 00:33:06.558 "data_offset": 256, 00:33:06.558 "data_size": 7936 00:33:06.558 }, 00:33:06.558 { 00:33:06.558 "name": "BaseBdev2", 00:33:06.558 "uuid": "bebeb9c1-de09-5fa8-ac37-ac78e3b0a65b", 00:33:06.558 "is_configured": true, 00:33:06.558 "data_offset": 256, 00:33:06.558 "data_size": 7936 00:33:06.558 } 00:33:06.558 ] 00:33:06.558 }' 00:33:06.558 11:15:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:06.558 11:15:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- 
# set +x 00:33:07.176 11:15:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:07.176 11:15:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:07.176 11:15:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:07.176 11:15:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:07.176 11:15:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:07.176 11:15:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:07.176 11:15:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:07.445 11:15:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:07.445 "name": "raid_bdev1", 00:33:07.445 "uuid": "97c8891e-2ed4-4a72-b86e-1942973354cc", 00:33:07.445 "strip_size_kb": 0, 00:33:07.445 "state": "online", 00:33:07.445 "raid_level": "raid1", 00:33:07.445 "superblock": true, 00:33:07.445 "num_base_bdevs": 2, 00:33:07.445 "num_base_bdevs_discovered": 1, 00:33:07.445 "num_base_bdevs_operational": 1, 00:33:07.445 "base_bdevs_list": [ 00:33:07.445 { 00:33:07.445 "name": null, 00:33:07.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:07.445 "is_configured": false, 00:33:07.445 "data_offset": 256, 00:33:07.445 "data_size": 7936 00:33:07.445 }, 00:33:07.445 { 00:33:07.445 "name": "BaseBdev2", 00:33:07.445 "uuid": "bebeb9c1-de09-5fa8-ac37-ac78e3b0a65b", 00:33:07.445 "is_configured": true, 00:33:07.445 "data_offset": 256, 00:33:07.445 "data_size": 7936 00:33:07.445 } 00:33:07.445 ] 00:33:07.445 }' 00:33:07.445 11:15:14 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:07.445 11:15:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:07.445 11:15:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:07.445 11:15:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:07.445 11:15:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@792 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:07.445 11:15:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:33:07.445 11:15:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:07.445 11:15:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:33:07.445 11:15:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:07.445 11:15:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:33:07.445 11:15:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:07.445 11:15:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:33:07.445 11:15:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
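The verification steps above drive `bdev_raid_get_bdevs` through rpc.py and post-process the JSON with jq: `select(.name == "raid_bdev1")` picks out the raid bdev, and `.process.type // "none"` / `.process.target // "none"` fall back to "none" when no rebuild process is attached. A small self-contained Python sketch of that same selection and fallback logic (illustrative only, not SPDK code; the sample record is copied from the output captured above):

```python
import json

# Sample record copied from the bdev_raid_get_bdevs output captured above.
RAW = json.dumps([{
    "name": "raid_bdev1",
    "uuid": "97c8891e-2ed4-4a72-b86e-1942973354cc",
    "strip_size_kb": 0,
    "state": "online",
    "raid_level": "raid1",
    "superblock": True,
    "num_base_bdevs": 2,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 1,
    "base_bdevs_list": [
        {"name": None, "uuid": "00000000-0000-0000-0000-000000000000",
         "is_configured": False, "data_offset": 256, "data_size": 7936},
        {"name": "BaseBdev2", "uuid": "bebeb9c1-de09-5fa8-ac37-ac78e3b0a65b",
         "is_configured": True, "data_offset": 256, "data_size": 7936},
    ],
}])


def select_bdev(raw: str, name: str) -> dict:
    """Equivalent of: jq -r '.[] | select(.name == "<name>")'"""
    return next(b for b in json.loads(raw) if b["name"] == name)


def process_field(info: dict, field: str) -> str:
    """Equivalent of: jq -r '.process.<field> // "none"'"""
    return (info.get("process") or {}).get(field, "none")


info = select_bdev(RAW, "raid_bdev1")
# One base bdev is lost, so the array is online with only one bdev discovered.
assert info["state"] == "online" and info["num_base_bdevs_discovered"] == 1
# No rebuild process is attached, so both fields fall back to "none".
assert process_field(info, "type") == "none"
assert process_field(info, "target") == "none"
```

This mirrors why the trace shows `[[ none == \n\o\n\e ]]` succeeding for both the process type and target checks.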
00:33:07.445 11:15:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:33:07.445 11:15:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:33:07.445 11:15:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:07.703 [2024-07-25 11:15:14.740764] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:07.703 [2024-07-25 11:15:14.740926] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:33:07.703 [2024-07-25 11:15:14.740946] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:33:07.703 request: 00:33:07.703 { 00:33:07.703 "base_bdev": "BaseBdev1", 00:33:07.703 "raid_bdev": "raid_bdev1", 00:33:07.703 "method": "bdev_raid_add_base_bdev", 00:33:07.703 "req_id": 1 00:33:07.703 } 00:33:07.703 Got JSON-RPC error response 00:33:07.703 response: 00:33:07.703 { 00:33:07.703 "code": -22, 00:33:07.703 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:33:07.703 } 00:33:07.703 11:15:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:33:07.703 11:15:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:07.703 11:15:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:07.703 11:15:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:07.703 11:15:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@793 -- # 
sleep 1 00:33:09.073 11:15:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:09.073 11:15:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:09.073 11:15:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:09.073 11:15:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:09.073 11:15:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:09.073 11:15:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:09.073 11:15:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:09.073 11:15:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:09.073 11:15:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:09.073 11:15:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:09.073 11:15:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:09.073 11:15:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:09.073 11:15:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:09.073 "name": "raid_bdev1", 00:33:09.073 "uuid": "97c8891e-2ed4-4a72-b86e-1942973354cc", 00:33:09.073 "strip_size_kb": 0, 00:33:09.073 "state": "online", 00:33:09.073 "raid_level": "raid1", 00:33:09.073 "superblock": true, 00:33:09.073 "num_base_bdevs": 2, 00:33:09.073 "num_base_bdevs_discovered": 1, 
00:33:09.073 "num_base_bdevs_operational": 1, 00:33:09.073 "base_bdevs_list": [ 00:33:09.073 { 00:33:09.073 "name": null, 00:33:09.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:09.073 "is_configured": false, 00:33:09.073 "data_offset": 256, 00:33:09.073 "data_size": 7936 00:33:09.073 }, 00:33:09.073 { 00:33:09.073 "name": "BaseBdev2", 00:33:09.073 "uuid": "bebeb9c1-de09-5fa8-ac37-ac78e3b0a65b", 00:33:09.073 "is_configured": true, 00:33:09.073 "data_offset": 256, 00:33:09.073 "data_size": 7936 00:33:09.073 } 00:33:09.073 ] 00:33:09.073 }' 00:33:09.073 11:15:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:09.073 11:15:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:09.638 11:15:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:09.638 11:15:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:09.638 11:15:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:09.638 11:15:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:09.638 11:15:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:09.639 11:15:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:09.639 11:15:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:09.897 11:15:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:09.897 "name": "raid_bdev1", 00:33:09.897 "uuid": "97c8891e-2ed4-4a72-b86e-1942973354cc", 00:33:09.897 "strip_size_kb": 0, 00:33:09.897 
"state": "online", 00:33:09.897 "raid_level": "raid1", 00:33:09.897 "superblock": true, 00:33:09.897 "num_base_bdevs": 2, 00:33:09.897 "num_base_bdevs_discovered": 1, 00:33:09.897 "num_base_bdevs_operational": 1, 00:33:09.897 "base_bdevs_list": [ 00:33:09.897 { 00:33:09.897 "name": null, 00:33:09.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:09.897 "is_configured": false, 00:33:09.897 "data_offset": 256, 00:33:09.897 "data_size": 7936 00:33:09.897 }, 00:33:09.897 { 00:33:09.897 "name": "BaseBdev2", 00:33:09.897 "uuid": "bebeb9c1-de09-5fa8-ac37-ac78e3b0a65b", 00:33:09.897 "is_configured": true, 00:33:09.897 "data_offset": 256, 00:33:09.897 "data_size": 7936 00:33:09.897 } 00:33:09.897 ] 00:33:09.897 }' 00:33:09.897 11:15:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:09.897 11:15:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:09.897 11:15:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:09.897 11:15:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:09.897 11:15:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@798 -- # killprocess 3753240 00:33:09.897 11:15:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 3753240 ']' 00:33:09.897 11:15:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 3753240 00:33:09.897 11:15:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:33:09.897 11:15:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:09.897 11:15:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3753240 00:33:09.897 11:15:16 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:09.897 11:15:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:09.897 11:15:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3753240' 00:33:09.897 killing process with pid 3753240 00:33:09.897 11:15:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 3753240 00:33:09.897 Received shutdown signal, test time was about 60.000000 seconds 00:33:09.897 00:33:09.897 Latency(us) 00:33:09.897 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:09.897 =================================================================================================================== 00:33:09.897 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:09.897 [2024-07-25 11:15:16.950756] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:09.897 11:15:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 3753240 00:33:09.897 [2024-07-25 11:15:16.950893] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:09.897 [2024-07-25 11:15:16.950955] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:09.897 [2024-07-25 11:15:16.950971] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:33:10.464 [2024-07-25 11:15:17.398273] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:12.367 11:15:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@800 -- # return 0 00:33:12.367 00:33:12.367 real 0m32.949s 00:33:12.367 user 0m49.458s 00:33:12.367 sys 0m5.070s 00:33:12.367 11:15:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:33:12.367 11:15:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:12.367 ************************************ 00:33:12.367 END TEST raid_rebuild_test_sb_md_separate 00:33:12.367 ************************************ 00:33:12.367 11:15:19 bdev_raid -- bdev/bdev_raid.sh@991 -- # base_malloc_params='-m 32 -i' 00:33:12.367 11:15:19 bdev_raid -- bdev/bdev_raid.sh@992 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:33:12.367 11:15:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:33:12.367 11:15:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:12.367 11:15:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:12.367 ************************************ 00:33:12.367 START TEST raid_state_function_test_sb_md_interleaved 00:33:12.367 ************************************ 00:33:12.367 11:15:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:33:12.367 11:15:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:33:12.367 11:15:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:33:12.367 11:15:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:33:12.367 11:15:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:33:12.367 11:15:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:33:12.367 11:15:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:33:12.367 11:15:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:33:12.367 11:15:19 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:33:12.367 11:15:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:33:12.367 11:15:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:33:12.367 11:15:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:33:12.367 11:15:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:33:12.367 11:15:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:33:12.367 11:15:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:33:12.367 11:15:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:33:12.367 11:15:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local strip_size 00:33:12.367 11:15:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:33:12.367 11:15:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:33:12.367 11:15:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:33:12.367 11:15:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:33:12.367 11:15:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:33:12.367 11:15:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:33:12.367 11:15:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # 
raid_pid=3759756 00:33:12.367 11:15:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 3759756' 00:33:12.367 Process raid pid: 3759756 00:33:12.367 11:15:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:33:12.367 11:15:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # waitforlisten 3759756 /var/tmp/spdk-raid.sock 00:33:12.367 11:15:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 3759756 ']' 00:33:12.367 11:15:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:33:12.367 11:15:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:12.367 11:15:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:33:12.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:33:12.367 11:15:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:12.367 11:15:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:12.367 [2024-07-25 11:15:19.270750] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
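The state-function test set up above fixes `raid_level=raid1` (so `strip_size` stays 0 and no strip-size flag is emitted) and `superblock=true` (so `superblock_create_arg` becomes `-s`), which the harness later combines into a `bdev_raid_create` invocation. A hedged Python sketch of that argument assembly (illustrative only; the flags are taken from the shell trace in this log, and the expected list mirrors the `bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid` call exercised by this test):

```python
def raid_create_args(raid_level, base_bdevs, superblock, name="Existed_Raid"):
    """Assemble rpc.py bdev_raid_create arguments the way the shell test does:
    superblock=true -> '-s'; raid1 sets strip_size=0 and emits no strip flag."""
    args = []
    if superblock:
        args.append("-s")                     # superblock_create_arg=-s
    args += ["-r", raid_level]
    # For raid1 the test keeps strip_size=0 and adds no strip-size argument;
    # other raid levels would insert one here (not exercised in this trace).
    args += ["-b", " ".join(base_bdevs), "-n", name]
    return args


# Matches the invocation seen in this test:
#   bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
assert raid_create_args("raid1", ["BaseBdev1", "BaseBdev2"], True) == \
    ["-s", "-r", "raid1", "-b", "BaseBdev1 BaseBdev2", "-n", "Existed_Raid"]
```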
00:33:12.367 [2024-07-25 11:15:19.270867] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:12.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.367 EAL: Requested device 0000:3d:01.0 cannot be used 00:33:12.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.367 EAL: Requested device 0000:3d:01.1 cannot be used 00:33:12.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.367 EAL: Requested device 0000:3d:01.2 cannot be used 00:33:12.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.367 EAL: Requested device 0000:3d:01.3 cannot be used 00:33:12.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.367 EAL: Requested device 0000:3d:01.4 cannot be used 00:33:12.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.367 EAL: Requested device 0000:3d:01.5 cannot be used 00:33:12.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.367 EAL: Requested device 0000:3d:01.6 cannot be used 00:33:12.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.367 EAL: Requested device 0000:3d:01.7 cannot be used 00:33:12.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.367 EAL: Requested device 0000:3d:02.0 cannot be used 00:33:12.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.367 EAL: Requested device 0000:3d:02.1 cannot be used 00:33:12.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.367 EAL: Requested device 0000:3d:02.2 cannot be used 00:33:12.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.367 EAL: Requested device 0000:3d:02.3 cannot be used 00:33:12.367 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.367 EAL: Requested device 0000:3d:02.4 cannot be used 00:33:12.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.367 EAL: Requested device 0000:3d:02.5 cannot be used 00:33:12.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.367 EAL: Requested device 0000:3d:02.6 cannot be used 00:33:12.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.367 EAL: Requested device 0000:3d:02.7 cannot be used 00:33:12.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.367 EAL: Requested device 0000:3f:01.0 cannot be used 00:33:12.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.367 EAL: Requested device 0000:3f:01.1 cannot be used 00:33:12.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.367 EAL: Requested device 0000:3f:01.2 cannot be used 00:33:12.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.368 EAL: Requested device 0000:3f:01.3 cannot be used 00:33:12.368 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.368 EAL: Requested device 0000:3f:01.4 cannot be used 00:33:12.368 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.368 EAL: Requested device 0000:3f:01.5 cannot be used 00:33:12.368 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.368 EAL: Requested device 0000:3f:01.6 cannot be used 00:33:12.368 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.368 EAL: Requested device 0000:3f:01.7 cannot be used 00:33:12.368 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.368 EAL: Requested device 0000:3f:02.0 cannot be used 00:33:12.368 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.368 EAL: Requested device 0000:3f:02.1 cannot be used 00:33:12.368 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.368 EAL: Requested device 0000:3f:02.2 cannot be used 00:33:12.368 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.368 EAL: Requested device 0000:3f:02.3 cannot be used 00:33:12.368 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.368 EAL: Requested device 0000:3f:02.4 cannot be used 00:33:12.368 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.368 EAL: Requested device 0000:3f:02.5 cannot be used 00:33:12.368 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.368 EAL: Requested device 0000:3f:02.6 cannot be used 00:33:12.368 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:12.368 EAL: Requested device 0000:3f:02.7 cannot be used 00:33:12.626 [2024-07-25 11:15:19.494909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:12.884 [2024-07-25 11:15:19.768810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:13.142 [2024-07-25 11:15:20.108213] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:13.142 [2024-07-25 11:15:20.108251] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:13.400 11:15:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:13.400 11:15:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:33:13.400 11:15:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:33:13.400 [2024-07-25 11:15:20.510203] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:13.400 [2024-07-25 11:15:20.510257] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:13.400 [2024-07-25 11:15:20.510272] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:13.400 [2024-07-25 11:15:20.510289] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:13.658 11:15:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:33:13.658 11:15:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:13.658 11:15:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:13.659 11:15:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:13.659 11:15:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:13.659 11:15:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:13.659 11:15:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:13.659 11:15:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:13.659 11:15:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:13.659 11:15:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:13.659 11:15:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:13.659 11:15:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:33:13.659 11:15:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:13.659 "name": "Existed_Raid", 00:33:13.659 "uuid": "bd354776-408b-4f29-bb7a-4b0a98ede99a", 00:33:13.659 "strip_size_kb": 0, 00:33:13.659 "state": "configuring", 00:33:13.659 "raid_level": "raid1", 00:33:13.659 "superblock": true, 00:33:13.659 "num_base_bdevs": 2, 00:33:13.659 "num_base_bdevs_discovered": 0, 00:33:13.659 "num_base_bdevs_operational": 2, 00:33:13.659 "base_bdevs_list": [ 00:33:13.659 { 00:33:13.659 "name": "BaseBdev1", 00:33:13.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:13.659 "is_configured": false, 00:33:13.659 "data_offset": 0, 00:33:13.659 "data_size": 0 00:33:13.659 }, 00:33:13.659 { 00:33:13.659 "name": "BaseBdev2", 00:33:13.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:13.659 "is_configured": false, 00:33:13.659 "data_offset": 0, 00:33:13.659 "data_size": 0 00:33:13.659 } 00:33:13.659 ] 00:33:13.659 }' 00:33:13.659 11:15:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:13.659 11:15:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:14.223 11:15:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:33:14.480 [2024-07-25 11:15:21.524763] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:14.480 [2024-07-25 11:15:21.524805] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:33:14.480 11:15:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 
-b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:33:14.737 [2024-07-25 11:15:21.753420] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:14.737 [2024-07-25 11:15:21.753463] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:14.737 [2024-07-25 11:15:21.753477] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:14.737 [2024-07-25 11:15:21.753494] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:14.737 11:15:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:33:14.995 [2024-07-25 11:15:22.039568] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:14.995 BaseBdev1 00:33:14.995 11:15:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:33:14.995 11:15:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:33:14.995 11:15:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:14.995 11:15:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:33:14.995 11:15:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:14.995 11:15:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:14.995 11:15:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:15.252 11:15:22 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:15.510 [ 00:33:15.510 { 00:33:15.510 "name": "BaseBdev1", 00:33:15.510 "aliases": [ 00:33:15.510 "4a312dbe-63c7-42c5-930f-b81d02224215" 00:33:15.510 ], 00:33:15.510 "product_name": "Malloc disk", 00:33:15.510 "block_size": 4128, 00:33:15.510 "num_blocks": 8192, 00:33:15.510 "uuid": "4a312dbe-63c7-42c5-930f-b81d02224215", 00:33:15.510 "md_size": 32, 00:33:15.510 "md_interleave": true, 00:33:15.510 "dif_type": 0, 00:33:15.510 "assigned_rate_limits": { 00:33:15.510 "rw_ios_per_sec": 0, 00:33:15.511 "rw_mbytes_per_sec": 0, 00:33:15.511 "r_mbytes_per_sec": 0, 00:33:15.511 "w_mbytes_per_sec": 0 00:33:15.511 }, 00:33:15.511 "claimed": true, 00:33:15.511 "claim_type": "exclusive_write", 00:33:15.511 "zoned": false, 00:33:15.511 "supported_io_types": { 00:33:15.511 "read": true, 00:33:15.511 "write": true, 00:33:15.511 "unmap": true, 00:33:15.511 "flush": true, 00:33:15.511 "reset": true, 00:33:15.511 "nvme_admin": false, 00:33:15.511 "nvme_io": false, 00:33:15.511 "nvme_io_md": false, 00:33:15.511 "write_zeroes": true, 00:33:15.511 "zcopy": true, 00:33:15.511 "get_zone_info": false, 00:33:15.511 "zone_management": false, 00:33:15.511 "zone_append": false, 00:33:15.511 "compare": false, 00:33:15.511 "compare_and_write": false, 00:33:15.511 "abort": true, 00:33:15.511 "seek_hole": false, 00:33:15.511 "seek_data": false, 00:33:15.511 "copy": true, 00:33:15.511 "nvme_iov_md": false 00:33:15.511 }, 00:33:15.511 "memory_domains": [ 00:33:15.511 { 00:33:15.511 "dma_device_id": "system", 00:33:15.511 "dma_device_type": 1 00:33:15.511 }, 00:33:15.511 { 00:33:15.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:15.511 "dma_device_type": 2 00:33:15.511 } 00:33:15.511 ], 00:33:15.511 "driver_specific": {} 00:33:15.511 } 00:33:15.511 ] 00:33:15.511 11:15:22 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:33:15.511 11:15:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:33:15.511 11:15:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:15.511 11:15:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:15.511 11:15:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:15.511 11:15:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:15.511 11:15:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:15.511 11:15:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:15.511 11:15:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:15.511 11:15:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:15.511 11:15:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:15.511 11:15:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:15.511 11:15:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:15.769 11:15:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:15.769 "name": "Existed_Raid", 00:33:15.769 "uuid": 
"9e6fcd9a-df77-49d5-a6e4-e6e584706f6d", 00:33:15.769 "strip_size_kb": 0, 00:33:15.769 "state": "configuring", 00:33:15.769 "raid_level": "raid1", 00:33:15.769 "superblock": true, 00:33:15.769 "num_base_bdevs": 2, 00:33:15.769 "num_base_bdevs_discovered": 1, 00:33:15.769 "num_base_bdevs_operational": 2, 00:33:15.769 "base_bdevs_list": [ 00:33:15.769 { 00:33:15.769 "name": "BaseBdev1", 00:33:15.769 "uuid": "4a312dbe-63c7-42c5-930f-b81d02224215", 00:33:15.769 "is_configured": true, 00:33:15.769 "data_offset": 256, 00:33:15.769 "data_size": 7936 00:33:15.769 }, 00:33:15.769 { 00:33:15.769 "name": "BaseBdev2", 00:33:15.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:15.769 "is_configured": false, 00:33:15.769 "data_offset": 0, 00:33:15.769 "data_size": 0 00:33:15.769 } 00:33:15.769 ] 00:33:15.769 }' 00:33:15.769 11:15:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:15.769 11:15:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:16.333 11:15:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:33:16.591 [2024-07-25 11:15:23.515716] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:16.591 [2024-07-25 11:15:23.515770] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:33:16.591 11:15:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:33:16.849 [2024-07-25 11:15:23.744402] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:16.849 [2024-07-25 11:15:23.746702] 
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:16.849 [2024-07-25 11:15:23.746746] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:16.849 11:15:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:33:16.849 11:15:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:33:16.849 11:15:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:33:16.849 11:15:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:16.849 11:15:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:16.849 11:15:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:16.849 11:15:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:16.849 11:15:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:16.849 11:15:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:16.849 11:15:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:16.849 11:15:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:16.849 11:15:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:16.849 11:15:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:33:16.849 11:15:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:17.107 11:15:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:17.107 "name": "Existed_Raid", 00:33:17.107 "uuid": "55b67789-aff9-4772-9253-3e737d6987db", 00:33:17.107 "strip_size_kb": 0, 00:33:17.107 "state": "configuring", 00:33:17.107 "raid_level": "raid1", 00:33:17.107 "superblock": true, 00:33:17.107 "num_base_bdevs": 2, 00:33:17.107 "num_base_bdevs_discovered": 1, 00:33:17.107 "num_base_bdevs_operational": 2, 00:33:17.107 "base_bdevs_list": [ 00:33:17.107 { 00:33:17.107 "name": "BaseBdev1", 00:33:17.107 "uuid": "4a312dbe-63c7-42c5-930f-b81d02224215", 00:33:17.107 "is_configured": true, 00:33:17.107 "data_offset": 256, 00:33:17.107 "data_size": 7936 00:33:17.107 }, 00:33:17.107 { 00:33:17.107 "name": "BaseBdev2", 00:33:17.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:17.107 "is_configured": false, 00:33:17.107 "data_offset": 0, 00:33:17.107 "data_size": 0 00:33:17.107 } 00:33:17.107 ] 00:33:17.107 }' 00:33:17.107 11:15:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:17.107 11:15:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:17.672 11:15:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:33:17.930 [2024-07-25 11:15:24.854230] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:17.930 [2024-07-25 11:15:24.854446] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:33:17.930 [2024-07-25 11:15:24.854465] bdev_raid.c:1722:raid_bdev_configure_cont: 
*DEBUG*: blockcnt 7936, blocklen 4128 00:33:17.930 [2024-07-25 11:15:24.854566] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:33:17.930 [2024-07-25 11:15:24.854695] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:33:17.930 [2024-07-25 11:15:24.854713] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:33:17.930 [2024-07-25 11:15:24.854799] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:17.930 BaseBdev2 00:33:17.930 11:15:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:33:17.930 11:15:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:33:17.930 11:15:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:33:17.930 11:15:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:33:17.930 11:15:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:33:17.930 11:15:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:33:17.930 11:15:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:18.188 11:15:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:18.445 [ 00:33:18.445 { 00:33:18.445 "name": "BaseBdev2", 00:33:18.445 "aliases": [ 00:33:18.445 "823fe150-2b3d-4111-97c3-6e28a7881fca" 00:33:18.445 ], 00:33:18.445 
"product_name": "Malloc disk", 00:33:18.445 "block_size": 4128, 00:33:18.445 "num_blocks": 8192, 00:33:18.445 "uuid": "823fe150-2b3d-4111-97c3-6e28a7881fca", 00:33:18.445 "md_size": 32, 00:33:18.445 "md_interleave": true, 00:33:18.445 "dif_type": 0, 00:33:18.445 "assigned_rate_limits": { 00:33:18.445 "rw_ios_per_sec": 0, 00:33:18.445 "rw_mbytes_per_sec": 0, 00:33:18.445 "r_mbytes_per_sec": 0, 00:33:18.445 "w_mbytes_per_sec": 0 00:33:18.445 }, 00:33:18.445 "claimed": true, 00:33:18.445 "claim_type": "exclusive_write", 00:33:18.445 "zoned": false, 00:33:18.445 "supported_io_types": { 00:33:18.445 "read": true, 00:33:18.445 "write": true, 00:33:18.445 "unmap": true, 00:33:18.445 "flush": true, 00:33:18.445 "reset": true, 00:33:18.445 "nvme_admin": false, 00:33:18.445 "nvme_io": false, 00:33:18.445 "nvme_io_md": false, 00:33:18.445 "write_zeroes": true, 00:33:18.445 "zcopy": true, 00:33:18.445 "get_zone_info": false, 00:33:18.445 "zone_management": false, 00:33:18.445 "zone_append": false, 00:33:18.445 "compare": false, 00:33:18.445 "compare_and_write": false, 00:33:18.445 "abort": true, 00:33:18.445 "seek_hole": false, 00:33:18.445 "seek_data": false, 00:33:18.445 "copy": true, 00:33:18.445 "nvme_iov_md": false 00:33:18.445 }, 00:33:18.445 "memory_domains": [ 00:33:18.445 { 00:33:18.445 "dma_device_id": "system", 00:33:18.445 "dma_device_type": 1 00:33:18.445 }, 00:33:18.445 { 00:33:18.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:18.445 "dma_device_type": 2 00:33:18.446 } 00:33:18.446 ], 00:33:18.446 "driver_specific": {} 00:33:18.446 } 00:33:18.446 ] 00:33:18.446 11:15:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:33:18.446 11:15:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:33:18.446 11:15:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:33:18.446 11:15:25 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:33:18.446 11:15:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:18.446 11:15:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:18.446 11:15:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:18.446 11:15:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:18.446 11:15:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:18.446 11:15:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:18.446 11:15:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:18.446 11:15:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:18.446 11:15:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:18.446 11:15:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:18.446 11:15:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:18.446 11:15:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:18.446 "name": "Existed_Raid", 00:33:18.446 "uuid": "55b67789-aff9-4772-9253-3e737d6987db", 00:33:18.446 "strip_size_kb": 0, 00:33:18.446 "state": "online", 00:33:18.446 "raid_level": "raid1", 
00:33:18.446 "superblock": true, 00:33:18.446 "num_base_bdevs": 2, 00:33:18.446 "num_base_bdevs_discovered": 2, 00:33:18.446 "num_base_bdevs_operational": 2, 00:33:18.446 "base_bdevs_list": [ 00:33:18.446 { 00:33:18.446 "name": "BaseBdev1", 00:33:18.446 "uuid": "4a312dbe-63c7-42c5-930f-b81d02224215", 00:33:18.446 "is_configured": true, 00:33:18.446 "data_offset": 256, 00:33:18.446 "data_size": 7936 00:33:18.446 }, 00:33:18.446 { 00:33:18.446 "name": "BaseBdev2", 00:33:18.446 "uuid": "823fe150-2b3d-4111-97c3-6e28a7881fca", 00:33:18.446 "is_configured": true, 00:33:18.446 "data_offset": 256, 00:33:18.446 "data_size": 7936 00:33:18.446 } 00:33:18.446 ] 00:33:18.446 }' 00:33:18.446 11:15:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:18.446 11:15:25 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:19.011 11:15:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:33:19.011 11:15:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:33:19.011 11:15:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:33:19.011 11:15:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:33:19.011 11:15:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:33:19.011 11:15:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:33:19.011 11:15:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:33:19.011 11:15:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:33:19.269 [2024-07-25 11:15:26.310598] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:19.269 11:15:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:33:19.269 "name": "Existed_Raid", 00:33:19.269 "aliases": [ 00:33:19.269 "55b67789-aff9-4772-9253-3e737d6987db" 00:33:19.269 ], 00:33:19.269 "product_name": "Raid Volume", 00:33:19.269 "block_size": 4128, 00:33:19.269 "num_blocks": 7936, 00:33:19.269 "uuid": "55b67789-aff9-4772-9253-3e737d6987db", 00:33:19.269 "md_size": 32, 00:33:19.269 "md_interleave": true, 00:33:19.269 "dif_type": 0, 00:33:19.269 "assigned_rate_limits": { 00:33:19.269 "rw_ios_per_sec": 0, 00:33:19.269 "rw_mbytes_per_sec": 0, 00:33:19.269 "r_mbytes_per_sec": 0, 00:33:19.269 "w_mbytes_per_sec": 0 00:33:19.269 }, 00:33:19.269 "claimed": false, 00:33:19.269 "zoned": false, 00:33:19.269 "supported_io_types": { 00:33:19.269 "read": true, 00:33:19.269 "write": true, 00:33:19.269 "unmap": false, 00:33:19.269 "flush": false, 00:33:19.269 "reset": true, 00:33:19.269 "nvme_admin": false, 00:33:19.269 "nvme_io": false, 00:33:19.269 "nvme_io_md": false, 00:33:19.269 "write_zeroes": true, 00:33:19.269 "zcopy": false, 00:33:19.269 "get_zone_info": false, 00:33:19.269 "zone_management": false, 00:33:19.269 "zone_append": false, 00:33:19.269 "compare": false, 00:33:19.269 "compare_and_write": false, 00:33:19.269 "abort": false, 00:33:19.269 "seek_hole": false, 00:33:19.269 "seek_data": false, 00:33:19.269 "copy": false, 00:33:19.269 "nvme_iov_md": false 00:33:19.269 }, 00:33:19.269 "memory_domains": [ 00:33:19.269 { 00:33:19.269 "dma_device_id": "system", 00:33:19.269 "dma_device_type": 1 00:33:19.269 }, 00:33:19.269 { 00:33:19.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:19.269 "dma_device_type": 2 00:33:19.269 }, 00:33:19.269 { 00:33:19.269 "dma_device_id": "system", 00:33:19.269 "dma_device_type": 1 
00:33:19.269 }, 00:33:19.269 { 00:33:19.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:19.269 "dma_device_type": 2 00:33:19.269 } 00:33:19.269 ], 00:33:19.269 "driver_specific": { 00:33:19.269 "raid": { 00:33:19.269 "uuid": "55b67789-aff9-4772-9253-3e737d6987db", 00:33:19.269 "strip_size_kb": 0, 00:33:19.269 "state": "online", 00:33:19.269 "raid_level": "raid1", 00:33:19.269 "superblock": true, 00:33:19.269 "num_base_bdevs": 2, 00:33:19.269 "num_base_bdevs_discovered": 2, 00:33:19.269 "num_base_bdevs_operational": 2, 00:33:19.269 "base_bdevs_list": [ 00:33:19.269 { 00:33:19.269 "name": "BaseBdev1", 00:33:19.269 "uuid": "4a312dbe-63c7-42c5-930f-b81d02224215", 00:33:19.269 "is_configured": true, 00:33:19.269 "data_offset": 256, 00:33:19.269 "data_size": 7936 00:33:19.269 }, 00:33:19.269 { 00:33:19.269 "name": "BaseBdev2", 00:33:19.269 "uuid": "823fe150-2b3d-4111-97c3-6e28a7881fca", 00:33:19.269 "is_configured": true, 00:33:19.269 "data_offset": 256, 00:33:19.269 "data_size": 7936 00:33:19.269 } 00:33:19.269 ] 00:33:19.269 } 00:33:19.269 } 00:33:19.269 }' 00:33:19.269 11:15:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:19.269 11:15:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:33:19.269 BaseBdev2' 00:33:19.269 11:15:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:19.269 11:15:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:33:19.269 11:15:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:19.527 11:15:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # 
base_bdev_info='{ 00:33:19.527 "name": "BaseBdev1", 00:33:19.527 "aliases": [ 00:33:19.527 "4a312dbe-63c7-42c5-930f-b81d02224215" 00:33:19.527 ], 00:33:19.527 "product_name": "Malloc disk", 00:33:19.527 "block_size": 4128, 00:33:19.527 "num_blocks": 8192, 00:33:19.527 "uuid": "4a312dbe-63c7-42c5-930f-b81d02224215", 00:33:19.527 "md_size": 32, 00:33:19.527 "md_interleave": true, 00:33:19.527 "dif_type": 0, 00:33:19.527 "assigned_rate_limits": { 00:33:19.527 "rw_ios_per_sec": 0, 00:33:19.527 "rw_mbytes_per_sec": 0, 00:33:19.527 "r_mbytes_per_sec": 0, 00:33:19.527 "w_mbytes_per_sec": 0 00:33:19.527 }, 00:33:19.527 "claimed": true, 00:33:19.527 "claim_type": "exclusive_write", 00:33:19.527 "zoned": false, 00:33:19.527 "supported_io_types": { 00:33:19.527 "read": true, 00:33:19.527 "write": true, 00:33:19.527 "unmap": true, 00:33:19.527 "flush": true, 00:33:19.527 "reset": true, 00:33:19.527 "nvme_admin": false, 00:33:19.527 "nvme_io": false, 00:33:19.527 "nvme_io_md": false, 00:33:19.527 "write_zeroes": true, 00:33:19.527 "zcopy": true, 00:33:19.527 "get_zone_info": false, 00:33:19.527 "zone_management": false, 00:33:19.527 "zone_append": false, 00:33:19.527 "compare": false, 00:33:19.527 "compare_and_write": false, 00:33:19.527 "abort": true, 00:33:19.527 "seek_hole": false, 00:33:19.527 "seek_data": false, 00:33:19.527 "copy": true, 00:33:19.527 "nvme_iov_md": false 00:33:19.527 }, 00:33:19.527 "memory_domains": [ 00:33:19.527 { 00:33:19.527 "dma_device_id": "system", 00:33:19.527 "dma_device_type": 1 00:33:19.527 }, 00:33:19.527 { 00:33:19.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:19.527 "dma_device_type": 2 00:33:19.527 } 00:33:19.527 ], 00:33:19.527 "driver_specific": {} 00:33:19.527 }' 00:33:19.527 11:15:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:19.527 11:15:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:19.784 11:15:26 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:33:19.784 11:15:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:19.784 11:15:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:19.784 11:15:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:33:19.784 11:15:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:19.784 11:15:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:19.784 11:15:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:33:19.784 11:15:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:19.784 11:15:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:20.047 11:15:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:33:20.047 11:15:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:20.047 11:15:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:33:20.047 11:15:26 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:20.047 11:15:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:20.047 "name": "BaseBdev2", 00:33:20.047 "aliases": [ 00:33:20.047 "823fe150-2b3d-4111-97c3-6e28a7881fca" 00:33:20.047 ], 00:33:20.047 "product_name": "Malloc disk", 00:33:20.047 "block_size": 4128, 00:33:20.047 
"num_blocks": 8192, 00:33:20.047 "uuid": "823fe150-2b3d-4111-97c3-6e28a7881fca", 00:33:20.047 "md_size": 32, 00:33:20.047 "md_interleave": true, 00:33:20.047 "dif_type": 0, 00:33:20.047 "assigned_rate_limits": { 00:33:20.047 "rw_ios_per_sec": 0, 00:33:20.047 "rw_mbytes_per_sec": 0, 00:33:20.047 "r_mbytes_per_sec": 0, 00:33:20.047 "w_mbytes_per_sec": 0 00:33:20.047 }, 00:33:20.047 "claimed": true, 00:33:20.047 "claim_type": "exclusive_write", 00:33:20.047 "zoned": false, 00:33:20.047 "supported_io_types": { 00:33:20.047 "read": true, 00:33:20.047 "write": true, 00:33:20.047 "unmap": true, 00:33:20.047 "flush": true, 00:33:20.047 "reset": true, 00:33:20.047 "nvme_admin": false, 00:33:20.047 "nvme_io": false, 00:33:20.047 "nvme_io_md": false, 00:33:20.047 "write_zeroes": true, 00:33:20.047 "zcopy": true, 00:33:20.047 "get_zone_info": false, 00:33:20.047 "zone_management": false, 00:33:20.047 "zone_append": false, 00:33:20.047 "compare": false, 00:33:20.047 "compare_and_write": false, 00:33:20.047 "abort": true, 00:33:20.047 "seek_hole": false, 00:33:20.047 "seek_data": false, 00:33:20.047 "copy": true, 00:33:20.047 "nvme_iov_md": false 00:33:20.047 }, 00:33:20.047 "memory_domains": [ 00:33:20.047 { 00:33:20.047 "dma_device_id": "system", 00:33:20.047 "dma_device_type": 1 00:33:20.047 }, 00:33:20.047 { 00:33:20.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:20.047 "dma_device_type": 2 00:33:20.047 } 00:33:20.047 ], 00:33:20.047 "driver_specific": {} 00:33:20.047 }' 00:33:20.048 11:15:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:20.306 11:15:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:20.306 11:15:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:33:20.306 11:15:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:20.306 11:15:27 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:20.306 11:15:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:33:20.306 11:15:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:20.306 11:15:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:20.564 11:15:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:33:20.564 11:15:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:20.564 11:15:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:20.564 11:15:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:33:20.564 11:15:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:33:20.825 [2024-07-25 11:15:27.714135] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:20.825 11:15:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # local expected_state 00:33:20.825 11:15:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:33:20.825 11:15:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:33:20.825 11:15:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:33:20.825 11:15:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:33:20.825 11:15:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@281 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 1 00:33:20.825 11:15:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:20.825 11:15:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:20.825 11:15:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:20.825 11:15:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:20.825 11:15:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:20.825 11:15:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:20.825 11:15:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:20.825 11:15:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:20.825 11:15:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:20.825 11:15:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:20.825 11:15:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:21.105 11:15:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:21.105 "name": "Existed_Raid", 00:33:21.105 "uuid": "55b67789-aff9-4772-9253-3e737d6987db", 00:33:21.105 "strip_size_kb": 0, 00:33:21.105 "state": "online", 00:33:21.105 "raid_level": "raid1", 00:33:21.105 "superblock": true, 00:33:21.105 "num_base_bdevs": 2, 00:33:21.105 
"num_base_bdevs_discovered": 1, 00:33:21.105 "num_base_bdevs_operational": 1, 00:33:21.105 "base_bdevs_list": [ 00:33:21.105 { 00:33:21.105 "name": null, 00:33:21.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:21.105 "is_configured": false, 00:33:21.105 "data_offset": 256, 00:33:21.105 "data_size": 7936 00:33:21.105 }, 00:33:21.105 { 00:33:21.105 "name": "BaseBdev2", 00:33:21.105 "uuid": "823fe150-2b3d-4111-97c3-6e28a7881fca", 00:33:21.105 "is_configured": true, 00:33:21.105 "data_offset": 256, 00:33:21.105 "data_size": 7936 00:33:21.105 } 00:33:21.105 ] 00:33:21.105 }' 00:33:21.105 11:15:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:21.105 11:15:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:21.685 11:15:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:33:21.685 11:15:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:33:21.685 11:15:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:21.685 11:15:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:33:21.685 11:15:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:33:21.685 11:15:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:21.685 11:15:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@291 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:33:21.943 [2024-07-25 11:15:28.972277] 
bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:21.943 [2024-07-25 11:15:28.972392] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:22.201 [2024-07-25 11:15:29.108066] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:22.201 [2024-07-25 11:15:29.108117] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:22.201 [2024-07-25 11:15:29.108136] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:33:22.201 11:15:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:33:22.201 11:15:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:33:22.201 11:15:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:22.201 11:15:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:33:22.460 11:15:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:33:22.460 11:15:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:33:22.460 11:15:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:33:22.460 11:15:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@341 -- # killprocess 3759756 00:33:22.460 11:15:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 3759756 ']' 00:33:22.460 11:15:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 3759756 00:33:22.460 11:15:29 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:33:22.460 11:15:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:22.460 11:15:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3759756 00:33:22.460 11:15:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:22.460 11:15:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:22.460 11:15:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3759756' 00:33:22.460 killing process with pid 3759756 00:33:22.460 11:15:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 3759756 00:33:22.460 [2024-07-25 11:15:29.416013] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:22.460 11:15:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 3759756 00:33:22.460 [2024-07-25 11:15:29.440258] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:24.361 11:15:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@343 -- # return 0 00:33:24.361 00:33:24.361 real 0m11.960s 00:33:24.361 user 0m19.543s 00:33:24.361 sys 0m2.073s 00:33:24.361 11:15:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:24.361 11:15:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:24.361 ************************************ 00:33:24.361 END TEST raid_state_function_test_sb_md_interleaved 00:33:24.361 ************************************ 00:33:24.361 11:15:31 bdev_raid -- bdev/bdev_raid.sh@993 -- # 
run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:33:24.361 11:15:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:33:24.361 11:15:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:24.361 11:15:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:24.361 ************************************ 00:33:24.361 START TEST raid_superblock_test_md_interleaved 00:33:24.361 ************************************ 00:33:24.361 11:15:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:33:24.361 11:15:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # local raid_level=raid1 00:33:24.361 11:15:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@409 -- # local num_base_bdevs=2 00:33:24.361 11:15:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # base_bdevs_malloc=() 00:33:24.361 11:15:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # local base_bdevs_malloc 00:33:24.361 11:15:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # base_bdevs_pt=() 00:33:24.361 11:15:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # local base_bdevs_pt 00:33:24.361 11:15:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # base_bdevs_pt_uuid=() 00:33:24.361 11:15:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # local base_bdevs_pt_uuid 00:33:24.361 11:15:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # local raid_bdev_name=raid_bdev1 00:33:24.361 11:15:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@414 -- # local strip_size 00:33:24.361 11:15:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # local strip_size_create_arg 00:33:24.361 11:15:31 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local raid_bdev_uuid 00:33:24.361 11:15:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local raid_bdev 00:33:24.361 11:15:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # '[' raid1 '!=' raid1 ']' 00:33:24.361 11:15:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # strip_size=0 00:33:24.361 11:15:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@427 -- # raid_pid=3761872 00:33:24.361 11:15:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@428 -- # waitforlisten 3761872 /var/tmp/spdk-raid.sock 00:33:24.361 11:15:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:33:24.361 11:15:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 3761872 ']' 00:33:24.361 11:15:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:33:24.361 11:15:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:24.361 11:15:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:33:24.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:33:24.361 11:15:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:24.361 11:15:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:24.361 [2024-07-25 11:15:31.316738] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:33:24.361 [2024-07-25 11:15:31.316855] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3761872 ] 00:33:24.361 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.361 EAL: Requested device 0000:3d:01.0 cannot be used 00:33:24.361 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.361 EAL: Requested device 0000:3d:01.1 cannot be used 00:33:24.361 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.361 EAL: Requested device 0000:3d:01.2 cannot be used 00:33:24.361 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.361 EAL: Requested device 0000:3d:01.3 cannot be used 00:33:24.361 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.361 EAL: Requested device 0000:3d:01.4 cannot be used 00:33:24.361 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.361 EAL: Requested device 0000:3d:01.5 cannot be used 00:33:24.361 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.361 EAL: Requested device 0000:3d:01.6 cannot be used 00:33:24.361 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.361 EAL: Requested device 0000:3d:01.7 cannot be used 00:33:24.361 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.361 EAL: Requested device 0000:3d:02.0 cannot be used 00:33:24.361 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.361 EAL: Requested device 0000:3d:02.1 cannot be used 00:33:24.361 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.361 EAL: Requested device 0000:3d:02.2 cannot be used 00:33:24.361 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.361 EAL: Requested device 0000:3d:02.3 cannot be used 
00:33:24.361 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.361 EAL: Requested device 0000:3d:02.4 cannot be used 00:33:24.361 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.361 EAL: Requested device 0000:3d:02.5 cannot be used 00:33:24.361 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.361 EAL: Requested device 0000:3d:02.6 cannot be used 00:33:24.361 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.361 EAL: Requested device 0000:3d:02.7 cannot be used 00:33:24.361 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.361 EAL: Requested device 0000:3f:01.0 cannot be used 00:33:24.361 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.361 EAL: Requested device 0000:3f:01.1 cannot be used 00:33:24.361 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.361 EAL: Requested device 0000:3f:01.2 cannot be used 00:33:24.361 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.361 EAL: Requested device 0000:3f:01.3 cannot be used 00:33:24.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.362 EAL: Requested device 0000:3f:01.4 cannot be used 00:33:24.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.362 EAL: Requested device 0000:3f:01.5 cannot be used 00:33:24.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.362 EAL: Requested device 0000:3f:01.6 cannot be used 00:33:24.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.362 EAL: Requested device 0000:3f:01.7 cannot be used 00:33:24.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.362 EAL: Requested device 0000:3f:02.0 cannot be used 00:33:24.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.362 EAL: Requested device 0000:3f:02.1 cannot be used 00:33:24.362 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.362 EAL: Requested device 0000:3f:02.2 cannot be used 00:33:24.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.362 EAL: Requested device 0000:3f:02.3 cannot be used 00:33:24.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.362 EAL: Requested device 0000:3f:02.4 cannot be used 00:33:24.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.362 EAL: Requested device 0000:3f:02.5 cannot be used 00:33:24.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.362 EAL: Requested device 0000:3f:02.6 cannot be used 00:33:24.362 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:24.362 EAL: Requested device 0000:3f:02.7 cannot be used 00:33:24.619 [2024-07-25 11:15:31.538706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:24.877 [2024-07-25 11:15:31.821513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:25.134 [2024-07-25 11:15:32.168338] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:25.134 [2024-07-25 11:15:32.168379] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:25.392 11:15:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:25.392 11:15:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:33:25.392 11:15:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i = 1 )) 00:33:25.392 11:15:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:33:25.392 11:15:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc1 00:33:25.392 11:15:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt1 00:33:25.392 
11:15:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:33:25.392 11:15:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:25.392 11:15:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:33:25.392 11:15:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:25.392 11:15:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:33:25.650 malloc1 00:33:25.650 11:15:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:25.908 [2024-07-25 11:15:32.865731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:25.908 [2024-07-25 11:15:32.865795] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:25.909 [2024-07-25 11:15:32.865826] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:33:25.909 [2024-07-25 11:15:32.865842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:25.909 [2024-07-25 11:15:32.868272] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:25.909 [2024-07-25 11:15:32.868306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:25.909 pt1 00:33:25.909 11:15:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:33:25.909 11:15:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- 
# (( i <= num_base_bdevs )) 00:33:25.909 11:15:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # local bdev_malloc=malloc2 00:33:25.909 11:15:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@433 -- # local bdev_pt=pt2 00:33:25.909 11:15:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:33:25.909 11:15:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:25.909 11:15:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@437 -- # base_bdevs_pt+=($bdev_pt) 00:33:25.909 11:15:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@438 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:25.909 11:15:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:33:26.187 malloc2 00:33:26.187 11:15:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:26.444 [2024-07-25 11:15:33.379073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:26.444 [2024-07-25 11:15:33.379132] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:26.444 [2024-07-25 11:15:33.379169] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:33:26.444 [2024-07-25 11:15:33.379184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:26.444 [2024-07-25 11:15:33.381595] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:26.444 [2024-07-25 11:15:33.381635] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:26.444 pt2 00:33:26.444 11:15:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i++ )) 00:33:26.444 11:15:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # (( i <= num_base_bdevs )) 00:33:26.444 11:15:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@445 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:33:26.702 [2024-07-25 11:15:33.607712] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:26.702 [2024-07-25 11:15:33.610084] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:26.702 [2024-07-25 11:15:33.610315] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:33:26.702 [2024-07-25 11:15:33.610333] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:26.702 [2024-07-25 11:15:33.610448] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010570 00:33:26.702 [2024-07-25 11:15:33.610565] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:33:26.702 [2024-07-25 11:15:33.610583] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:33:26.702 [2024-07-25 11:15:33.610701] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:26.702 11:15:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@446 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:26.702 11:15:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:26.702 11:15:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:26.702 11:15:33 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:26.702 11:15:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:26.702 11:15:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:26.702 11:15:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:26.702 11:15:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:26.702 11:15:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:26.702 11:15:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:26.702 11:15:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:26.703 11:15:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:26.961 11:15:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:26.961 "name": "raid_bdev1", 00:33:26.961 "uuid": "e9068593-da2a-47fd-9e24-c285e420f2ed", 00:33:26.961 "strip_size_kb": 0, 00:33:26.961 "state": "online", 00:33:26.961 "raid_level": "raid1", 00:33:26.961 "superblock": true, 00:33:26.961 "num_base_bdevs": 2, 00:33:26.961 "num_base_bdevs_discovered": 2, 00:33:26.961 "num_base_bdevs_operational": 2, 00:33:26.961 "base_bdevs_list": [ 00:33:26.961 { 00:33:26.961 "name": "pt1", 00:33:26.961 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:26.961 "is_configured": true, 00:33:26.961 "data_offset": 256, 00:33:26.961 "data_size": 7936 00:33:26.961 }, 00:33:26.961 { 00:33:26.961 "name": "pt2", 00:33:26.961 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:33:26.961 "is_configured": true, 00:33:26.961 "data_offset": 256, 00:33:26.961 "data_size": 7936 00:33:26.961 } 00:33:26.961 ] 00:33:26.961 }' 00:33:26.961 11:15:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:26.961 11:15:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:27.528 11:15:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # verify_raid_bdev_properties raid_bdev1 00:33:27.528 11:15:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:33:27.528 11:15:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:33:27.528 11:15:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:33:27.528 11:15:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:33:27.528 11:15:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:33:27.528 11:15:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:33:27.528 11:15:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:27.528 [2024-07-25 11:15:34.638815] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:27.787 11:15:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:33:27.787 "name": "raid_bdev1", 00:33:27.787 "aliases": [ 00:33:27.787 "e9068593-da2a-47fd-9e24-c285e420f2ed" 00:33:27.787 ], 00:33:27.787 "product_name": "Raid Volume", 00:33:27.787 "block_size": 4128, 00:33:27.787 "num_blocks": 7936, 00:33:27.787 "uuid": 
"e9068593-da2a-47fd-9e24-c285e420f2ed", 00:33:27.787 "md_size": 32, 00:33:27.787 "md_interleave": true, 00:33:27.787 "dif_type": 0, 00:33:27.787 "assigned_rate_limits": { 00:33:27.787 "rw_ios_per_sec": 0, 00:33:27.787 "rw_mbytes_per_sec": 0, 00:33:27.787 "r_mbytes_per_sec": 0, 00:33:27.787 "w_mbytes_per_sec": 0 00:33:27.787 }, 00:33:27.787 "claimed": false, 00:33:27.787 "zoned": false, 00:33:27.787 "supported_io_types": { 00:33:27.787 "read": true, 00:33:27.787 "write": true, 00:33:27.787 "unmap": false, 00:33:27.787 "flush": false, 00:33:27.787 "reset": true, 00:33:27.787 "nvme_admin": false, 00:33:27.787 "nvme_io": false, 00:33:27.787 "nvme_io_md": false, 00:33:27.787 "write_zeroes": true, 00:33:27.787 "zcopy": false, 00:33:27.787 "get_zone_info": false, 00:33:27.787 "zone_management": false, 00:33:27.787 "zone_append": false, 00:33:27.787 "compare": false, 00:33:27.787 "compare_and_write": false, 00:33:27.787 "abort": false, 00:33:27.787 "seek_hole": false, 00:33:27.787 "seek_data": false, 00:33:27.787 "copy": false, 00:33:27.787 "nvme_iov_md": false 00:33:27.787 }, 00:33:27.787 "memory_domains": [ 00:33:27.787 { 00:33:27.787 "dma_device_id": "system", 00:33:27.787 "dma_device_type": 1 00:33:27.787 }, 00:33:27.787 { 00:33:27.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:27.787 "dma_device_type": 2 00:33:27.787 }, 00:33:27.787 { 00:33:27.787 "dma_device_id": "system", 00:33:27.787 "dma_device_type": 1 00:33:27.787 }, 00:33:27.787 { 00:33:27.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:27.787 "dma_device_type": 2 00:33:27.787 } 00:33:27.787 ], 00:33:27.787 "driver_specific": { 00:33:27.787 "raid": { 00:33:27.787 "uuid": "e9068593-da2a-47fd-9e24-c285e420f2ed", 00:33:27.787 "strip_size_kb": 0, 00:33:27.787 "state": "online", 00:33:27.787 "raid_level": "raid1", 00:33:27.787 "superblock": true, 00:33:27.787 "num_base_bdevs": 2, 00:33:27.787 "num_base_bdevs_discovered": 2, 00:33:27.787 "num_base_bdevs_operational": 2, 00:33:27.787 "base_bdevs_list": [ 
00:33:27.787 { 00:33:27.787 "name": "pt1", 00:33:27.787 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:27.787 "is_configured": true, 00:33:27.787 "data_offset": 256, 00:33:27.787 "data_size": 7936 00:33:27.787 }, 00:33:27.787 { 00:33:27.787 "name": "pt2", 00:33:27.787 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:27.787 "is_configured": true, 00:33:27.787 "data_offset": 256, 00:33:27.787 "data_size": 7936 00:33:27.787 } 00:33:27.787 ] 00:33:27.787 } 00:33:27.787 } 00:33:27.787 }' 00:33:27.787 11:15:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:27.787 11:15:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:33:27.787 pt2' 00:33:27.787 11:15:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:27.787 11:15:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:33:27.787 11:15:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:28.046 11:15:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:28.046 "name": "pt1", 00:33:28.046 "aliases": [ 00:33:28.046 "00000000-0000-0000-0000-000000000001" 00:33:28.046 ], 00:33:28.046 "product_name": "passthru", 00:33:28.046 "block_size": 4128, 00:33:28.046 "num_blocks": 8192, 00:33:28.046 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:28.046 "md_size": 32, 00:33:28.046 "md_interleave": true, 00:33:28.046 "dif_type": 0, 00:33:28.046 "assigned_rate_limits": { 00:33:28.046 "rw_ios_per_sec": 0, 00:33:28.046 "rw_mbytes_per_sec": 0, 00:33:28.046 "r_mbytes_per_sec": 0, 00:33:28.046 "w_mbytes_per_sec": 0 00:33:28.046 }, 00:33:28.046 "claimed": true, 
00:33:28.046 "claim_type": "exclusive_write", 00:33:28.046 "zoned": false, 00:33:28.046 "supported_io_types": { 00:33:28.046 "read": true, 00:33:28.046 "write": true, 00:33:28.046 "unmap": true, 00:33:28.046 "flush": true, 00:33:28.046 "reset": true, 00:33:28.046 "nvme_admin": false, 00:33:28.046 "nvme_io": false, 00:33:28.046 "nvme_io_md": false, 00:33:28.046 "write_zeroes": true, 00:33:28.046 "zcopy": true, 00:33:28.046 "get_zone_info": false, 00:33:28.046 "zone_management": false, 00:33:28.046 "zone_append": false, 00:33:28.046 "compare": false, 00:33:28.046 "compare_and_write": false, 00:33:28.046 "abort": true, 00:33:28.046 "seek_hole": false, 00:33:28.046 "seek_data": false, 00:33:28.046 "copy": true, 00:33:28.046 "nvme_iov_md": false 00:33:28.046 }, 00:33:28.046 "memory_domains": [ 00:33:28.046 { 00:33:28.046 "dma_device_id": "system", 00:33:28.046 "dma_device_type": 1 00:33:28.046 }, 00:33:28.046 { 00:33:28.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:28.046 "dma_device_type": 2 00:33:28.046 } 00:33:28.046 ], 00:33:28.046 "driver_specific": { 00:33:28.046 "passthru": { 00:33:28.046 "name": "pt1", 00:33:28.046 "base_bdev_name": "malloc1" 00:33:28.046 } 00:33:28.046 } 00:33:28.046 }' 00:33:28.046 11:15:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:28.046 11:15:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:28.046 11:15:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:33:28.046 11:15:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:28.046 11:15:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:28.046 11:15:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:33:28.046 11:15:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:33:28.046 11:15:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:28.304 11:15:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:33:28.304 11:15:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:28.304 11:15:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:28.304 11:15:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:33:28.304 11:15:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:28.304 11:15:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:28.304 11:15:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:33:28.562 11:15:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:28.562 "name": "pt2", 00:33:28.562 "aliases": [ 00:33:28.562 "00000000-0000-0000-0000-000000000002" 00:33:28.562 ], 00:33:28.563 "product_name": "passthru", 00:33:28.563 "block_size": 4128, 00:33:28.563 "num_blocks": 8192, 00:33:28.563 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:28.563 "md_size": 32, 00:33:28.563 "md_interleave": true, 00:33:28.563 "dif_type": 0, 00:33:28.563 "assigned_rate_limits": { 00:33:28.563 "rw_ios_per_sec": 0, 00:33:28.563 "rw_mbytes_per_sec": 0, 00:33:28.563 "r_mbytes_per_sec": 0, 00:33:28.563 "w_mbytes_per_sec": 0 00:33:28.563 }, 00:33:28.563 "claimed": true, 00:33:28.563 "claim_type": "exclusive_write", 00:33:28.563 "zoned": false, 00:33:28.563 "supported_io_types": { 00:33:28.563 "read": true, 00:33:28.563 "write": true, 00:33:28.563 "unmap": true, 00:33:28.563 "flush": true, 00:33:28.563 "reset": 
true, 00:33:28.563 "nvme_admin": false, 00:33:28.563 "nvme_io": false, 00:33:28.563 "nvme_io_md": false, 00:33:28.563 "write_zeroes": true, 00:33:28.563 "zcopy": true, 00:33:28.563 "get_zone_info": false, 00:33:28.563 "zone_management": false, 00:33:28.563 "zone_append": false, 00:33:28.563 "compare": false, 00:33:28.563 "compare_and_write": false, 00:33:28.563 "abort": true, 00:33:28.563 "seek_hole": false, 00:33:28.563 "seek_data": false, 00:33:28.563 "copy": true, 00:33:28.563 "nvme_iov_md": false 00:33:28.563 }, 00:33:28.563 "memory_domains": [ 00:33:28.563 { 00:33:28.563 "dma_device_id": "system", 00:33:28.563 "dma_device_type": 1 00:33:28.563 }, 00:33:28.563 { 00:33:28.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:28.563 "dma_device_type": 2 00:33:28.563 } 00:33:28.563 ], 00:33:28.563 "driver_specific": { 00:33:28.563 "passthru": { 00:33:28.563 "name": "pt2", 00:33:28.563 "base_bdev_name": "malloc2" 00:33:28.563 } 00:33:28.563 } 00:33:28.563 }' 00:33:28.563 11:15:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:28.563 11:15:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:28.563 11:15:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:33:28.563 11:15:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:28.563 11:15:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:28.821 11:15:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:33:28.821 11:15:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:28.821 11:15:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:28.821 11:15:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 
00:33:28.822 11:15:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:28.822 11:15:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:28.822 11:15:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:33:28.822 11:15:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:28.822 11:15:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # jq -r '.[] | .uuid' 00:33:29.080 [2024-07-25 11:15:36.054645] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:29.080 11:15:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # raid_bdev_uuid=e9068593-da2a-47fd-9e24-c285e420f2ed 00:33:29.080 11:15:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' -z e9068593-da2a-47fd-9e24-c285e420f2ed ']' 00:33:29.080 11:15:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@456 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:29.339 [2024-07-25 11:15:36.278912] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:29.339 [2024-07-25 11:15:36.278944] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:29.339 [2024-07-25 11:15:36.279028] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:29.339 [2024-07-25 11:15:36.279098] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:29.339 [2024-07-25 11:15:36.279121] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:33:29.339 11:15:36 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:29.339 11:15:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # jq -r '.[]' 00:33:29.597 11:15:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # raid_bdev= 00:33:29.597 11:15:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # '[' -n '' ']' 00:33:29.597 11:15:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:33:29.597 11:15:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:33:29.855 11:15:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@463 -- # for i in "${base_bdevs_pt[@]}" 00:33:29.855 11:15:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:33:30.113 11:15:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@466 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:33:30.113 11:15:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@466 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:33:30.113 11:15:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@466 -- # '[' false == true ']' 00:33:30.114 11:15:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@472 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:33:30.114 11:15:37 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:33:30.114 11:15:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:33:30.114 11:15:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:33:30.114 11:15:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:30.114 11:15:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:33:30.114 11:15:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:30.114 11:15:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:33:30.114 11:15:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:30.114 11:15:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:33:30.114 11:15:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:33:30.114 11:15:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:33:30.371 [2024-07-25 11:15:37.409915] bdev_raid.c:3312:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev malloc1 is claimed 00:33:30.371 [2024-07-25 11:15:37.412220] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:33:30.371 [2024-07-25 11:15:37.412293] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:33:30.371 [2024-07-25 11:15:37.412350] bdev_raid.c:3219:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:33:30.371 [2024-07-25 11:15:37.412373] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:30.371 [2024-07-25 11:15:37.412390] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:33:30.371 request: 00:33:30.371 { 00:33:30.371 "name": "raid_bdev1", 00:33:30.371 "raid_level": "raid1", 00:33:30.371 "base_bdevs": [ 00:33:30.371 "malloc1", 00:33:30.371 "malloc2" 00:33:30.371 ], 00:33:30.371 "superblock": false, 00:33:30.371 "method": "bdev_raid_create", 00:33:30.371 "req_id": 1 00:33:30.371 } 00:33:30.371 Got JSON-RPC error response 00:33:30.371 response: 00:33:30.371 { 00:33:30.371 "code": -17, 00:33:30.371 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:33:30.371 } 00:33:30.371 11:15:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:33:30.371 11:15:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:30.371 11:15:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:30.371 11:15:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:30.371 11:15:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@474 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:30.371 11:15:37 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@474 -- # jq -r '.[]' 00:33:30.629 11:15:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@474 -- # raid_bdev= 00:33:30.629 11:15:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@475 -- # '[' -n '' ']' 00:33:30.629 11:15:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@480 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:30.888 [2024-07-25 11:15:37.863084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:30.888 [2024-07-25 11:15:37.863162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:30.888 [2024-07-25 11:15:37.863186] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040e80 00:33:30.888 [2024-07-25 11:15:37.863204] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:30.888 [2024-07-25 11:15:37.865623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:30.888 [2024-07-25 11:15:37.865659] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:30.888 [2024-07-25 11:15:37.865718] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:33:30.888 [2024-07-25 11:15:37.865797] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:30.888 pt1 00:33:30.888 11:15:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:33:30.888 11:15:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:30.888 11:15:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:30.888 11:15:37 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:30.888 11:15:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:30.888 11:15:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:30.888 11:15:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:30.888 11:15:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:30.888 11:15:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:30.888 11:15:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:30.888 11:15:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:30.888 11:15:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:31.146 11:15:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:31.146 "name": "raid_bdev1", 00:33:31.146 "uuid": "e9068593-da2a-47fd-9e24-c285e420f2ed", 00:33:31.146 "strip_size_kb": 0, 00:33:31.146 "state": "configuring", 00:33:31.146 "raid_level": "raid1", 00:33:31.146 "superblock": true, 00:33:31.146 "num_base_bdevs": 2, 00:33:31.146 "num_base_bdevs_discovered": 1, 00:33:31.146 "num_base_bdevs_operational": 2, 00:33:31.146 "base_bdevs_list": [ 00:33:31.146 { 00:33:31.146 "name": "pt1", 00:33:31.146 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:31.146 "is_configured": true, 00:33:31.146 "data_offset": 256, 00:33:31.146 "data_size": 7936 00:33:31.146 }, 00:33:31.146 { 00:33:31.146 "name": null, 00:33:31.146 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:33:31.146 "is_configured": false, 00:33:31.146 "data_offset": 256, 00:33:31.146 "data_size": 7936 00:33:31.146 } 00:33:31.146 ] 00:33:31.146 }' 00:33:31.146 11:15:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:31.146 11:15:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:31.713 11:15:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@485 -- # '[' 2 -gt 2 ']' 00:33:31.713 11:15:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # (( i = 1 )) 00:33:31.713 11:15:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:33:31.713 11:15:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@494 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:31.972 [2024-07-25 11:15:38.873820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:31.972 [2024-07-25 11:15:38.873889] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:31.972 [2024-07-25 11:15:38.873915] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:33:31.972 [2024-07-25 11:15:38.873933] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:31.972 [2024-07-25 11:15:38.874181] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:31.972 [2024-07-25 11:15:38.874206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:31.972 [2024-07-25 11:15:38.874270] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:31.972 [2024-07-25 11:15:38.874310] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:33:31.972 [2024-07-25 11:15:38.874440] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:33:31.972 [2024-07-25 11:15:38.874458] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:31.972 [2024-07-25 11:15:38.874539] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:33:31.972 [2024-07-25 11:15:38.874656] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:33:31.972 [2024-07-25 11:15:38.874670] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:33:31.972 [2024-07-25 11:15:38.874756] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:31.972 pt2 00:33:31.972 11:15:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # (( i++ )) 00:33:31.972 11:15:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # (( i < num_base_bdevs )) 00:33:31.972 11:15:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@498 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:31.972 11:15:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:31.972 11:15:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:31.972 11:15:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:31.972 11:15:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:31.972 11:15:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:31.972 11:15:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:31.972 11:15:38 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:31.972 11:15:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:31.972 11:15:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:31.972 11:15:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:31.972 11:15:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:32.231 11:15:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:32.231 "name": "raid_bdev1", 00:33:32.231 "uuid": "e9068593-da2a-47fd-9e24-c285e420f2ed", 00:33:32.231 "strip_size_kb": 0, 00:33:32.231 "state": "online", 00:33:32.231 "raid_level": "raid1", 00:33:32.231 "superblock": true, 00:33:32.231 "num_base_bdevs": 2, 00:33:32.231 "num_base_bdevs_discovered": 2, 00:33:32.231 "num_base_bdevs_operational": 2, 00:33:32.231 "base_bdevs_list": [ 00:33:32.231 { 00:33:32.231 "name": "pt1", 00:33:32.231 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:32.231 "is_configured": true, 00:33:32.231 "data_offset": 256, 00:33:32.231 "data_size": 7936 00:33:32.231 }, 00:33:32.231 { 00:33:32.231 "name": "pt2", 00:33:32.231 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:32.231 "is_configured": true, 00:33:32.231 "data_offset": 256, 00:33:32.231 "data_size": 7936 00:33:32.231 } 00:33:32.231 ] 00:33:32.231 }' 00:33:32.231 11:15:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:32.231 11:15:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:32.798 11:15:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # verify_raid_bdev_properties raid_bdev1 00:33:32.798 11:15:39 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:33:32.798 11:15:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:33:32.798 11:15:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:33:32.798 11:15:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:33:32.798 11:15:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:33:32.798 11:15:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:32.798 11:15:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:33:32.798 [2024-07-25 11:15:39.888880] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:32.798 11:15:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:33:32.798 "name": "raid_bdev1", 00:33:32.798 "aliases": [ 00:33:32.798 "e9068593-da2a-47fd-9e24-c285e420f2ed" 00:33:32.798 ], 00:33:32.798 "product_name": "Raid Volume", 00:33:32.798 "block_size": 4128, 00:33:32.798 "num_blocks": 7936, 00:33:32.798 "uuid": "e9068593-da2a-47fd-9e24-c285e420f2ed", 00:33:32.798 "md_size": 32, 00:33:32.798 "md_interleave": true, 00:33:32.798 "dif_type": 0, 00:33:32.798 "assigned_rate_limits": { 00:33:32.798 "rw_ios_per_sec": 0, 00:33:32.798 "rw_mbytes_per_sec": 0, 00:33:32.798 "r_mbytes_per_sec": 0, 00:33:32.798 "w_mbytes_per_sec": 0 00:33:32.798 }, 00:33:32.798 "claimed": false, 00:33:32.798 "zoned": false, 00:33:32.798 "supported_io_types": { 00:33:32.798 "read": true, 00:33:32.798 "write": true, 00:33:32.798 "unmap": false, 00:33:32.798 "flush": false, 00:33:32.798 "reset": true, 00:33:32.798 "nvme_admin": false, 
00:33:32.798 "nvme_io": false, 00:33:32.798 "nvme_io_md": false, 00:33:32.798 "write_zeroes": true, 00:33:32.798 "zcopy": false, 00:33:32.798 "get_zone_info": false, 00:33:32.798 "zone_management": false, 00:33:32.798 "zone_append": false, 00:33:32.798 "compare": false, 00:33:32.798 "compare_and_write": false, 00:33:32.798 "abort": false, 00:33:32.798 "seek_hole": false, 00:33:32.798 "seek_data": false, 00:33:32.798 "copy": false, 00:33:32.798 "nvme_iov_md": false 00:33:32.798 }, 00:33:32.798 "memory_domains": [ 00:33:32.798 { 00:33:32.798 "dma_device_id": "system", 00:33:32.798 "dma_device_type": 1 00:33:32.798 }, 00:33:32.799 { 00:33:32.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:32.799 "dma_device_type": 2 00:33:32.799 }, 00:33:32.799 { 00:33:32.799 "dma_device_id": "system", 00:33:32.799 "dma_device_type": 1 00:33:32.799 }, 00:33:32.799 { 00:33:32.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:32.799 "dma_device_type": 2 00:33:32.799 } 00:33:32.799 ], 00:33:32.799 "driver_specific": { 00:33:32.799 "raid": { 00:33:32.799 "uuid": "e9068593-da2a-47fd-9e24-c285e420f2ed", 00:33:32.799 "strip_size_kb": 0, 00:33:32.799 "state": "online", 00:33:32.799 "raid_level": "raid1", 00:33:32.799 "superblock": true, 00:33:32.799 "num_base_bdevs": 2, 00:33:32.799 "num_base_bdevs_discovered": 2, 00:33:32.799 "num_base_bdevs_operational": 2, 00:33:32.799 "base_bdevs_list": [ 00:33:32.799 { 00:33:32.799 "name": "pt1", 00:33:32.799 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:32.799 "is_configured": true, 00:33:32.799 "data_offset": 256, 00:33:32.799 "data_size": 7936 00:33:32.799 }, 00:33:32.799 { 00:33:32.799 "name": "pt2", 00:33:32.799 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:32.799 "is_configured": true, 00:33:32.799 "data_offset": 256, 00:33:32.799 "data_size": 7936 00:33:32.799 } 00:33:32.799 ] 00:33:32.799 } 00:33:32.799 } 00:33:32.799 }' 00:33:32.799 11:15:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:33.058 11:15:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:33:33.058 pt2' 00:33:33.058 11:15:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:33.058 11:15:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:33:33.058 11:15:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:33.058 11:15:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:33.058 "name": "pt1", 00:33:33.058 "aliases": [ 00:33:33.058 "00000000-0000-0000-0000-000000000001" 00:33:33.058 ], 00:33:33.058 "product_name": "passthru", 00:33:33.058 "block_size": 4128, 00:33:33.058 "num_blocks": 8192, 00:33:33.058 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:33.058 "md_size": 32, 00:33:33.058 "md_interleave": true, 00:33:33.058 "dif_type": 0, 00:33:33.058 "assigned_rate_limits": { 00:33:33.058 "rw_ios_per_sec": 0, 00:33:33.058 "rw_mbytes_per_sec": 0, 00:33:33.058 "r_mbytes_per_sec": 0, 00:33:33.058 "w_mbytes_per_sec": 0 00:33:33.058 }, 00:33:33.058 "claimed": true, 00:33:33.058 "claim_type": "exclusive_write", 00:33:33.058 "zoned": false, 00:33:33.058 "supported_io_types": { 00:33:33.058 "read": true, 00:33:33.058 "write": true, 00:33:33.058 "unmap": true, 00:33:33.058 "flush": true, 00:33:33.058 "reset": true, 00:33:33.058 "nvme_admin": false, 00:33:33.058 "nvme_io": false, 00:33:33.058 "nvme_io_md": false, 00:33:33.058 "write_zeroes": true, 00:33:33.058 "zcopy": true, 00:33:33.058 "get_zone_info": false, 00:33:33.058 "zone_management": false, 00:33:33.058 "zone_append": false, 00:33:33.058 "compare": false, 00:33:33.058 "compare_and_write": false, 00:33:33.058 
"abort": true, 00:33:33.058 "seek_hole": false, 00:33:33.058 "seek_data": false, 00:33:33.058 "copy": true, 00:33:33.058 "nvme_iov_md": false 00:33:33.058 }, 00:33:33.058 "memory_domains": [ 00:33:33.058 { 00:33:33.058 "dma_device_id": "system", 00:33:33.058 "dma_device_type": 1 00:33:33.058 }, 00:33:33.058 { 00:33:33.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:33.058 "dma_device_type": 2 00:33:33.058 } 00:33:33.058 ], 00:33:33.058 "driver_specific": { 00:33:33.058 "passthru": { 00:33:33.058 "name": "pt1", 00:33:33.058 "base_bdev_name": "malloc1" 00:33:33.058 } 00:33:33.058 } 00:33:33.058 }' 00:33:33.316 11:15:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:33.316 11:15:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:33.316 11:15:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:33:33.316 11:15:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:33.316 11:15:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:33.316 11:15:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:33:33.316 11:15:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:33.316 11:15:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:33.316 11:15:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:33:33.316 11:15:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:33.575 11:15:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:33.575 11:15:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:33:33.575 11:15:40 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:33.575 11:15:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:33:33.575 11:15:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:33.833 11:15:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:33.833 "name": "pt2", 00:33:33.833 "aliases": [ 00:33:33.833 "00000000-0000-0000-0000-000000000002" 00:33:33.833 ], 00:33:33.833 "product_name": "passthru", 00:33:33.833 "block_size": 4128, 00:33:33.833 "num_blocks": 8192, 00:33:33.833 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:33.833 "md_size": 32, 00:33:33.833 "md_interleave": true, 00:33:33.833 "dif_type": 0, 00:33:33.833 "assigned_rate_limits": { 00:33:33.833 "rw_ios_per_sec": 0, 00:33:33.833 "rw_mbytes_per_sec": 0, 00:33:33.833 "r_mbytes_per_sec": 0, 00:33:33.833 "w_mbytes_per_sec": 0 00:33:33.833 }, 00:33:33.833 "claimed": true, 00:33:33.833 "claim_type": "exclusive_write", 00:33:33.833 "zoned": false, 00:33:33.833 "supported_io_types": { 00:33:33.833 "read": true, 00:33:33.833 "write": true, 00:33:33.833 "unmap": true, 00:33:33.833 "flush": true, 00:33:33.833 "reset": true, 00:33:33.833 "nvme_admin": false, 00:33:33.833 "nvme_io": false, 00:33:33.833 "nvme_io_md": false, 00:33:33.833 "write_zeroes": true, 00:33:33.833 "zcopy": true, 00:33:33.833 "get_zone_info": false, 00:33:33.833 "zone_management": false, 00:33:33.833 "zone_append": false, 00:33:33.833 "compare": false, 00:33:33.833 "compare_and_write": false, 00:33:33.833 "abort": true, 00:33:33.833 "seek_hole": false, 00:33:33.833 "seek_data": false, 00:33:33.833 "copy": true, 00:33:33.833 "nvme_iov_md": false 00:33:33.833 }, 00:33:33.833 "memory_domains": [ 00:33:33.833 { 00:33:33.833 "dma_device_id": 
"system", 00:33:33.833 "dma_device_type": 1 00:33:33.833 }, 00:33:33.833 { 00:33:33.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:33.833 "dma_device_type": 2 00:33:33.833 } 00:33:33.833 ], 00:33:33.833 "driver_specific": { 00:33:33.833 "passthru": { 00:33:33.833 "name": "pt2", 00:33:33.833 "base_bdev_name": "malloc2" 00:33:33.833 } 00:33:33.833 } 00:33:33.833 }' 00:33:33.833 11:15:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:33.833 11:15:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:33.833 11:15:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:33:33.833 11:15:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:33.833 11:15:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:33.833 11:15:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:33:33.833 11:15:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:33.833 11:15:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:34.092 11:15:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:33:34.092 11:15:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:34.092 11:15:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:34.092 11:15:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:33:34.092 11:15:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@502 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:34.092 11:15:41 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@502 -- # jq -r '.[] | .uuid' 00:33:34.362 [2024-07-25 11:15:41.276644] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:34.362 11:15:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@502 -- # '[' e9068593-da2a-47fd-9e24-c285e420f2ed '!=' e9068593-da2a-47fd-9e24-c285e420f2ed ']' 00:33:34.362 11:15:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # has_redundancy raid1 00:33:34.362 11:15:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:33:34.362 11:15:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:33:34.362 11:15:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@508 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:33:34.629 [2024-07-25 11:15:41.508932] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:33:34.629 11:15:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:34.629 11:15:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:34.629 11:15:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:34.629 11:15:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:34.629 11:15:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:34.629 11:15:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:34.629 11:15:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:34.629 11:15:41 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:34.629 11:15:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:34.629 11:15:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:34.629 11:15:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:34.629 11:15:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:34.888 11:15:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:34.888 "name": "raid_bdev1", 00:33:34.888 "uuid": "e9068593-da2a-47fd-9e24-c285e420f2ed", 00:33:34.888 "strip_size_kb": 0, 00:33:34.888 "state": "online", 00:33:34.888 "raid_level": "raid1", 00:33:34.888 "superblock": true, 00:33:34.888 "num_base_bdevs": 2, 00:33:34.888 "num_base_bdevs_discovered": 1, 00:33:34.888 "num_base_bdevs_operational": 1, 00:33:34.888 "base_bdevs_list": [ 00:33:34.888 { 00:33:34.888 "name": null, 00:33:34.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:34.888 "is_configured": false, 00:33:34.888 "data_offset": 256, 00:33:34.888 "data_size": 7936 00:33:34.888 }, 00:33:34.888 { 00:33:34.888 "name": "pt2", 00:33:34.888 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:34.888 "is_configured": true, 00:33:34.888 "data_offset": 256, 00:33:34.888 "data_size": 7936 00:33:34.888 } 00:33:34.888 ] 00:33:34.888 }' 00:33:34.888 11:15:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:34.888 11:15:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:35.455 11:15:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@514 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:35.455 [2024-07-25 11:15:42.555735] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:35.455 [2024-07-25 11:15:42.555769] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:35.455 [2024-07-25 11:15:42.555847] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:35.455 [2024-07-25 11:15:42.555906] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:35.455 [2024-07-25 11:15:42.555925] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:33:35.713 11:15:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@515 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:35.713 11:15:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@515 -- # jq -r '.[]' 00:33:35.713 11:15:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@515 -- # raid_bdev= 00:33:35.714 11:15:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@516 -- # '[' -n '' ']' 00:33:35.714 11:15:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:33:35.714 11:15:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@521 -- # (( i < num_base_bdevs )) 00:33:35.714 11:15:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@522 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:33:35.972 11:15:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:33:35.972 11:15:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@521 -- # (( i < 
num_base_bdevs )) 00:33:35.972 11:15:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # (( i = 1 )) 00:33:35.972 11:15:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # (( i < num_base_bdevs - 1 )) 00:33:35.972 11:15:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@534 -- # i=1 00:33:35.972 11:15:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@535 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:36.231 [2024-07-25 11:15:43.249589] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:36.231 [2024-07-25 11:15:43.249678] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:36.231 [2024-07-25 11:15:43.249702] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041780 00:33:36.231 [2024-07-25 11:15:43.249721] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:36.231 [2024-07-25 11:15:43.252194] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:36.231 [2024-07-25 11:15:43.252230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:36.231 [2024-07-25 11:15:43.252287] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:36.231 [2024-07-25 11:15:43.252355] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:36.231 [2024-07-25 11:15:43.252460] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:33:36.231 [2024-07-25 11:15:43.252478] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:36.231 [2024-07-25 11:15:43.252557] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:33:36.231 [2024-07-25 
11:15:43.252683] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:33:36.231 [2024-07-25 11:15:43.252696] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:33:36.231 [2024-07-25 11:15:43.252790] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:36.231 pt2 00:33:36.231 11:15:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@538 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:36.231 11:15:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:36.231 11:15:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:36.231 11:15:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:36.231 11:15:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:36.231 11:15:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:36.231 11:15:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:36.231 11:15:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:36.231 11:15:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:36.231 11:15:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:36.231 11:15:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:36.231 11:15:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:33:36.489 11:15:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:36.489 "name": "raid_bdev1", 00:33:36.489 "uuid": "e9068593-da2a-47fd-9e24-c285e420f2ed", 00:33:36.489 "strip_size_kb": 0, 00:33:36.489 "state": "online", 00:33:36.489 "raid_level": "raid1", 00:33:36.489 "superblock": true, 00:33:36.489 "num_base_bdevs": 2, 00:33:36.489 "num_base_bdevs_discovered": 1, 00:33:36.489 "num_base_bdevs_operational": 1, 00:33:36.489 "base_bdevs_list": [ 00:33:36.489 { 00:33:36.489 "name": null, 00:33:36.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:36.489 "is_configured": false, 00:33:36.489 "data_offset": 256, 00:33:36.489 "data_size": 7936 00:33:36.489 }, 00:33:36.489 { 00:33:36.489 "name": "pt2", 00:33:36.489 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:36.489 "is_configured": true, 00:33:36.489 "data_offset": 256, 00:33:36.489 "data_size": 7936 00:33:36.489 } 00:33:36.489 ] 00:33:36.489 }' 00:33:36.489 11:15:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:36.489 11:15:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:37.055 11:15:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@541 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:37.314 [2024-07-25 11:15:44.264323] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:37.314 [2024-07-25 11:15:44.264359] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:37.314 [2024-07-25 11:15:44.264437] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:37.314 [2024-07-25 11:15:44.264500] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:37.314 [2024-07-25 11:15:44.264516] bdev_raid.c: 
378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:33:37.314 11:15:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:37.314 11:15:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # jq -r '.[]' 00:33:37.572 11:15:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # raid_bdev= 00:33:37.572 11:15:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@543 -- # '[' -n '' ']' 00:33:37.572 11:15:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@547 -- # '[' 2 -gt 2 ']' 00:33:37.572 11:15:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:37.831 [2024-07-25 11:15:44.721499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:37.831 [2024-07-25 11:15:44.721557] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:37.831 [2024-07-25 11:15:44.721582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041d80 00:33:37.831 [2024-07-25 11:15:44.721598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:37.831 [2024-07-25 11:15:44.724043] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:37.831 [2024-07-25 11:15:44.724074] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:37.831 [2024-07-25 11:15:44.724133] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:33:37.831 [2024-07-25 11:15:44.724232] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:33:37.831 [2024-07-25 11:15:44.724380] bdev_raid.c:3665:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:33:37.831 [2024-07-25 11:15:44.724397] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:37.831 [2024-07-25 11:15:44.724424] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:33:37.831 [2024-07-25 11:15:44.724515] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:37.831 [2024-07-25 11:15:44.724598] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:33:37.831 [2024-07-25 11:15:44.724612] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:37.831 [2024-07-25 11:15:44.724683] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:33:37.831 [2024-07-25 11:15:44.724797] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:33:37.831 [2024-07-25 11:15:44.724813] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:33:37.831 [2024-07-25 11:15:44.724905] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:37.831 pt1 00:33:37.831 11:15:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # '[' 2 -gt 2 ']' 00:33:37.831 11:15:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@569 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:37.831 11:15:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:37.831 11:15:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:37.831 11:15:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:33:37.831 11:15:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:37.831 11:15:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:37.831 11:15:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:37.831 11:15:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:37.831 11:15:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:37.831 11:15:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:37.831 11:15:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:37.831 11:15:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:38.090 11:15:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:38.090 "name": "raid_bdev1", 00:33:38.090 "uuid": "e9068593-da2a-47fd-9e24-c285e420f2ed", 00:33:38.090 "strip_size_kb": 0, 00:33:38.090 "state": "online", 00:33:38.090 "raid_level": "raid1", 00:33:38.090 "superblock": true, 00:33:38.090 "num_base_bdevs": 2, 00:33:38.090 "num_base_bdevs_discovered": 1, 00:33:38.090 "num_base_bdevs_operational": 1, 00:33:38.090 "base_bdevs_list": [ 00:33:38.090 { 00:33:38.090 "name": null, 00:33:38.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:38.090 "is_configured": false, 00:33:38.090 "data_offset": 256, 00:33:38.090 "data_size": 7936 00:33:38.090 }, 00:33:38.090 { 00:33:38.090 "name": "pt2", 00:33:38.090 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:38.090 "is_configured": true, 00:33:38.090 "data_offset": 256, 
00:33:38.090 "data_size": 7936 00:33:38.090 } 00:33:38.090 ] 00:33:38.090 }' 00:33:38.090 11:15:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:38.090 11:15:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:38.657 11:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@570 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:33:38.657 11:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:33:38.657 11:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@570 -- # [[ false == \f\a\l\s\e ]] 00:33:38.657 11:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@573 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:38.657 11:15:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@573 -- # jq -r '.[] | .uuid' 00:33:38.915 [2024-07-25 11:15:45.981262] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:38.915 11:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@573 -- # '[' e9068593-da2a-47fd-9e24-c285e420f2ed '!=' e9068593-da2a-47fd-9e24-c285e420f2ed ']' 00:33:38.915 11:15:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@578 -- # killprocess 3761872 00:33:38.915 11:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 3761872 ']' 00:33:38.915 11:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 3761872 00:33:38.915 11:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:33:38.915 11:15:46 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:38.915 11:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3761872 00:33:39.174 11:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:39.174 11:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:39.174 11:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3761872' 00:33:39.174 killing process with pid 3761872 00:33:39.174 11:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 3761872 00:33:39.174 [2024-07-25 11:15:46.069509] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:39.174 [2024-07-25 11:15:46.069604] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:39.174 [2024-07-25 11:15:46.069662] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:39.174 11:15:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 3761872 00:33:39.174 [2024-07-25 11:15:46.069681] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:33:39.174 [2024-07-25 11:15:46.265348] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:41.077 11:15:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@580 -- # return 0 00:33:41.077 00:33:41.077 real 0m16.779s 00:33:41.077 user 0m28.508s 00:33:41.077 sys 0m2.957s 00:33:41.077 11:15:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:41.077 11:15:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:41.077 
************************************ 00:33:41.077 END TEST raid_superblock_test_md_interleaved 00:33:41.077 ************************************ 00:33:41.077 11:15:48 bdev_raid -- bdev/bdev_raid.sh@994 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:33:41.077 11:15:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:33:41.077 11:15:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:41.077 11:15:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:41.077 ************************************ 00:33:41.077 START TEST raid_rebuild_test_sb_md_interleaved 00:33:41.077 ************************************ 00:33:41.077 11:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:33:41.077 11:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@584 -- # local raid_level=raid1 00:33:41.077 11:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@585 -- # local num_base_bdevs=2 00:33:41.077 11:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@586 -- # local superblock=true 00:33:41.077 11:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@587 -- # local background_io=false 00:33:41.077 11:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@588 -- # local verify=false 00:33:41.077 11:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i = 1 )) 00:33:41.077 11:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:33:41.077 11:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # echo BaseBdev1 00:33:41.077 11:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:33:41.077 11:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:33:41.077 11:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # echo BaseBdev2 00:33:41.077 11:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i++ )) 00:33:41.077 11:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # (( i <= num_base_bdevs )) 00:33:41.077 11:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:33:41.077 11:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # local base_bdevs 00:33:41.077 11:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@590 -- # local raid_bdev_name=raid_bdev1 00:33:41.077 11:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # local strip_size 00:33:41.077 11:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # local create_arg 00:33:41.077 11:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # local raid_bdev_size 00:33:41.077 11:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@594 -- # local data_offset 00:33:41.077 11:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # '[' raid1 '!=' raid1 ']' 00:33:41.077 11:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@604 -- # strip_size=0 00:33:41.077 11:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # '[' true = true ']' 00:33:41.077 11:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # create_arg+=' -s' 00:33:41.077 11:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # raid_pid=3764921 00:33:41.077 11:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # waitforlisten 3764921 /var/tmp/spdk-raid.sock 00:33:41.077 11:15:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@611 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:33:41.077 11:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 3764921 ']' 00:33:41.077 11:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:33:41.077 11:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:41.077 11:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:33:41.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:33:41.077 11:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:41.077 11:15:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:41.077 [2024-07-25 11:15:48.193181] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:33:41.077 [2024-07-25 11:15:48.193307] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3764921 ] 00:33:41.077 I/O size of 3145728 is greater than zero copy threshold (65536). 00:33:41.077 Zero copy mechanism will not be used. 
00:33:41.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:33:41.336 EAL: Requested device 0000:3d:01.0 cannot be used [identical messages repeated for devices 0000:3d:01.1 through 0000:3f:02.7] [2024-07-25 11:15:48.416166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:41.595 [2024-07-25 11:15:48.694416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:42.161 [2024-07-25 11:15:49.029330] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:42.161 [2024-07-25 11:15:49.029367] bdev_raid.c:1443:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:42.161 11:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:42.161 11:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:33:42.161 11:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:33:42.161 11:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:33:42.419 BaseBdev1_malloc 00:33:42.419 11:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:42.677 [2024-07-25 11:15:49.695247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 
00:33:42.677 [2024-07-25 11:15:49.695311] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:42.677 [2024-07-25 11:15:49.695339] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680 00:33:42.677 [2024-07-25 11:15:49.695357] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:42.677 [2024-07-25 11:15:49.697803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:42.677 [2024-07-25 11:15:49.697840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:42.677 BaseBdev1 00:33:42.677 11:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # for bdev in "${base_bdevs[@]}" 00:33:42.677 11:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@617 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:33:42.935 BaseBdev2_malloc 00:33:42.935 11:15:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:33:43.193 [2024-07-25 11:15:50.201643] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:33:43.193 [2024-07-25 11:15:50.201717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:43.193 [2024-07-25 11:15:50.201745] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000040280 00:33:43.193 [2024-07-25 11:15:50.201766] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:43.194 [2024-07-25 11:15:50.204215] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:43.194 [2024-07-25 11:15:50.204251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: BaseBdev2 00:33:43.194 BaseBdev2 00:33:43.194 11:15:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@622 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:33:43.452 spare_malloc 00:33:43.452 11:15:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@623 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:33:43.710 spare_delay 00:33:43.710 11:15:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:33:43.968 [2024-07-25 11:15:50.932136] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:43.968 [2024-07-25 11:15:50.932204] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:43.968 [2024-07-25 11:15:50.932233] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000041480 00:33:43.968 [2024-07-25 11:15:50.932251] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:43.968 [2024-07-25 11:15:50.934720] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:43.968 [2024-07-25 11:15:50.934756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:43.968 spare 00:33:43.968 11:15:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@627 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:33:44.227 [2024-07-25 11:15:51.148762] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:44.227 [2024-07-25 
11:15:51.151108] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:44.227 [2024-07-25 11:15:51.151328] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:33:44.227 [2024-07-25 11:15:51.151351] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:44.227 [2024-07-25 11:15:51.151459] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010640 00:33:44.227 [2024-07-25 11:15:51.151597] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:33:44.227 [2024-07-25 11:15:51.151612] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:33:44.227 [2024-07-25 11:15:51.151734] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:44.227 11:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@628 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:44.227 11:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:44.227 11:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:44.227 11:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:44.227 11:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:44.227 11:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:44.227 11:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:44.227 11:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:44.227 11:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:33:44.227 11:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:44.227 11:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:44.227 11:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:44.486 11:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:44.486 "name": "raid_bdev1", 00:33:44.486 "uuid": "36bfc17e-b20a-4224-bd83-f1559c36844c", 00:33:44.486 "strip_size_kb": 0, 00:33:44.486 "state": "online", 00:33:44.486 "raid_level": "raid1", 00:33:44.486 "superblock": true, 00:33:44.486 "num_base_bdevs": 2, 00:33:44.486 "num_base_bdevs_discovered": 2, 00:33:44.486 "num_base_bdevs_operational": 2, 00:33:44.486 "base_bdevs_list": [ 00:33:44.486 { 00:33:44.486 "name": "BaseBdev1", 00:33:44.486 "uuid": "f24dcff4-ec11-524b-9e2f-3873568bd311", 00:33:44.486 "is_configured": true, 00:33:44.486 "data_offset": 256, 00:33:44.486 "data_size": 7936 00:33:44.486 }, 00:33:44.486 { 00:33:44.486 "name": "BaseBdev2", 00:33:44.486 "uuid": "ace9f258-721c-5ad0-ad93-8975b3a88313", 00:33:44.486 "is_configured": true, 00:33:44.486 "data_offset": 256, 00:33:44.486 "data_size": 7936 00:33:44.486 } 00:33:44.486 ] 00:33:44.486 }' 00:33:44.486 11:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:44.486 11:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:45.052 11:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@631 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:45.052 11:15:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@631 -- # jq -r '.[].num_blocks' 00:33:45.310 [2024-07-25 11:15:52.179860] bdev_raid.c:1120:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:45.310 11:15:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@631 -- # raid_bdev_size=7936 00:33:45.310 11:15:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@634 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:45.310 11:15:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@634 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:33:45.569 11:15:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@634 -- # data_offset=256 00:33:45.569 11:15:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@636 -- # '[' false = true ']' 00:33:45.569 11:15:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@639 -- # '[' false = true ']' 00:33:45.569 11:15:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:33:45.569 [2024-07-25 11:15:52.640770] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:45.569 11:15:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:45.569 11:15:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:45.569 11:15:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:45.569 11:15:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:45.569 11:15:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:45.569 11:15:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:45.569 11:15:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:45.569 11:15:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:45.569 11:15:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:45.569 11:15:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:45.569 11:15:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:45.569 11:15:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:45.827 11:15:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:45.828 "name": "raid_bdev1", 00:33:45.828 "uuid": "36bfc17e-b20a-4224-bd83-f1559c36844c", 00:33:45.828 "strip_size_kb": 0, 00:33:45.828 "state": "online", 00:33:45.828 "raid_level": "raid1", 00:33:45.828 "superblock": true, 00:33:45.828 "num_base_bdevs": 2, 00:33:45.828 "num_base_bdevs_discovered": 1, 00:33:45.828 "num_base_bdevs_operational": 1, 00:33:45.828 "base_bdevs_list": [ 00:33:45.828 { 00:33:45.828 "name": null, 00:33:45.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:45.828 "is_configured": false, 00:33:45.828 "data_offset": 256, 00:33:45.828 "data_size": 7936 00:33:45.828 }, 00:33:45.828 { 00:33:45.828 "name": "BaseBdev2", 00:33:45.828 "uuid": "ace9f258-721c-5ad0-ad93-8975b3a88313", 00:33:45.828 "is_configured": true, 00:33:45.828 "data_offset": 256, 00:33:45.828 "data_size": 7936 00:33:45.828 } 00:33:45.828 ] 00:33:45.828 }' 00:33:45.828 11:15:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:45.828 11:15:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:46.402 11:15:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:46.662 [2024-07-25 11:15:53.671541] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:46.662 [2024-07-25 11:15:53.698496] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010710 00:33:46.662 [2024-07-25 11:15:53.700802] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:46.662 11:15:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # sleep 1 00:33:48.036 11:15:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:48.036 11:15:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:48.036 11:15:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:48.036 11:15:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:48.036 11:15:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:48.036 11:15:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:48.036 11:15:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:48.036 11:15:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:48.036 
"name": "raid_bdev1", 00:33:48.036 "uuid": "36bfc17e-b20a-4224-bd83-f1559c36844c", 00:33:48.036 "strip_size_kb": 0, 00:33:48.036 "state": "online", 00:33:48.036 "raid_level": "raid1", 00:33:48.036 "superblock": true, 00:33:48.036 "num_base_bdevs": 2, 00:33:48.036 "num_base_bdevs_discovered": 2, 00:33:48.036 "num_base_bdevs_operational": 2, 00:33:48.036 "process": { 00:33:48.036 "type": "rebuild", 00:33:48.036 "target": "spare", 00:33:48.036 "progress": { 00:33:48.036 "blocks": 3072, 00:33:48.036 "percent": 38 00:33:48.036 } 00:33:48.036 }, 00:33:48.036 "base_bdevs_list": [ 00:33:48.036 { 00:33:48.036 "name": "spare", 00:33:48.036 "uuid": "c5dcb24b-45fb-5042-b13e-569805ddddd8", 00:33:48.036 "is_configured": true, 00:33:48.036 "data_offset": 256, 00:33:48.036 "data_size": 7936 00:33:48.036 }, 00:33:48.036 { 00:33:48.036 "name": "BaseBdev2", 00:33:48.036 "uuid": "ace9f258-721c-5ad0-ad93-8975b3a88313", 00:33:48.036 "is_configured": true, 00:33:48.036 "data_offset": 256, 00:33:48.036 "data_size": 7936 00:33:48.036 } 00:33:48.036 ] 00:33:48.036 }' 00:33:48.036 11:15:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:48.037 11:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:48.037 11:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:48.037 11:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:48.037 11:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@668 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:33:48.328 [2024-07-25 11:15:55.257746] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:48.328 [2024-07-25 11:15:55.313882] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: 
Finished rebuild on raid bdev raid_bdev1: No such device 00:33:48.328 [2024-07-25 11:15:55.313941] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:48.328 [2024-07-25 11:15:55.313961] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:48.328 [2024-07-25 11:15:55.313976] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:48.328 11:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@671 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:48.328 11:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:48.328 11:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:48.328 11:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:48.328 11:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:48.328 11:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:48.328 11:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:48.328 11:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:48.328 11:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:48.328 11:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:48.328 11:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:48.328 11:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:33:48.593 11:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:48.593 "name": "raid_bdev1", 00:33:48.593 "uuid": "36bfc17e-b20a-4224-bd83-f1559c36844c", 00:33:48.593 "strip_size_kb": 0, 00:33:48.593 "state": "online", 00:33:48.593 "raid_level": "raid1", 00:33:48.593 "superblock": true, 00:33:48.593 "num_base_bdevs": 2, 00:33:48.593 "num_base_bdevs_discovered": 1, 00:33:48.593 "num_base_bdevs_operational": 1, 00:33:48.593 "base_bdevs_list": [ 00:33:48.593 { 00:33:48.593 "name": null, 00:33:48.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:48.593 "is_configured": false, 00:33:48.593 "data_offset": 256, 00:33:48.593 "data_size": 7936 00:33:48.593 }, 00:33:48.593 { 00:33:48.593 "name": "BaseBdev2", 00:33:48.593 "uuid": "ace9f258-721c-5ad0-ad93-8975b3a88313", 00:33:48.593 "is_configured": true, 00:33:48.593 "data_offset": 256, 00:33:48.593 "data_size": 7936 00:33:48.593 } 00:33:48.593 ] 00:33:48.593 }' 00:33:48.593 11:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:48.593 11:15:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:49.158 11:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@674 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:49.158 11:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:49.158 11:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:49.158 11:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:49.158 11:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:49.158 11:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:49.158 11:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:49.417 11:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:49.417 "name": "raid_bdev1", 00:33:49.417 "uuid": "36bfc17e-b20a-4224-bd83-f1559c36844c", 00:33:49.417 "strip_size_kb": 0, 00:33:49.417 "state": "online", 00:33:49.417 "raid_level": "raid1", 00:33:49.417 "superblock": true, 00:33:49.417 "num_base_bdevs": 2, 00:33:49.417 "num_base_bdevs_discovered": 1, 00:33:49.417 "num_base_bdevs_operational": 1, 00:33:49.417 "base_bdevs_list": [ 00:33:49.417 { 00:33:49.417 "name": null, 00:33:49.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:49.417 "is_configured": false, 00:33:49.417 "data_offset": 256, 00:33:49.417 "data_size": 7936 00:33:49.417 }, 00:33:49.417 { 00:33:49.417 "name": "BaseBdev2", 00:33:49.417 "uuid": "ace9f258-721c-5ad0-ad93-8975b3a88313", 00:33:49.417 "is_configured": true, 00:33:49.417 "data_offset": 256, 00:33:49.417 "data_size": 7936 00:33:49.417 } 00:33:49.417 ] 00:33:49.417 }' 00:33:49.417 11:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:49.417 11:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:49.417 11:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:49.417 11:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:49.417 11:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@677 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:49.674 [2024-07-25 
11:15:56.687927] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:49.674 [2024-07-25 11:15:56.713610] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000107e0 00:33:49.674 [2024-07-25 11:15:56.715915] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:49.674 11:15:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@678 -- # sleep 1 00:33:51.048 11:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@679 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:51.048 11:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:51.048 11:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:51.048 11:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:51.048 11:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:51.048 11:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:51.048 11:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:51.048 11:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:51.048 "name": "raid_bdev1", 00:33:51.048 "uuid": "36bfc17e-b20a-4224-bd83-f1559c36844c", 00:33:51.048 "strip_size_kb": 0, 00:33:51.048 "state": "online", 00:33:51.048 "raid_level": "raid1", 00:33:51.048 "superblock": true, 00:33:51.048 "num_base_bdevs": 2, 00:33:51.048 "num_base_bdevs_discovered": 2, 00:33:51.048 "num_base_bdevs_operational": 2, 00:33:51.048 "process": { 00:33:51.048 "type": "rebuild", 
00:33:51.048 "target": "spare", 00:33:51.048 "progress": { 00:33:51.048 "blocks": 3072, 00:33:51.048 "percent": 38 00:33:51.048 } 00:33:51.048 }, 00:33:51.048 "base_bdevs_list": [ 00:33:51.048 { 00:33:51.048 "name": "spare", 00:33:51.048 "uuid": "c5dcb24b-45fb-5042-b13e-569805ddddd8", 00:33:51.048 "is_configured": true, 00:33:51.048 "data_offset": 256, 00:33:51.048 "data_size": 7936 00:33:51.048 }, 00:33:51.048 { 00:33:51.048 "name": "BaseBdev2", 00:33:51.048 "uuid": "ace9f258-721c-5ad0-ad93-8975b3a88313", 00:33:51.048 "is_configured": true, 00:33:51.048 "data_offset": 256, 00:33:51.048 "data_size": 7936 00:33:51.048 } 00:33:51.048 ] 00:33:51.048 }' 00:33:51.048 11:15:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:51.048 11:15:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:51.048 11:15:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:51.048 11:15:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:51.048 11:15:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@681 -- # '[' true = true ']' 00:33:51.048 11:15:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@681 -- # '[' = false ']' 00:33:51.048 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 681: [: =: unary operator expected 00:33:51.048 11:15:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local num_base_bdevs_operational=2 00:33:51.048 11:15:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # '[' raid1 = raid1 ']' 00:33:51.048 11:15:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # '[' 2 -gt 2 ']' 00:33:51.048 11:15:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # 
local timeout=1242 00:33:51.048 11:15:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:33:51.048 11:15:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:51.048 11:15:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:51.048 11:15:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:51.048 11:15:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:51.048 11:15:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:51.048 11:15:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:51.048 11:15:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:51.307 11:15:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:51.307 "name": "raid_bdev1", 00:33:51.307 "uuid": "36bfc17e-b20a-4224-bd83-f1559c36844c", 00:33:51.307 "strip_size_kb": 0, 00:33:51.307 "state": "online", 00:33:51.307 "raid_level": "raid1", 00:33:51.307 "superblock": true, 00:33:51.307 "num_base_bdevs": 2, 00:33:51.307 "num_base_bdevs_discovered": 2, 00:33:51.307 "num_base_bdevs_operational": 2, 00:33:51.307 "process": { 00:33:51.307 "type": "rebuild", 00:33:51.307 "target": "spare", 00:33:51.307 "progress": { 00:33:51.307 "blocks": 3840, 00:33:51.307 "percent": 48 00:33:51.307 } 00:33:51.307 }, 00:33:51.307 "base_bdevs_list": [ 00:33:51.307 { 00:33:51.307 "name": "spare", 00:33:51.307 "uuid": "c5dcb24b-45fb-5042-b13e-569805ddddd8", 00:33:51.307 "is_configured": true, 00:33:51.307 
"data_offset": 256, 00:33:51.307 "data_size": 7936 00:33:51.307 }, 00:33:51.307 { 00:33:51.307 "name": "BaseBdev2", 00:33:51.307 "uuid": "ace9f258-721c-5ad0-ad93-8975b3a88313", 00:33:51.307 "is_configured": true, 00:33:51.307 "data_offset": 256, 00:33:51.307 "data_size": 7936 00:33:51.307 } 00:33:51.307 ] 00:33:51.307 }' 00:33:51.307 11:15:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:51.307 11:15:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:51.307 11:15:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:51.307 11:15:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:51.307 11:15:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@726 -- # sleep 1 00:33:52.679 11:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:33:52.679 11:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:52.679 11:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:52.679 11:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:52.679 11:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:52.679 11:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:52.679 11:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:52.679 11:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:52.679 11:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:52.679 "name": "raid_bdev1", 00:33:52.679 "uuid": "36bfc17e-b20a-4224-bd83-f1559c36844c", 00:33:52.679 "strip_size_kb": 0, 00:33:52.679 "state": "online", 00:33:52.679 "raid_level": "raid1", 00:33:52.679 "superblock": true, 00:33:52.679 "num_base_bdevs": 2, 00:33:52.679 "num_base_bdevs_discovered": 2, 00:33:52.679 "num_base_bdevs_operational": 2, 00:33:52.679 "process": { 00:33:52.679 "type": "rebuild", 00:33:52.679 "target": "spare", 00:33:52.679 "progress": { 00:33:52.679 "blocks": 7168, 00:33:52.679 "percent": 90 00:33:52.679 } 00:33:52.679 }, 00:33:52.679 "base_bdevs_list": [ 00:33:52.679 { 00:33:52.679 "name": "spare", 00:33:52.679 "uuid": "c5dcb24b-45fb-5042-b13e-569805ddddd8", 00:33:52.679 "is_configured": true, 00:33:52.679 "data_offset": 256, 00:33:52.679 "data_size": 7936 00:33:52.679 }, 00:33:52.679 { 00:33:52.679 "name": "BaseBdev2", 00:33:52.679 "uuid": "ace9f258-721c-5ad0-ad93-8975b3a88313", 00:33:52.679 "is_configured": true, 00:33:52.679 "data_offset": 256, 00:33:52.679 "data_size": 7936 00:33:52.679 } 00:33:52.679 ] 00:33:52.679 }' 00:33:52.679 11:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:52.679 11:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:52.679 11:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:52.679 11:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:52.679 11:15:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@726 -- # sleep 1 00:33:52.936 [2024-07-25 11:15:59.840869] bdev_raid.c:2886:raid_bdev_process_thread_run: *DEBUG*: process completed on 
raid_bdev1 00:33:52.936 [2024-07-25 11:15:59.840942] bdev_raid.c:2548:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:33:52.936 [2024-07-25 11:15:59.841052] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:53.869 11:16:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # (( SECONDS < timeout )) 00:33:53.869 11:16:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@723 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:53.869 11:16:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:53.869 11:16:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:53.869 11:16:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:53.869 11:16:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:53.869 11:16:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:53.869 11:16:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:53.869 11:16:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:53.869 "name": "raid_bdev1", 00:33:53.869 "uuid": "36bfc17e-b20a-4224-bd83-f1559c36844c", 00:33:53.869 "strip_size_kb": 0, 00:33:53.869 "state": "online", 00:33:53.869 "raid_level": "raid1", 00:33:53.869 "superblock": true, 00:33:53.869 "num_base_bdevs": 2, 00:33:53.869 "num_base_bdevs_discovered": 2, 00:33:53.869 "num_base_bdevs_operational": 2, 00:33:53.869 "base_bdevs_list": [ 00:33:53.869 { 00:33:53.869 "name": "spare", 00:33:53.869 "uuid": "c5dcb24b-45fb-5042-b13e-569805ddddd8", 
00:33:53.869 "is_configured": true, 00:33:53.869 "data_offset": 256, 00:33:53.869 "data_size": 7936 00:33:53.869 }, 00:33:53.869 { 00:33:53.869 "name": "BaseBdev2", 00:33:53.869 "uuid": "ace9f258-721c-5ad0-ad93-8975b3a88313", 00:33:53.869 "is_configured": true, 00:33:53.869 "data_offset": 256, 00:33:53.869 "data_size": 7936 00:33:53.869 } 00:33:53.869 ] 00:33:53.869 }' 00:33:53.869 11:16:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:53.869 11:16:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:33:53.869 11:16:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:54.126 11:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:33:54.126 11:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@724 -- # break 00:33:54.126 11:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@730 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:54.126 11:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:54.126 11:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:54.127 11:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:54.127 11:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:54.127 11:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:54.127 11:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:54.385 11:16:01 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:54.385 "name": "raid_bdev1", 00:33:54.385 "uuid": "36bfc17e-b20a-4224-bd83-f1559c36844c", 00:33:54.385 "strip_size_kb": 0, 00:33:54.385 "state": "online", 00:33:54.385 "raid_level": "raid1", 00:33:54.385 "superblock": true, 00:33:54.385 "num_base_bdevs": 2, 00:33:54.385 "num_base_bdevs_discovered": 2, 00:33:54.385 "num_base_bdevs_operational": 2, 00:33:54.385 "base_bdevs_list": [ 00:33:54.385 { 00:33:54.385 "name": "spare", 00:33:54.385 "uuid": "c5dcb24b-45fb-5042-b13e-569805ddddd8", 00:33:54.385 "is_configured": true, 00:33:54.385 "data_offset": 256, 00:33:54.385 "data_size": 7936 00:33:54.385 }, 00:33:54.385 { 00:33:54.385 "name": "BaseBdev2", 00:33:54.385 "uuid": "ace9f258-721c-5ad0-ad93-8975b3a88313", 00:33:54.385 "is_configured": true, 00:33:54.385 "data_offset": 256, 00:33:54.385 "data_size": 7936 00:33:54.385 } 00:33:54.385 ] 00:33:54.385 }' 00:33:54.385 11:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:54.385 11:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:54.385 11:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:54.385 11:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:54.385 11:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@731 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:54.385 11:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:54.385 11:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:54.385 11:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:33:54.385 11:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:54.385 11:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:54.385 11:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:54.385 11:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:54.385 11:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:54.385 11:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:54.385 11:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:54.385 11:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:54.642 11:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:54.642 "name": "raid_bdev1", 00:33:54.642 "uuid": "36bfc17e-b20a-4224-bd83-f1559c36844c", 00:33:54.642 "strip_size_kb": 0, 00:33:54.642 "state": "online", 00:33:54.642 "raid_level": "raid1", 00:33:54.642 "superblock": true, 00:33:54.642 "num_base_bdevs": 2, 00:33:54.642 "num_base_bdevs_discovered": 2, 00:33:54.642 "num_base_bdevs_operational": 2, 00:33:54.642 "base_bdevs_list": [ 00:33:54.642 { 00:33:54.642 "name": "spare", 00:33:54.642 "uuid": "c5dcb24b-45fb-5042-b13e-569805ddddd8", 00:33:54.642 "is_configured": true, 00:33:54.642 "data_offset": 256, 00:33:54.642 "data_size": 7936 00:33:54.642 }, 00:33:54.642 { 00:33:54.642 "name": "BaseBdev2", 00:33:54.642 "uuid": "ace9f258-721c-5ad0-ad93-8975b3a88313", 00:33:54.642 "is_configured": true, 00:33:54.642 "data_offset": 256, 00:33:54.642 
"data_size": 7936 00:33:54.642 } 00:33:54.642 ] 00:33:54.642 }' 00:33:54.642 11:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:54.642 11:16:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:55.206 11:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@734 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:55.463 [2024-07-25 11:16:02.360235] bdev_raid.c:2398:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:55.463 [2024-07-25 11:16:02.360272] bdev_raid.c:1886:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:55.463 [2024-07-25 11:16:02.360352] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:55.463 [2024-07-25 11:16:02.360434] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:55.463 [2024-07-25 11:16:02.360451] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:33:55.463 11:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@735 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:55.463 11:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@735 -- # jq length 00:33:55.720 11:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@735 -- # [[ 0 == 0 ]] 00:33:55.720 11:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@737 -- # '[' false = true ']' 00:33:55.720 11:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # '[' true = true ']' 00:33:55.720 11:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:33:55.720 11:16:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:33:55.976 [2024-07-25 11:16:03.050035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:55.976 [2024-07-25 11:16:03.050103] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:55.976 [2024-07-25 11:16:03.050131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042380 00:33:55.976 [2024-07-25 11:16:03.050154] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:55.976 [2024-07-25 11:16:03.052673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:55.976 [2024-07-25 11:16:03.052707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:55.976 [2024-07-25 11:16:03.052784] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:33:55.976 [2024-07-25 11:16:03.052843] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:55.976 [2024-07-25 11:16:03.052980] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:55.976 spare 00:33:55.976 11:16:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:55.976 11:16:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:55.976 11:16:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:55.976 11:16:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:33:55.976 11:16:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:55.976 11:16:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:55.976 11:16:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:55.976 11:16:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:55.977 11:16:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:55.977 11:16:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:55.977 11:16:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:55.977 11:16:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:56.233 [2024-07-25 11:16:03.153323] bdev_raid.c:1721:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:33:56.233 [2024-07-25 11:16:03.153376] bdev_raid.c:1722:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:56.234 [2024-07-25 11:16:03.153500] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000108b0 00:33:56.234 [2024-07-25 11:16:03.153682] bdev_raid.c:1751:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:33:56.234 [2024-07-25 11:16:03.153697] bdev_raid.c:1752:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:33:56.234 [2024-07-25 11:16:03.153819] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:56.234 11:16:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:56.234 "name": 
"raid_bdev1", 00:33:56.234 "uuid": "36bfc17e-b20a-4224-bd83-f1559c36844c", 00:33:56.234 "strip_size_kb": 0, 00:33:56.234 "state": "online", 00:33:56.234 "raid_level": "raid1", 00:33:56.234 "superblock": true, 00:33:56.234 "num_base_bdevs": 2, 00:33:56.234 "num_base_bdevs_discovered": 2, 00:33:56.234 "num_base_bdevs_operational": 2, 00:33:56.234 "base_bdevs_list": [ 00:33:56.234 { 00:33:56.234 "name": "spare", 00:33:56.234 "uuid": "c5dcb24b-45fb-5042-b13e-569805ddddd8", 00:33:56.234 "is_configured": true, 00:33:56.234 "data_offset": 256, 00:33:56.234 "data_size": 7936 00:33:56.234 }, 00:33:56.234 { 00:33:56.234 "name": "BaseBdev2", 00:33:56.234 "uuid": "ace9f258-721c-5ad0-ad93-8975b3a88313", 00:33:56.234 "is_configured": true, 00:33:56.234 "data_offset": 256, 00:33:56.234 "data_size": 7936 00:33:56.234 } 00:33:56.234 ] 00:33:56.234 }' 00:33:56.234 11:16:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:56.234 11:16:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:56.796 11:16:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:56.796 11:16:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:56.796 11:16:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:56.796 11:16:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:56.796 11:16:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:56.796 11:16:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:56.796 11:16:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:57.053 11:16:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:57.053 "name": "raid_bdev1", 00:33:57.053 "uuid": "36bfc17e-b20a-4224-bd83-f1559c36844c", 00:33:57.053 "strip_size_kb": 0, 00:33:57.053 "state": "online", 00:33:57.053 "raid_level": "raid1", 00:33:57.053 "superblock": true, 00:33:57.053 "num_base_bdevs": 2, 00:33:57.053 "num_base_bdevs_discovered": 2, 00:33:57.053 "num_base_bdevs_operational": 2, 00:33:57.053 "base_bdevs_list": [ 00:33:57.053 { 00:33:57.053 "name": "spare", 00:33:57.053 "uuid": "c5dcb24b-45fb-5042-b13e-569805ddddd8", 00:33:57.053 "is_configured": true, 00:33:57.053 "data_offset": 256, 00:33:57.053 "data_size": 7936 00:33:57.053 }, 00:33:57.053 { 00:33:57.053 "name": "BaseBdev2", 00:33:57.053 "uuid": "ace9f258-721c-5ad0-ad93-8975b3a88313", 00:33:57.053 "is_configured": true, 00:33:57.053 "data_offset": 256, 00:33:57.053 "data_size": 7936 00:33:57.053 } 00:33:57.053 ] 00:33:57.053 }' 00:33:57.053 11:16:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:57.053 11:16:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:57.053 11:16:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:57.310 11:16:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:57.310 11:16:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:57.310 11:16:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # jq -r '.[].base_bdevs_list[0].name' 00:33:57.310 11:16:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- 
# [[ spare == \s\p\a\r\e ]] 00:33:57.310 11:16:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:33:57.566 [2024-07-25 11:16:04.634430] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:57.567 11:16:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:57.567 11:16:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:57.567 11:16:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:57.567 11:16:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:57.567 11:16:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:57.567 11:16:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:57.567 11:16:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:57.567 11:16:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:57.567 11:16:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:57.567 11:16:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:57.567 11:16:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:57.567 11:16:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:57.823 11:16:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:57.823 "name": "raid_bdev1", 00:33:57.823 "uuid": "36bfc17e-b20a-4224-bd83-f1559c36844c", 00:33:57.823 "strip_size_kb": 0, 00:33:57.823 "state": "online", 00:33:57.823 "raid_level": "raid1", 00:33:57.823 "superblock": true, 00:33:57.823 "num_base_bdevs": 2, 00:33:57.823 "num_base_bdevs_discovered": 1, 00:33:57.823 "num_base_bdevs_operational": 1, 00:33:57.823 "base_bdevs_list": [ 00:33:57.823 { 00:33:57.823 "name": null, 00:33:57.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:57.823 "is_configured": false, 00:33:57.823 "data_offset": 256, 00:33:57.823 "data_size": 7936 00:33:57.823 }, 00:33:57.823 { 00:33:57.823 "name": "BaseBdev2", 00:33:57.823 "uuid": "ace9f258-721c-5ad0-ad93-8975b3a88313", 00:33:57.823 "is_configured": true, 00:33:57.823 "data_offset": 256, 00:33:57.823 "data_size": 7936 00:33:57.823 } 00:33:57.823 ] 00:33:57.823 }' 00:33:57.823 11:16:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:57.823 11:16:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:58.388 11:16:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:58.645 [2024-07-25 11:16:05.645173] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:58.645 [2024-07-25 11:16:05.645373] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:33:58.645 [2024-07-25 11:16:05.645402] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:33:58.645 [2024-07-25 11:16:05.645446] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:58.645 [2024-07-25 11:16:05.670483] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010980 00:33:58.645 [2024-07-25 11:16:05.672817] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:58.645 11:16:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@771 -- # sleep 1 00:33:59.578 11:16:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:59.578 11:16:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:59.578 11:16:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:59.578 11:16:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:59.578 11:16:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:59.836 11:16:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:59.836 11:16:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:59.836 11:16:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:59.836 "name": "raid_bdev1", 00:33:59.836 "uuid": "36bfc17e-b20a-4224-bd83-f1559c36844c", 00:33:59.836 "strip_size_kb": 0, 00:33:59.836 "state": "online", 00:33:59.836 "raid_level": "raid1", 00:33:59.836 "superblock": true, 00:33:59.836 "num_base_bdevs": 2, 00:33:59.836 "num_base_bdevs_discovered": 2, 00:33:59.836 "num_base_bdevs_operational": 2, 00:33:59.836 "process": { 00:33:59.836 
"type": "rebuild", 00:33:59.836 "target": "spare", 00:33:59.836 "progress": { 00:33:59.836 "blocks": 3072, 00:33:59.836 "percent": 38 00:33:59.836 } 00:33:59.836 }, 00:33:59.836 "base_bdevs_list": [ 00:33:59.836 { 00:33:59.836 "name": "spare", 00:33:59.836 "uuid": "c5dcb24b-45fb-5042-b13e-569805ddddd8", 00:33:59.836 "is_configured": true, 00:33:59.836 "data_offset": 256, 00:33:59.836 "data_size": 7936 00:33:59.836 }, 00:33:59.836 { 00:33:59.836 "name": "BaseBdev2", 00:33:59.836 "uuid": "ace9f258-721c-5ad0-ad93-8975b3a88313", 00:33:59.836 "is_configured": true, 00:33:59.836 "data_offset": 256, 00:33:59.836 "data_size": 7936 00:33:59.836 } 00:33:59.836 ] 00:33:59.836 }' 00:33:59.836 11:16:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:00.094 11:16:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:00.094 11:16:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:00.094 11:16:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:00.094 11:16:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:34:00.352 [2024-07-25 11:16:07.217863] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:00.352 [2024-07-25 11:16:07.285844] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:00.352 [2024-07-25 11:16:07.285917] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:00.352 [2024-07-25 11:16:07.285939] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:00.352 [2024-07-25 11:16:07.285957] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed 
to remove target bdev: No such device 00:34:00.352 11:16:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:00.352 11:16:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:00.352 11:16:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:00.352 11:16:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:00.352 11:16:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:00.352 11:16:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:34:00.352 11:16:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:00.352 11:16:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:00.352 11:16:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:00.352 11:16:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:00.352 11:16:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:00.352 11:16:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:00.610 11:16:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:00.610 "name": "raid_bdev1", 00:34:00.610 "uuid": "36bfc17e-b20a-4224-bd83-f1559c36844c", 00:34:00.610 "strip_size_kb": 0, 00:34:00.610 "state": "online", 00:34:00.610 "raid_level": "raid1", 00:34:00.610 "superblock": true, 
00:34:00.610 "num_base_bdevs": 2, 00:34:00.610 "num_base_bdevs_discovered": 1, 00:34:00.610 "num_base_bdevs_operational": 1, 00:34:00.610 "base_bdevs_list": [ 00:34:00.610 { 00:34:00.610 "name": null, 00:34:00.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:00.610 "is_configured": false, 00:34:00.610 "data_offset": 256, 00:34:00.610 "data_size": 7936 00:34:00.610 }, 00:34:00.610 { 00:34:00.610 "name": "BaseBdev2", 00:34:00.610 "uuid": "ace9f258-721c-5ad0-ad93-8975b3a88313", 00:34:00.610 "is_configured": true, 00:34:00.610 "data_offset": 256, 00:34:00.610 "data_size": 7936 00:34:00.610 } 00:34:00.610 ] 00:34:00.610 }' 00:34:00.610 11:16:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:00.611 11:16:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:01.176 11:16:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:34:01.434 [2024-07-25 11:16:08.341464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:01.434 [2024-07-25 11:16:08.341530] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:01.434 [2024-07-25 11:16:08.341560] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000042c80 00:34:01.434 [2024-07-25 11:16:08.341578] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:01.434 [2024-07-25 11:16:08.341845] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:01.434 [2024-07-25 11:16:08.341869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:01.434 [2024-07-25 11:16:08.341941] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:34:01.434 [2024-07-25 11:16:08.341961] 
bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:34:01.434 [2024-07-25 11:16:08.341976] bdev_raid.c:3738:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:34:01.434 [2024-07-25 11:16:08.342005] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:01.434 [2024-07-25 11:16:08.366772] bdev_raid.c: 263:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000010a50 00:34:01.434 spare 00:34:01.434 [2024-07-25 11:16:08.369112] bdev_raid.c:2921:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:01.434 11:16:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # sleep 1 00:34:02.400 11:16:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:02.400 11:16:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:02.400 11:16:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:02.400 11:16:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:02.400 11:16:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:02.400 11:16:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:02.400 11:16:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:02.662 11:16:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:02.662 "name": "raid_bdev1", 00:34:02.662 "uuid": "36bfc17e-b20a-4224-bd83-f1559c36844c", 
00:34:02.662 "strip_size_kb": 0, 00:34:02.662 "state": "online", 00:34:02.662 "raid_level": "raid1", 00:34:02.662 "superblock": true, 00:34:02.662 "num_base_bdevs": 2, 00:34:02.662 "num_base_bdevs_discovered": 2, 00:34:02.662 "num_base_bdevs_operational": 2, 00:34:02.662 "process": { 00:34:02.662 "type": "rebuild", 00:34:02.662 "target": "spare", 00:34:02.662 "progress": { 00:34:02.662 "blocks": 3072, 00:34:02.662 "percent": 38 00:34:02.662 } 00:34:02.662 }, 00:34:02.662 "base_bdevs_list": [ 00:34:02.662 { 00:34:02.662 "name": "spare", 00:34:02.662 "uuid": "c5dcb24b-45fb-5042-b13e-569805ddddd8", 00:34:02.662 "is_configured": true, 00:34:02.662 "data_offset": 256, 00:34:02.662 "data_size": 7936 00:34:02.662 }, 00:34:02.662 { 00:34:02.662 "name": "BaseBdev2", 00:34:02.662 "uuid": "ace9f258-721c-5ad0-ad93-8975b3a88313", 00:34:02.662 "is_configured": true, 00:34:02.662 "data_offset": 256, 00:34:02.662 "data_size": 7936 00:34:02.662 } 00:34:02.662 ] 00:34:02.662 }' 00:34:02.662 11:16:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:02.662 11:16:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:02.663 11:16:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:02.663 11:16:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:02.663 11:16:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@782 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:34:02.921 [2024-07-25 11:16:09.918222] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:02.921 [2024-07-25 11:16:09.982317] bdev_raid.c:2557:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:02.921 [2024-07-25 
11:16:09.982374] bdev_raid.c: 343:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:02.921 [2024-07-25 11:16:09.982400] bdev_raid.c:2162:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:02.921 [2024-07-25 11:16:09.982412] bdev_raid.c:2495:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:03.181 11:16:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@783 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:03.181 11:16:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:03.181 11:16:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:03.181 11:16:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:03.181 11:16:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:03.181 11:16:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:34:03.181 11:16:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:03.181 11:16:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:03.181 11:16:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:03.181 11:16:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:03.181 11:16:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:03.181 11:16:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:03.181 11:16:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:03.181 "name": "raid_bdev1", 00:34:03.181 "uuid": "36bfc17e-b20a-4224-bd83-f1559c36844c", 00:34:03.181 "strip_size_kb": 0, 00:34:03.181 "state": "online", 00:34:03.181 "raid_level": "raid1", 00:34:03.181 "superblock": true, 00:34:03.181 "num_base_bdevs": 2, 00:34:03.181 "num_base_bdevs_discovered": 1, 00:34:03.181 "num_base_bdevs_operational": 1, 00:34:03.181 "base_bdevs_list": [ 00:34:03.181 { 00:34:03.181 "name": null, 00:34:03.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:03.181 "is_configured": false, 00:34:03.181 "data_offset": 256, 00:34:03.181 "data_size": 7936 00:34:03.181 }, 00:34:03.181 { 00:34:03.181 "name": "BaseBdev2", 00:34:03.181 "uuid": "ace9f258-721c-5ad0-ad93-8975b3a88313", 00:34:03.181 "is_configured": true, 00:34:03.181 "data_offset": 256, 00:34:03.181 "data_size": 7936 00:34:03.181 } 00:34:03.181 ] 00:34:03.181 }' 00:34:03.181 11:16:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:03.181 11:16:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:03.748 11:16:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:03.748 11:16:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:03.748 11:16:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:03.748 11:16:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:03.748 11:16:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:03.748 11:16:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:03.748 11:16:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:04.007 11:16:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:04.007 "name": "raid_bdev1", 00:34:04.007 "uuid": "36bfc17e-b20a-4224-bd83-f1559c36844c", 00:34:04.007 "strip_size_kb": 0, 00:34:04.007 "state": "online", 00:34:04.007 "raid_level": "raid1", 00:34:04.007 "superblock": true, 00:34:04.007 "num_base_bdevs": 2, 00:34:04.007 "num_base_bdevs_discovered": 1, 00:34:04.007 "num_base_bdevs_operational": 1, 00:34:04.007 "base_bdevs_list": [ 00:34:04.007 { 00:34:04.007 "name": null, 00:34:04.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:04.007 "is_configured": false, 00:34:04.007 "data_offset": 256, 00:34:04.007 "data_size": 7936 00:34:04.007 }, 00:34:04.007 { 00:34:04.007 "name": "BaseBdev2", 00:34:04.007 "uuid": "ace9f258-721c-5ad0-ad93-8975b3a88313", 00:34:04.007 "is_configured": true, 00:34:04.007 "data_offset": 256, 00:34:04.007 "data_size": 7936 00:34:04.007 } 00:34:04.007 ] 00:34:04.007 }' 00:34:04.007 11:16:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:04.007 11:16:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:04.265 11:16:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:04.265 11:16:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:04.265 11:16:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@787 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:34:04.523 11:16:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@788 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:34:04.523 [2024-07-25 11:16:11.593954] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:34:04.523 [2024-07-25 11:16:11.594015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:04.523 [2024-07-25 11:16:11.594047] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000043280 00:34:04.523 [2024-07-25 11:16:11.594062] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:04.523 [2024-07-25 11:16:11.594304] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:04.523 [2024-07-25 11:16:11.594325] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:04.523 [2024-07-25 11:16:11.594386] bdev_raid.c:3875:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:34:04.523 [2024-07-25 11:16:11.594408] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:34:04.523 [2024-07-25 11:16:11.594424] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:34:04.523 BaseBdev1 00:34:04.523 11:16:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@789 -- # sleep 1 00:34:05.898 11:16:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@790 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:05.898 11:16:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:05.898 11:16:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:05.898 11:16:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:34:05.898 11:16:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:05.898 11:16:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:34:05.898 11:16:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:05.899 11:16:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:05.899 11:16:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:05.899 11:16:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:05.899 11:16:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:05.899 11:16:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:05.899 11:16:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:05.899 "name": "raid_bdev1", 00:34:05.899 "uuid": "36bfc17e-b20a-4224-bd83-f1559c36844c", 00:34:05.899 "strip_size_kb": 0, 00:34:05.899 "state": "online", 00:34:05.899 "raid_level": "raid1", 00:34:05.899 "superblock": true, 00:34:05.899 "num_base_bdevs": 2, 00:34:05.899 "num_base_bdevs_discovered": 1, 00:34:05.899 "num_base_bdevs_operational": 1, 00:34:05.899 "base_bdevs_list": [ 00:34:05.899 { 00:34:05.899 "name": null, 00:34:05.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:05.899 "is_configured": false, 00:34:05.899 "data_offset": 256, 00:34:05.899 "data_size": 7936 00:34:05.899 }, 00:34:05.899 { 00:34:05.899 "name": "BaseBdev2", 00:34:05.899 "uuid": "ace9f258-721c-5ad0-ad93-8975b3a88313", 00:34:05.899 "is_configured": true, 00:34:05.899 "data_offset": 256, 00:34:05.899 
"data_size": 7936 00:34:05.899 } 00:34:05.899 ] 00:34:05.899 }' 00:34:05.899 11:16:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:05.899 11:16:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:06.465 11:16:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@791 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:06.465 11:16:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:06.465 11:16:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:06.465 11:16:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:06.465 11:16:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:06.465 11:16:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:06.465 11:16:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:06.724 11:16:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:06.724 "name": "raid_bdev1", 00:34:06.724 "uuid": "36bfc17e-b20a-4224-bd83-f1559c36844c", 00:34:06.724 "strip_size_kb": 0, 00:34:06.724 "state": "online", 00:34:06.724 "raid_level": "raid1", 00:34:06.724 "superblock": true, 00:34:06.724 "num_base_bdevs": 2, 00:34:06.724 "num_base_bdevs_discovered": 1, 00:34:06.724 "num_base_bdevs_operational": 1, 00:34:06.724 "base_bdevs_list": [ 00:34:06.724 { 00:34:06.724 "name": null, 00:34:06.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:06.724 "is_configured": false, 00:34:06.724 "data_offset": 256, 00:34:06.724 "data_size": 7936 00:34:06.724 }, 
00:34:06.724 { 00:34:06.724 "name": "BaseBdev2", 00:34:06.724 "uuid": "ace9f258-721c-5ad0-ad93-8975b3a88313", 00:34:06.724 "is_configured": true, 00:34:06.724 "data_offset": 256, 00:34:06.724 "data_size": 7936 00:34:06.724 } 00:34:06.724 ] 00:34:06.724 }' 00:34:06.724 11:16:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:06.724 11:16:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:06.724 11:16:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:06.724 11:16:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:06.724 11:16:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@792 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:06.724 11:16:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:34:06.724 11:16:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:06.724 11:16:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:34:06.724 11:16:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:06.724 11:16:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:34:06.724 11:16:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:34:06.724 11:16:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:34:06.724 11:16:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:34:06.724 11:16:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:34:06.724 11:16:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:34:06.724 11:16:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:06.982 [2024-07-25 11:16:13.960447] bdev_raid.c:3312:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:06.982 [2024-07-25 11:16:13.960620] bdev_raid.c:3680:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:34:06.982 [2024-07-25 11:16:13.960641] bdev_raid.c:3699:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:34:06.982 request: 00:34:06.982 { 00:34:06.982 "base_bdev": "BaseBdev1", 00:34:06.982 "raid_bdev": "raid_bdev1", 00:34:06.982 "method": "bdev_raid_add_base_bdev", 00:34:06.982 "req_id": 1 00:34:06.982 } 00:34:06.982 Got JSON-RPC error response 00:34:06.982 response: 00:34:06.982 { 00:34:06.982 "code": -22, 00:34:06.982 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:34:06.982 } 00:34:06.982 11:16:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:34:06.982 11:16:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 
00:34:06.982 11:16:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:34:06.982 11:16:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:34:06.982 11:16:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@793 -- # sleep 1 00:34:07.917 11:16:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@794 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:07.917 11:16:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:07.917 11:16:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:07.917 11:16:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:07.917 11:16:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:07.917 11:16:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:34:07.917 11:16:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:07.917 11:16:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:07.917 11:16:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:07.917 11:16:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:07.917 11:16:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:07.917 11:16:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:08.176 11:16:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:08.176 "name": "raid_bdev1", 00:34:08.176 "uuid": "36bfc17e-b20a-4224-bd83-f1559c36844c", 00:34:08.176 "strip_size_kb": 0, 00:34:08.176 "state": "online", 00:34:08.176 "raid_level": "raid1", 00:34:08.176 "superblock": true, 00:34:08.176 "num_base_bdevs": 2, 00:34:08.176 "num_base_bdevs_discovered": 1, 00:34:08.176 "num_base_bdevs_operational": 1, 00:34:08.176 "base_bdevs_list": [ 00:34:08.176 { 00:34:08.176 "name": null, 00:34:08.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:08.176 "is_configured": false, 00:34:08.176 "data_offset": 256, 00:34:08.176 "data_size": 7936 00:34:08.176 }, 00:34:08.176 { 00:34:08.176 "name": "BaseBdev2", 00:34:08.176 "uuid": "ace9f258-721c-5ad0-ad93-8975b3a88313", 00:34:08.176 "is_configured": true, 00:34:08.176 "data_offset": 256, 00:34:08.176 "data_size": 7936 00:34:08.176 } 00:34:08.176 ] 00:34:08.176 }' 00:34:08.176 11:16:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:08.176 11:16:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:08.742 11:16:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@795 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:08.742 11:16:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:08.742 11:16:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:08.742 11:16:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:08.742 11:16:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:08.742 11:16:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:08.742 11:16:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:09.000 11:16:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:09.000 "name": "raid_bdev1", 00:34:09.000 "uuid": "36bfc17e-b20a-4224-bd83-f1559c36844c", 00:34:09.000 "strip_size_kb": 0, 00:34:09.000 "state": "online", 00:34:09.000 "raid_level": "raid1", 00:34:09.000 "superblock": true, 00:34:09.000 "num_base_bdevs": 2, 00:34:09.000 "num_base_bdevs_discovered": 1, 00:34:09.000 "num_base_bdevs_operational": 1, 00:34:09.000 "base_bdevs_list": [ 00:34:09.000 { 00:34:09.000 "name": null, 00:34:09.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:09.000 "is_configured": false, 00:34:09.000 "data_offset": 256, 00:34:09.000 "data_size": 7936 00:34:09.000 }, 00:34:09.000 { 00:34:09.000 "name": "BaseBdev2", 00:34:09.000 "uuid": "ace9f258-721c-5ad0-ad93-8975b3a88313", 00:34:09.000 "is_configured": true, 00:34:09.000 "data_offset": 256, 00:34:09.000 "data_size": 7936 00:34:09.000 } 00:34:09.000 ] 00:34:09.000 }' 00:34:09.000 11:16:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:09.000 11:16:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:09.000 11:16:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:09.000 11:16:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:09.000 11:16:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@798 -- # killprocess 3764921 00:34:09.000 11:16:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 3764921 ']' 00:34:09.000 11:16:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@954 -- # kill -0 3764921 00:34:09.000 11:16:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:34:09.000 11:16:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:09.000 11:16:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3764921 00:34:09.259 11:16:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:09.259 11:16:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:09.259 11:16:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3764921' 00:34:09.259 killing process with pid 3764921 00:34:09.259 11:16:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 3764921 00:34:09.259 Received shutdown signal, test time was about 60.000000 seconds 00:34:09.259 00:34:09.259 Latency(us) 00:34:09.259 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:09.259 =================================================================================================================== 00:34:09.259 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:09.259 [2024-07-25 11:16:16.133664] bdev_raid.c:1374:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:09.259 [2024-07-25 11:16:16.133791] bdev_raid.c: 487:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:09.259 11:16:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 3764921 00:34:09.259 [2024-07-25 11:16:16.133854] bdev_raid.c: 464:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:09.259 [2024-07-25 11:16:16.133870] bdev_raid.c: 378:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:34:09.543 [2024-07-25 11:16:16.450209] bdev_raid.c:1400:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:11.446 11:16:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@800 -- # return 0 00:34:11.446 00:34:11.446 real 0m29.970s 00:34:11.446 user 0m45.858s 00:34:11.446 sys 0m3.844s 00:34:11.446 11:16:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:11.446 11:16:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:11.446 ************************************ 00:34:11.446 END TEST raid_rebuild_test_sb_md_interleaved 00:34:11.446 ************************************ 00:34:11.446 11:16:18 bdev_raid -- bdev/bdev_raid.sh@996 -- # trap - EXIT 00:34:11.446 11:16:18 bdev_raid -- bdev/bdev_raid.sh@997 -- # cleanup 00:34:11.446 11:16:18 bdev_raid -- bdev/bdev_raid.sh@58 -- # '[' -n 3764921 ']' 00:34:11.446 11:16:18 bdev_raid -- bdev/bdev_raid.sh@58 -- # ps -p 3764921 00:34:11.446 11:16:18 bdev_raid -- bdev/bdev_raid.sh@62 -- # rm -rf /raidtest 00:34:11.446 00:34:11.447 real 20m31.699s 00:34:11.447 user 32m37.811s 00:34:11.447 sys 3m27.930s 00:34:11.447 11:16:18 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:11.447 11:16:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:11.447 ************************************ 00:34:11.447 END TEST bdev_raid 00:34:11.447 ************************************ 00:34:11.447 11:16:18 -- spdk/autotest.sh@195 -- # run_test bdevperf_config /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test_config.sh 00:34:11.447 11:16:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:11.447 11:16:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:11.447 11:16:18 -- common/autotest_common.sh@10 -- # set +x 00:34:11.447 ************************************ 00:34:11.447 START TEST bdevperf_config 
00:34:11.447 ************************************ 00:34:11.447 11:16:18 bdevperf_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test_config.sh 00:34:11.447 * Looking for test storage... 00:34:11.447 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/common.sh 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:11.447 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:34:11.447 11:16:18 bdevperf_config 
-- bdevperf/common.sh@8 -- # local job_section=job0 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:11.447 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/test_config.sh@19 -- # create_job job1 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:11.447 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:11.447 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:34:11.447 
11:16:18 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:11.447 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:11.447 11:16:18 bdevperf_config -- bdevperf/test_config.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:34:16.715 11:16:23 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-25 11:16:18.469241] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:34:16.715 [2024-07-25 11:16:18.469356] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3770323 ] 00:34:16.715 Using job config with 4 jobs 00:34:16.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.715 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:16.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.715 EAL: Requested device 0000:3d:01.1 cannot be used 00:34:16.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.715 EAL: Requested device 0000:3d:01.2 cannot be used 00:34:16.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.715 EAL: Requested device 0000:3d:01.3 cannot be used 00:34:16.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.715 EAL: Requested device 0000:3d:01.4 cannot be used 00:34:16.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.715 EAL: Requested device 0000:3d:01.5 cannot be used 00:34:16.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.715 EAL: Requested device 0000:3d:01.6 cannot be used 00:34:16.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.715 EAL: Requested device 0000:3d:01.7 cannot be used 00:34:16.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.715 EAL: Requested device 0000:3d:02.0 cannot be used 00:34:16.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.715 EAL: Requested device 0000:3d:02.1 cannot be used 00:34:16.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.715 EAL: Requested device 0000:3d:02.2 cannot be used 00:34:16.715 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.715 EAL: Requested 
device 0000:3d:02.3 cannot be used 00:34:16.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.716 EAL: Requested device 0000:3d:02.4 cannot be used 00:34:16.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.716 EAL: Requested device 0000:3d:02.5 cannot be used 00:34:16.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.716 EAL: Requested device 0000:3d:02.6 cannot be used 00:34:16.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.716 EAL: Requested device 0000:3d:02.7 cannot be used 00:34:16.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.716 EAL: Requested device 0000:3f:01.0 cannot be used 00:34:16.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.716 EAL: Requested device 0000:3f:01.1 cannot be used 00:34:16.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.716 EAL: Requested device 0000:3f:01.2 cannot be used 00:34:16.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.716 EAL: Requested device 0000:3f:01.3 cannot be used 00:34:16.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.716 EAL: Requested device 0000:3f:01.4 cannot be used 00:34:16.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.716 EAL: Requested device 0000:3f:01.5 cannot be used 00:34:16.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.716 EAL: Requested device 0000:3f:01.6 cannot be used 00:34:16.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.716 EAL: Requested device 0000:3f:01.7 cannot be used 00:34:16.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.716 EAL: Requested device 0000:3f:02.0 cannot be used 00:34:16.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.716 EAL: Requested device 0000:3f:02.1 
cannot be used 00:34:16.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.716 EAL: Requested device 0000:3f:02.2 cannot be used 00:34:16.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.716 EAL: Requested device 0000:3f:02.3 cannot be used 00:34:16.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.716 EAL: Requested device 0000:3f:02.4 cannot be used 00:34:16.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.716 EAL: Requested device 0000:3f:02.5 cannot be used 00:34:16.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.716 EAL: Requested device 0000:3f:02.6 cannot be used 00:34:16.716 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.716 EAL: Requested device 0000:3f:02.7 cannot be used 00:34:16.716 [2024-07-25 11:16:18.721053] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:16.716 [2024-07-25 11:16:19.022092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:16.716 cpumask for '\''job0'\'' is too big 00:34:16.716 cpumask for '\''job1'\'' is too big 00:34:16.716 cpumask for '\''job2'\'' is too big 00:34:16.716 cpumask for '\''job3'\'' is too big 00:34:16.716 Running I/O for 2 seconds... 
00:34:16.716 00:34:16.716 Latency(us) 00:34:16.716 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:16.716 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:16.716 Malloc0 : 2.01 23394.02 22.85 0.00 0.00 10928.50 1966.08 17196.65 00:34:16.716 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:16.716 Malloc0 : 2.02 23404.22 22.86 0.00 0.00 10898.77 1926.76 15204.35 00:34:16.716 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:16.716 Malloc0 : 2.03 23383.25 22.84 0.00 0.00 10882.53 1952.97 13159.63 00:34:16.716 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:16.716 Malloc0 : 2.03 23362.39 22.81 0.00 0.00 10865.92 1952.97 11744.05 00:34:16.716 =================================================================================================================== 00:34:16.716 Total : 93543.88 91.35 0.00 0.00 10893.88 1926.76 17196.65' 00:34:16.716 11:16:23 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-25 11:16:18.469241] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:34:16.716 [2024-07-25 11:16:18.469356] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3770323 ] 00:34:16.716 Using job config with 4 jobs 00:34:16.716 [2024-07-25 11:16:18.721053] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:16.716 [2024-07-25 11:16:19.022092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:16.716 cpumask for '\''job0'\'' is too big 00:34:16.717 cpumask for '\''job1'\'' is too big 00:34:16.717 cpumask for '\''job2'\'' is too big 00:34:16.717 cpumask for '\''job3'\'' is too big 00:34:16.717 Running I/O for 2 seconds... 
00:34:16.717 00:34:16.717 Latency(us) 00:34:16.717 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:16.717 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:16.717 Malloc0 : 2.01 23394.02 22.85 0.00 0.00 10928.50 1966.08 17196.65 00:34:16.717 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:16.717 Malloc0 : 2.02 23404.22 22.86 0.00 0.00 10898.77 1926.76 15204.35 00:34:16.717 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:16.717 Malloc0 : 2.03 23383.25 22.84 0.00 0.00 10882.53 1952.97 13159.63 00:34:16.717 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:16.717 Malloc0 : 2.03 23362.39 22.81 0.00 0.00 10865.92 1952.97 11744.05 00:34:16.717 =================================================================================================================== 00:34:16.717 Total : 93543.88 91.35 0.00 0.00 10893.88 1926.76 17196.65' 00:34:16.717 11:16:23 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-25 11:16:18.469241] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:34:16.717 [2024-07-25 11:16:18.469356] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3770323 ] 00:34:16.717 Using job config with 4 jobs 00:34:16.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.717 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:16.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.717 EAL: Requested device 0000:3d:01.1 cannot be used 00:34:16.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.717 EAL: Requested device 0000:3d:01.2 cannot be used 00:34:16.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.717 EAL: Requested device 0000:3d:01.3 cannot be used 00:34:16.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.717 EAL: Requested device 0000:3d:01.4 cannot be used 00:34:16.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.717 EAL: Requested device 0000:3d:01.5 cannot be used 00:34:16.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.717 EAL: Requested device 0000:3d:01.6 cannot be used 00:34:16.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.717 EAL: Requested device 0000:3d:01.7 cannot be used 00:34:16.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.717 EAL: Requested device 0000:3d:02.0 cannot be used 00:34:16.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.717 EAL: Requested device 0000:3d:02.1 cannot be used 00:34:16.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.717 EAL: Requested device 0000:3d:02.2 cannot be used 00:34:16.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.717 EAL: Requested 
device 0000:3d:02.3 cannot be used 00:34:16.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.717 EAL: Requested device 0000:3d:02.4 cannot be used 00:34:16.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.717 EAL: Requested device 0000:3d:02.5 cannot be used 00:34:16.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.717 EAL: Requested device 0000:3d:02.6 cannot be used 00:34:16.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.717 EAL: Requested device 0000:3d:02.7 cannot be used 00:34:16.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.717 EAL: Requested device 0000:3f:01.0 cannot be used 00:34:16.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.717 EAL: Requested device 0000:3f:01.1 cannot be used 00:34:16.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.717 EAL: Requested device 0000:3f:01.2 cannot be used 00:34:16.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.717 EAL: Requested device 0000:3f:01.3 cannot be used 00:34:16.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.717 EAL: Requested device 0000:3f:01.4 cannot be used 00:34:16.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.717 EAL: Requested device 0000:3f:01.5 cannot be used 00:34:16.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.717 EAL: Requested device 0000:3f:01.6 cannot be used 00:34:16.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.717 EAL: Requested device 0000:3f:01.7 cannot be used 00:34:16.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.717 EAL: Requested device 0000:3f:02.0 cannot be used 00:34:16.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.717 EAL: Requested device 0000:3f:02.1 
cannot be used 00:34:16.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.717 EAL: Requested device 0000:3f:02.2 cannot be used 00:34:16.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.717 EAL: Requested device 0000:3f:02.3 cannot be used 00:34:16.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.717 EAL: Requested device 0000:3f:02.4 cannot be used 00:34:16.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.717 EAL: Requested device 0000:3f:02.5 cannot be used 00:34:16.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.717 EAL: Requested device 0000:3f:02.6 cannot be used 00:34:16.717 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.717 EAL: Requested device 0000:3f:02.7 cannot be used 00:34:16.717 [2024-07-25 11:16:18.721053] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:16.717 [2024-07-25 11:16:19.022092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:16.717 cpumask for '\''job0'\'' is too big 00:34:16.717 cpumask for '\''job1'\'' is too big 00:34:16.717 cpumask for '\''job2'\'' is too big 00:34:16.717 cpumask for '\''job3'\'' is too big 00:34:16.717 Running I/O for 2 seconds... 
00:34:16.717 00:34:16.717 Latency(us) 00:34:16.717 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:16.717 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:16.717 Malloc0 : 2.01 23394.02 22.85 0.00 0.00 10928.50 1966.08 17196.65 00:34:16.717 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:16.717 Malloc0 : 2.02 23404.22 22.86 0.00 0.00 10898.77 1926.76 15204.35 00:34:16.717 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:16.717 Malloc0 : 2.03 23383.25 22.84 0.00 0.00 10882.53 1952.97 13159.63 00:34:16.717 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:16.717 Malloc0 : 2.03 23362.39 22.81 0.00 0.00 10865.92 1952.97 11744.05 00:34:16.717 =================================================================================================================== 00:34:16.717 Total : 93543.88 91.35 0.00 0.00 10893.88 1926.76 17196.65' 00:34:16.717 11:16:23 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:34:16.717 11:16:23 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:34:16.717 11:16:23 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:34:16.717 11:16:23 bdevperf_config -- bdevperf/test_config.sh@25 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -C -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:34:16.717 [2024-07-25 11:16:23.768810] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:34:16.717 [2024-07-25 11:16:23.768928] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3771133 ] 00:34:16.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.976 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:16.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.976 EAL: Requested device 0000:3d:01.1 cannot be used 00:34:16.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.976 EAL: Requested device 0000:3d:01.2 cannot be used 00:34:16.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.976 EAL: Requested device 0000:3d:01.3 cannot be used 00:34:16.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.976 EAL: Requested device 0000:3d:01.4 cannot be used 00:34:16.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.976 EAL: Requested device 0000:3d:01.5 cannot be used 00:34:16.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.976 EAL: Requested device 0000:3d:01.6 cannot be used 00:34:16.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.976 EAL: Requested device 0000:3d:01.7 cannot be used 00:34:16.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.976 EAL: Requested device 0000:3d:02.0 cannot be used 00:34:16.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.976 EAL: Requested device 0000:3d:02.1 cannot be used 00:34:16.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.976 EAL: Requested device 0000:3d:02.2 cannot be used 00:34:16.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.976 EAL: Requested device 0000:3d:02.3 cannot be used 
00:34:16.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.976 EAL: Requested device 0000:3d:02.4 cannot be used 00:34:16.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.976 EAL: Requested device 0000:3d:02.5 cannot be used 00:34:16.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.976 EAL: Requested device 0000:3d:02.6 cannot be used 00:34:16.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.976 EAL: Requested device 0000:3d:02.7 cannot be used 00:34:16.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.976 EAL: Requested device 0000:3f:01.0 cannot be used 00:34:16.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.976 EAL: Requested device 0000:3f:01.1 cannot be used 00:34:16.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.976 EAL: Requested device 0000:3f:01.2 cannot be used 00:34:16.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.976 EAL: Requested device 0000:3f:01.3 cannot be used 00:34:16.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.976 EAL: Requested device 0000:3f:01.4 cannot be used 00:34:16.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.976 EAL: Requested device 0000:3f:01.5 cannot be used 00:34:16.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.976 EAL: Requested device 0000:3f:01.6 cannot be used 00:34:16.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.976 EAL: Requested device 0000:3f:01.7 cannot be used 00:34:16.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.976 EAL: Requested device 0000:3f:02.0 cannot be used 00:34:16.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.976 EAL: Requested device 0000:3f:02.1 cannot be used 00:34:16.976 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.976 EAL: Requested device 0000:3f:02.2 cannot be used 00:34:16.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.976 EAL: Requested device 0000:3f:02.3 cannot be used 00:34:16.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.976 EAL: Requested device 0000:3f:02.4 cannot be used 00:34:16.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.976 EAL: Requested device 0000:3f:02.5 cannot be used 00:34:16.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.976 EAL: Requested device 0000:3f:02.6 cannot be used 00:34:16.976 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:16.976 EAL: Requested device 0000:3f:02.7 cannot be used 00:34:16.976 [2024-07-25 11:16:24.011881] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:17.235 [2024-07-25 11:16:24.316419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:18.169 cpumask for 'job0' is too big 00:34:18.169 cpumask for 'job1' is too big 00:34:18.169 cpumask for 'job2' is too big 00:34:18.169 cpumask for 'job3' is too big 00:34:22.394 11:16:29 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:34:22.394 Running I/O for 2 seconds... 
00:34:22.394 00:34:22.394 Latency(us) 00:34:22.394 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:22.394 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:22.394 Malloc0 : 2.02 23446.79 22.90 0.00 0.00 10905.58 1966.08 17091.79 00:34:22.394 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:22.394 Malloc0 : 2.02 23425.64 22.88 0.00 0.00 10888.21 1952.97 15099.49 00:34:22.394 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:22.394 Malloc0 : 2.02 23404.71 22.86 0.00 0.00 10873.66 1952.97 13159.63 00:34:22.394 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:22.394 Malloc0 : 2.03 23383.80 22.84 0.00 0.00 10858.47 1952.97 11272.19 00:34:22.394 =================================================================================================================== 00:34:22.394 Total : 93660.94 91.47 0.00 0.00 10881.48 1952.97 17091.79' 00:34:22.394 11:16:29 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup 00:34:22.394 11:16:29 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:34:22.394 11:16:29 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:34:22.394 11:16:29 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:34:22.394 11:16:29 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:34:22.394 11:16:29 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:34:22.394 11:16:29 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:34:22.394 11:16:29 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:34:22.394 11:16:29 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:22.394 00:34:22.394 11:16:29 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:22.394 11:16:29 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job 
job1 write Malloc0 00:34:22.394 11:16:29 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:34:22.394 11:16:29 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:34:22.394 11:16:29 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:34:22.394 11:16:29 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:34:22.394 11:16:29 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:34:22.394 11:16:29 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:22.394 00:34:22.394 11:16:29 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:22.394 11:16:29 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:34:22.394 11:16:29 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:34:22.394 11:16:29 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:34:22.394 11:16:29 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:34:22.394 11:16:29 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:34:22.394 11:16:29 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:34:22.395 11:16:29 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:22.395 00:34:22.395 11:16:29 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:22.395 11:16:29 bdevperf_config -- bdevperf/test_config.sh@32 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:34:27.660 11:16:34 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-25 11:16:29.138537] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:34:27.660 [2024-07-25 11:16:29.138652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3772115 ] 00:34:27.660 Using job config with 3 jobs 00:34:27.660 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.660 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:27.660 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.660 EAL: Requested device 0000:3d:01.1 cannot be used 00:34:27.660 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.660 EAL: Requested device 0000:3d:01.2 cannot be used 00:34:27.660 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.660 EAL: Requested device 0000:3d:01.3 cannot be used 00:34:27.660 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.660 EAL: Requested device 0000:3d:01.4 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3d:01.5 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3d:01.6 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3d:01.7 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3d:02.0 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3d:02.1 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3d:02.2 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested 
device 0000:3d:02.3 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3d:02.4 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3d:02.5 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3d:02.6 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3d:02.7 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3f:01.0 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3f:01.1 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3f:01.2 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3f:01.3 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3f:01.4 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3f:01.5 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3f:01.6 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3f:01.7 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3f:02.0 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3f:02.1 
cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3f:02.2 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3f:02.3 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3f:02.4 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3f:02.5 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3f:02.6 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3f:02.7 cannot be used 00:34:27.661 [2024-07-25 11:16:29.378606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:27.661 [2024-07-25 11:16:29.674347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:27.661 cpumask for '\''job0'\'' is too big 00:34:27.661 cpumask for '\''job1'\'' is too big 00:34:27.661 cpumask for '\''job2'\'' is too big 00:34:27.661 Running I/O for 2 seconds... 
00:34:27.661 00:34:27.661 Latency(us) 00:34:27.661 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:27.661 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:34:27.661 Malloc0 : 2.02 31621.95 30.88 0.00 0.00 8092.23 1887.44 12058.62 00:34:27.661 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:34:27.661 Malloc0 : 2.02 31593.38 30.85 0.00 0.00 8081.29 1887.44 10171.19 00:34:27.661 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:34:27.661 Malloc0 : 2.02 31565.11 30.83 0.00 0.00 8069.06 1887.44 8388.61 00:34:27.661 =================================================================================================================== 00:34:27.661 Total : 94780.44 92.56 0.00 0.00 8080.86 1887.44 12058.62' 00:34:27.661 11:16:34 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-25 11:16:29.138537] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:34:27.661 [2024-07-25 11:16:29.138652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3772115 ] 00:34:27.661 Using job config with 3 jobs 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3d:01.1 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3d:01.2 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3d:01.3 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3d:01.4 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3d:01.5 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3d:01.6 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3d:01.7 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3d:02.0 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3d:02.1 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3d:02.2 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested 
device 0000:3d:02.3 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3d:02.4 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3d:02.5 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3d:02.6 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3d:02.7 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3f:01.0 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3f:01.1 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3f:01.2 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3f:01.3 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3f:01.4 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3f:01.5 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3f:01.6 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3f:01.7 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3f:02.0 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3f:02.1 
cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3f:02.2 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3f:02.3 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3f:02.4 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3f:02.5 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3f:02.6 cannot be used 00:34:27.661 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.661 EAL: Requested device 0000:3f:02.7 cannot be used 00:34:27.661 [2024-07-25 11:16:29.378606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:27.661 [2024-07-25 11:16:29.674347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:27.661 cpumask for '\''job0'\'' is too big 00:34:27.661 cpumask for '\''job1'\'' is too big 00:34:27.661 cpumask for '\''job2'\'' is too big 00:34:27.661 Running I/O for 2 seconds... 
00:34:27.661 00:34:27.661 Latency(us) 00:34:27.661 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:27.661 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:34:27.661 Malloc0 : 2.02 31621.95 30.88 0.00 0.00 8092.23 1887.44 12058.62 00:34:27.661 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:34:27.661 Malloc0 : 2.02 31593.38 30.85 0.00 0.00 8081.29 1887.44 10171.19 00:34:27.661 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:34:27.661 Malloc0 : 2.02 31565.11 30.83 0.00 0.00 8069.06 1887.44 8388.61 00:34:27.662 =================================================================================================================== 00:34:27.662 Total : 94780.44 92.56 0.00 0.00 8080.86 1887.44 12058.62' 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-25 11:16:29.138537] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:34:27.662 [2024-07-25 11:16:29.138652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3772115 ] 00:34:27.662 Using job config with 3 jobs 00:34:27.662 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:27.662 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:27.662 [log trimmed: the same qat_pci_device_allocate()/EAL "cannot be used" message pair repeats for each remaining QAT function 0000:3d:01.1 through 0000:3f:02.7] 00:34:27.662 [2024-07-25 11:16:29.378606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:27.662 [2024-07-25
11:16:29.674347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:27.662 cpumask for '\''job0'\'' is too big 00:34:27.662 cpumask for '\''job1'\'' is too big 00:34:27.662 cpumask for '\''job2'\'' is too big 00:34:27.662 Running I/O for 2 seconds... 00:34:27.662 [log trimmed: 3-job latency table repeated verbatim from the capture above (Total : 94780.44 92.56 0.00 0.00 8080.86 1887.44 12058.62)]' 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:34:27.662
11:16:34 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:27.662 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:27.662 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:27.662 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/common.sh@9 -- # local 
rw= 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:27.662 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:27.662 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:27.662 11:16:34 bdevperf_config -- bdevperf/test_config.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:34:32.930 11:16:39 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-25 11:16:34.487914] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:34:32.930 [2024-07-25 11:16:34.488034] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3772967 ] 00:34:32.930 Using job config with 4 jobs 00:34:32.930 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:32.930 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:32.930 [log trimmed: the same qat_pci_device_allocate()/EAL "cannot be used" message pair repeats for each remaining QAT function 0000:3d:01.1 through 0000:3f:02.7] 00:34:32.930 [2024-07-25 11:16:34.731879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:32.931 [2024-07-25 11:16:35.038496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:32.931 cpumask for '\''job0'\'' is too big 00:34:32.931 cpumask for '\''job1'\'' is too big 00:34:32.931 cpumask for '\''job2'\'' is too big 00:34:32.931 cpumask for '\''job3'\'' is too big 00:34:32.931 Running I/O for 2 seconds... 
00:34:32.931 00:34:32.931 Latency(us) 00:34:32.931 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:32.931 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:32.931 Malloc0 : 2.03 11726.26 11.45 0.00 0.00 21815.27 4089.45 34812.72 00:34:32.931 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:32.931 Malloc1 : 2.03 11715.11 11.44 0.00 0.00 21814.00 4954.52 34603.01 00:34:32.931 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:32.931 Malloc0 : 2.03 11704.70 11.43 0.00 0.00 21750.66 4037.02 30618.42 00:34:32.931 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:32.931 Malloc1 : 2.04 11693.71 11.42 0.00 0.00 21747.65 4849.66 30408.70 00:34:32.931 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:32.931 Malloc0 : 2.05 11745.42 11.47 0.00 0.00 21571.96 4037.02 26319.26 00:34:32.931 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:32.931 Malloc1 : 2.05 11734.48 11.46 0.00 0.00 21571.68 4849.66 26214.40 00:34:32.931 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:32.931 Malloc0 : 2.05 11724.12 11.45 0.00 0.00 21510.69 4010.80 22229.81 00:34:32.931 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:34:32.931 Malloc1 : 2.05 11713.25 11.44 0.00 0.00 21507.45 5033.16 22229.81 00:34:32.931 =================================================================================================================== 00:34:32.931 Total : 93757.05 91.56 0.00 0.00 21660.53 4010.80 34812.72' 00:34:32.931 11:16:39 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-25 11:16:34.487914] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:34:32.931 [2024-07-25 11:16:34.488034] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3772967 ] 00:34:32.931 Using job config with 4 jobs 00:34:32.931 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:32.931 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:32.931 [log trimmed: the same qat_pci_device_allocate()/EAL "cannot be used" message pair repeats for each remaining QAT function 0000:3d:01.1 through 0000:3f:02.7] 00:34:32.931 [2024-07-25 11:16:34.731879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:32.931 [2024-07-25 11:16:35.038496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:32.931 cpumask for '\''job0'\'' is too big 00:34:32.931 cpumask for '\''job1'\'' is too big 00:34:32.931 cpumask for '\''job2'\'' is too big 00:34:32.931 cpumask for '\''job3'\'' is too big 00:34:32.931 Running I/O for 2 seconds... 
00:34:32.931 [log trimmed: 4-job latency table repeated verbatim from the bdevperf_output capture above (Total : 93757.05 91.56 0.00 0.00 21660.53 4010.80 34812.72)]' 00:34:32.931 11:16:39 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:34:32.931 11:16:39 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-25 11:16:34.487914] Starting SPDK v24.09-pre 
git sha1 704257090 / DPDK 24.03.0 initialization... 00:34:32.931 [2024-07-25 11:16:34.488034] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3772967 ] 00:34:32.931 Using job config with 4 jobs 00:34:32.931 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:32.931 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:32.931 [log trimmed: the same qat_pci_device_allocate()/EAL "cannot be used" message pair repeats for each remaining QAT function 0000:3d:01.1 through 0000:3f:02.7] 00:34:32.932 [2024-07-25 11:16:34.731879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:32.932 [2024-07-25 11:16:35.038496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:32.932 cpumask for '\''job0'\'' is too big 00:34:32.932 cpumask for '\''job1'\'' is too big 00:34:32.932 cpumask for '\''job2'\'' is too big 00:34:32.932 cpumask for '\''job3'\'' is too big 00:34:32.932 Running I/O for 2 seconds... 
00:34:32.932 [log trimmed: 4-job latency table repeated verbatim from the bdevperf_output capture above (Total : 93757.05 91.56 0.00 0.00 21660.53 4010.80 34812.72)]' 00:34:32.932 11:16:39 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:34:32.932 11:16:39 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:34:32.932 11:16:39 bdevperf_config -- bdevperf/test_config.sh@44 -- 
# cleanup 00:34:32.932 11:16:39 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:34:32.932 11:16:39 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:34:32.932 00:34:32.932 real 0m21.572s 00:34:32.932 user 0m19.623s 00:34:32.932 sys 0m1.745s 00:34:32.932 11:16:39 bdevperf_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:32.932 11:16:39 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:34:32.932 ************************************ 00:34:32.932 END TEST bdevperf_config 00:34:32.932 ************************************ 00:34:32.932 11:16:39 -- spdk/autotest.sh@196 -- # uname -s 00:34:32.932 11:16:39 -- spdk/autotest.sh@196 -- # [[ Linux == Linux ]] 00:34:32.932 11:16:39 -- spdk/autotest.sh@197 -- # run_test reactor_set_interrupt /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:34:32.932 11:16:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:32.932 11:16:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:32.932 11:16:39 -- common/autotest_common.sh@10 -- # set +x 00:34:32.932 ************************************ 00:34:32.932 START TEST reactor_set_interrupt 00:34:32.932 ************************************ 00:34:32.932 11:16:39 reactor_set_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:34:32.932 * Looking for test storage... 
00:34:32.932 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:32.932 11:16:39 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/interrupt_common.sh 00:34:32.932 11:16:39 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:34:32.932 11:16:40 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:32.932 11:16:40 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # testdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:32.932 11:16:40 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/../.. 00:34:32.932 11:16:40 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:34:32.932 11:16:40 reactor_set_interrupt -- interrupt/interrupt_common.sh@7 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh 00:34:32.932 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:34:32.932 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@34 -- # set -e 00:34:32.932 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:34:32.932 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@36 -- # shopt -s extglob 00:34:32.932 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:34:32.932 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/crypto-phy-autotest/spdk/../output ']' 00:34:32.932 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@44 -- # [[ -e 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh ]] 00:34:32.932 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh 00:34:32.932 11:16:40 reactor_set_interrupt -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:34:32.932 11:16:40 reactor_set_interrupt -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:34:32.932 11:16:40 reactor_set_interrupt -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=y 00:34:32.932 11:16:40 reactor_set_interrupt -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:34:32.932 11:16:40 reactor_set_interrupt -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:34:32.932 11:16:40 reactor_set_interrupt -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:34:32.932 11:16:40 reactor_set_interrupt -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:34:32.932 11:16:40 reactor_set_interrupt -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:34:32.932 11:16:40 reactor_set_interrupt -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:34:32.932 11:16:40 reactor_set_interrupt -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:34:32.932 11:16:40 reactor_set_interrupt -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:34:32.932 11:16:40 reactor_set_interrupt -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:34:32.932 11:16:40 reactor_set_interrupt -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:34:32.932 11:16:40 reactor_set_interrupt -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:34:32.932 11:16:40 reactor_set_interrupt -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:34:32.933 
11:16:40 reactor_set_interrupt -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@22 -- # CONFIG_CET=n 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=y 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@37 -- # CONFIG_CRYPTO=y 00:34:32.933 11:16:40 reactor_set_interrupt -- 
common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 
00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@70 -- # CONFIG_FC=n 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=y 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=y 
00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=y 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:34:32.933 11:16:40 reactor_set_interrupt -- common/build_config.sh@83 -- # CONFIG_URING=n 00:34:32.933 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:34:32.933 11:16:40 reactor_set_interrupt -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:34:32.933 11:16:40 reactor_set_interrupt -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:34:33.195 11:16:40 reactor_set_interrupt -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:34:33.195 11:16:40 reactor_set_interrupt -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:34:33.195 11:16:40 reactor_set_interrupt -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:34:33.195 11:16:40 reactor_set_interrupt -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:34:33.195 11:16:40 reactor_set_interrupt -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:34:33.195 11:16:40 reactor_set_interrupt -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:34:33.195 11:16:40 reactor_set_interrupt -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:34:33.195 11:16:40 reactor_set_interrupt -- 
common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:34:33.195 11:16:40 reactor_set_interrupt -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:34:33.195 11:16:40 reactor_set_interrupt -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:34:33.195 11:16:40 reactor_set_interrupt -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:34:33.195 11:16:40 reactor_set_interrupt -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/config.h ]] 00:34:33.195 11:16:40 reactor_set_interrupt -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:34:33.195 #define SPDK_CONFIG_H 00:34:33.195 #define SPDK_CONFIG_APPS 1 00:34:33.195 #define SPDK_CONFIG_ARCH native 00:34:33.195 #define SPDK_CONFIG_ASAN 1 00:34:33.195 #undef SPDK_CONFIG_AVAHI 00:34:33.195 #undef SPDK_CONFIG_CET 00:34:33.195 #define SPDK_CONFIG_COVERAGE 1 00:34:33.195 #define SPDK_CONFIG_CROSS_PREFIX 00:34:33.195 #define SPDK_CONFIG_CRYPTO 1 00:34:33.195 #define SPDK_CONFIG_CRYPTO_MLX5 1 00:34:33.195 #undef SPDK_CONFIG_CUSTOMOCF 00:34:33.195 #undef SPDK_CONFIG_DAOS 00:34:33.195 #define SPDK_CONFIG_DAOS_DIR 00:34:33.195 #define SPDK_CONFIG_DEBUG 1 00:34:33.195 #define SPDK_CONFIG_DPDK_COMPRESSDEV 1 00:34:33.195 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:34:33.195 #define SPDK_CONFIG_DPDK_INC_DIR 00:34:33.195 #define SPDK_CONFIG_DPDK_LIB_DIR 00:34:33.195 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:34:33.195 #undef SPDK_CONFIG_DPDK_UADK 00:34:33.195 #define SPDK_CONFIG_ENV /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:34:33.195 #define SPDK_CONFIG_EXAMPLES 1 00:34:33.195 #undef SPDK_CONFIG_FC 00:34:33.195 #define SPDK_CONFIG_FC_PATH 00:34:33.195 #define SPDK_CONFIG_FIO_PLUGIN 1 00:34:33.195 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:34:33.195 #undef SPDK_CONFIG_FUSE 00:34:33.195 #undef SPDK_CONFIG_FUZZER 00:34:33.195 #define 
SPDK_CONFIG_FUZZER_LIB 00:34:33.195 #undef SPDK_CONFIG_GOLANG 00:34:33.195 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:34:33.195 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:34:33.195 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:34:33.195 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:34:33.195 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:34:33.195 #undef SPDK_CONFIG_HAVE_LIBBSD 00:34:33.195 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:34:33.195 #define SPDK_CONFIG_IDXD 1 00:34:33.195 #define SPDK_CONFIG_IDXD_KERNEL 1 00:34:33.195 #define SPDK_CONFIG_IPSEC_MB 1 00:34:33.195 #define SPDK_CONFIG_IPSEC_MB_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:34:33.195 #define SPDK_CONFIG_ISAL 1 00:34:33.195 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:34:33.195 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:34:33.195 #define SPDK_CONFIG_LIBDIR 00:34:33.195 #undef SPDK_CONFIG_LTO 00:34:33.195 #define SPDK_CONFIG_MAX_LCORES 128 00:34:33.195 #define SPDK_CONFIG_NVME_CUSE 1 00:34:33.195 #undef SPDK_CONFIG_OCF 00:34:33.195 #define SPDK_CONFIG_OCF_PATH 00:34:33.195 #define SPDK_CONFIG_OPENSSL_PATH 00:34:33.195 #undef SPDK_CONFIG_PGO_CAPTURE 00:34:33.195 #define SPDK_CONFIG_PGO_DIR 00:34:33.195 #undef SPDK_CONFIG_PGO_USE 00:34:33.195 #define SPDK_CONFIG_PREFIX /usr/local 00:34:33.195 #undef SPDK_CONFIG_RAID5F 00:34:33.195 #undef SPDK_CONFIG_RBD 00:34:33.195 #define SPDK_CONFIG_RDMA 1 00:34:33.195 #define SPDK_CONFIG_RDMA_PROV verbs 00:34:33.195 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:34:33.195 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:34:33.195 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:34:33.195 #define SPDK_CONFIG_SHARED 1 00:34:33.195 #undef SPDK_CONFIG_SMA 00:34:33.195 #define SPDK_CONFIG_TESTS 1 00:34:33.195 #undef SPDK_CONFIG_TSAN 00:34:33.195 #define SPDK_CONFIG_UBLK 1 00:34:33.195 #define SPDK_CONFIG_UBSAN 1 00:34:33.195 #undef SPDK_CONFIG_UNIT_TESTS 00:34:33.195 #undef SPDK_CONFIG_URING 00:34:33.195 #define SPDK_CONFIG_URING_PATH 00:34:33.195 #undef SPDK_CONFIG_URING_ZNS 
00:34:33.195 #undef SPDK_CONFIG_USDT 00:34:33.195 #define SPDK_CONFIG_VBDEV_COMPRESS 1 00:34:33.195 #define SPDK_CONFIG_VBDEV_COMPRESS_MLX5 1 00:34:33.195 #undef SPDK_CONFIG_VFIO_USER 00:34:33.195 #define SPDK_CONFIG_VFIO_USER_DIR 00:34:33.195 #define SPDK_CONFIG_VHOST 1 00:34:33.195 #define SPDK_CONFIG_VIRTIO 1 00:34:33.195 #undef SPDK_CONFIG_VTUNE 00:34:33.195 #define SPDK_CONFIG_VTUNE_DIR 00:34:33.195 #define SPDK_CONFIG_WERROR 1 00:34:33.195 #define SPDK_CONFIG_WPDK_DIR 00:34:33.195 #undef SPDK_CONFIG_XNVME 00:34:33.195 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:34:33.195 11:16:40 reactor_set_interrupt -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:34:33.195 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:34:33.195 11:16:40 reactor_set_interrupt -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:33.195 11:16:40 reactor_set_interrupt -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:33.195 11:16:40 reactor_set_interrupt -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:33.195 11:16:40 reactor_set_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.196 11:16:40 reactor_set_interrupt -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.196 11:16:40 reactor_set_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.196 11:16:40 reactor_set_interrupt -- paths/export.sh@5 -- # export PATH 00:34:33.196 11:16:40 reactor_set_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:34:33.196 11:16:40 reactor_set_interrupt -- pm/common@6 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:34:33.196 11:16:40 reactor_set_interrupt -- pm/common@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:34:33.196 11:16:40 reactor_set_interrupt -- pm/common@6 -- # 
_pmdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:34:33.196 11:16:40 reactor_set_interrupt -- pm/common@7 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/../../../ 00:34:33.196 11:16:40 reactor_set_interrupt -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:34:33.196 11:16:40 reactor_set_interrupt -- pm/common@64 -- # TEST_TAG=N/A 00:34:33.196 11:16:40 reactor_set_interrupt -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/crypto-phy-autotest/spdk/.run_test_name 00:34:33.196 11:16:40 reactor_set_interrupt -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power 00:34:33.196 11:16:40 reactor_set_interrupt -- pm/common@68 -- # uname -s 00:34:33.196 11:16:40 reactor_set_interrupt -- pm/common@68 -- # PM_OS=Linux 00:34:33.196 11:16:40 reactor_set_interrupt -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:34:33.196 11:16:40 reactor_set_interrupt -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:34:33.196 11:16:40 reactor_set_interrupt -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:34:33.196 11:16:40 reactor_set_interrupt -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:34:33.196 11:16:40 reactor_set_interrupt -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:34:33.196 11:16:40 reactor_set_interrupt -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:34:33.196 11:16:40 reactor_set_interrupt -- pm/common@76 -- # SUDO[0]= 00:34:33.196 11:16:40 reactor_set_interrupt -- pm/common@76 -- # SUDO[1]='sudo -E' 00:34:33.196 11:16:40 reactor_set_interrupt -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:34:33.196 11:16:40 reactor_set_interrupt -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:34:33.196 11:16:40 reactor_set_interrupt -- pm/common@81 -- # [[ Linux == Linux ]] 00:34:33.196 11:16:40 reactor_set_interrupt -- pm/common@81 -- # [[ 
............................... != QEMU ]] 00:34:33.196 11:16:40 reactor_set_interrupt -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:34:33.196 11:16:40 reactor_set_interrupt -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:34:33.196 11:16:40 reactor_set_interrupt -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:34:33.196 11:16:40 reactor_set_interrupt -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power ]] 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@58 -- # : 1 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@62 -- # : 0 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@64 -- # : 0 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@66 -- # : 1 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@68 -- # : 0 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@70 -- # : 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@72 -- # : 0 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@74 -- # : 1 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@75 -- # export 
SPDK_TEST_ISAL 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@76 -- # : 0 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@78 -- # : 0 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@80 -- # : 0 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@82 -- # : 0 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@84 -- # : 0 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@86 -- # : 0 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@88 -- # : 0 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@90 -- # : 0 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@92 -- # : 0 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@94 -- # : 0 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:34:33.196 11:16:40 reactor_set_interrupt -- 
common/autotest_common.sh@96 -- # : 0 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@98 -- # : 0 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@100 -- # : 0 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@102 -- # : rdma 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@104 -- # : 0 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@106 -- # : 0 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@108 -- # : 1 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@110 -- # : 0 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@112 -- # : 0 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@114 -- # : 0 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@116 -- # : 0 00:34:33.196 
11:16:40 reactor_set_interrupt -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@118 -- # : 1 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@120 -- # : 1 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@122 -- # : 1 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@124 -- # : 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@126 -- # : 0 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@128 -- # : 1 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@130 -- # : 0 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@132 -- # : 0 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@134 -- # : 0 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@136 -- # : 0 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@137 -- # export 
SPDK_TEST_OPAL 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@138 -- # : 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@140 -- # : true 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:34:33.196 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@142 -- # : 0 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@144 -- # : 0 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@146 -- # : 0 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@148 -- # : 0 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@150 -- # : 0 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@152 -- # : 0 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@154 -- # : 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@156 -- # : 0 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:34:33.197 11:16:40 reactor_set_interrupt -- 
common/autotest_common.sh@158 -- # : 0 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@160 -- # : 0 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@162 -- # : 1 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@164 -- # : 0 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@166 -- # : 0 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@169 -- # : 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@171 -- # : 0 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@173 -- # : 0 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:34:33.197 
11:16:40 reactor_set_interrupt -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@183 
-- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@200 -- # 
asan_suppression_file=/var/tmp/asan_suppression_file 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@202 -- # cat 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@255 -- # 
QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@265 -- # export valgrind= 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@265 -- # valgrind= 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@271 -- # uname -s 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@274 -- # [[ 1 -eq 1 ]] 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@278 -- # export HUGE_EVEN_ALLOC=yes 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@278 -- 
# HUGE_EVEN_ALLOC=yes 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@281 -- # MAKE=make 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j112 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@301 -- # TEST_MODE= 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@320 -- # [[ -z 3773852 ]] 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@320 -- # kill -0 3773852 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@333 -- # local mount target_dir 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:34:33.197 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.fqUUrL 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:34:33.198 
11:16:40 reactor_set_interrupt -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt /tmp/spdk.fqUUrL/tests/interrupt /tmp/spdk.fqUUrL 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@329 -- # df -T 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=954302464 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=4330127360 
00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=54884909056 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=61742305280 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=6857396224 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=30866341888 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871150592 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=4808704 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=12338671616 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=12348461056 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=9789440 00:34:33.198 11:16:40 
reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=30870007808 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871154688 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=1146880 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@364 -- # avails["$mount"]=6174224384 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@364 -- # sizes["$mount"]=6174228480 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:34:33.198 * Looking for test storage... 
00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@370 -- # local target_space new_size 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@374 -- # mount=/ 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@376 -- # target_space=54884909056 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@383 -- # new_size=9071988736 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 
00:34:33.198 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@391 -- # return 0 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@1682 -- # set -o errtrace 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@1687 -- # true 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@1689 -- # xtrace_fd 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@27 -- # exec 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@29 -- # exec 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@31 -- # xtrace_restore 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@18 -- # set -x 00:34:33.198 11:16:40 reactor_set_interrupt -- interrupt/interrupt_common.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/common.sh 00:34:33.198 11:16:40 reactor_set_interrupt -- interrupt/interrupt_common.sh@10 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:34:33.198 11:16:40 reactor_set_interrupt -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:34:33.198 11:16:40 reactor_set_interrupt -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:34:33.198 11:16:40 reactor_set_interrupt -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:34:33.198 11:16:40 reactor_set_interrupt -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:34:33.198 11:16:40 reactor_set_interrupt -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:34:33.198 11:16:40 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:34:33.198 11:16:40 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:34:33.198 11:16:40 
reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:34:33.198 11:16:40 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:33.198 11:16:40 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:34:33.198 11:16:40 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=3773986 00:34:33.198 11:16:40 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:33.198 11:16:40 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 3773986 /var/tmp/spdk.sock 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@831 -- # '[' -z 3773986 ']' 00:34:33.198 11:16:40 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:34:33.198 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:33.199 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:33.199 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:33.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:33.199 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:33.199 11:16:40 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:33.199 [2024-07-25 11:16:40.263678] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:34:33.199 [2024-07-25 11:16:40.263797] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3773986 ] 00:34:33.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.458 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:33.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.458 EAL: Requested device 0000:3d:01.1 cannot be used 00:34:33.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.458 EAL: Requested device 0000:3d:01.2 cannot be used 00:34:33.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.458 EAL: Requested device 0000:3d:01.3 cannot be used 00:34:33.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.458 EAL: Requested device 0000:3d:01.4 cannot be used 00:34:33.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.458 EAL: Requested device 0000:3d:01.5 cannot be used 00:34:33.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.458 EAL: Requested device 0000:3d:01.6 cannot be used 00:34:33.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.458 EAL: Requested device 0000:3d:01.7 cannot be used 00:34:33.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.458 EAL: Requested device 0000:3d:02.0 cannot be used 00:34:33.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.458 EAL: Requested device 0000:3d:02.1 cannot be used 00:34:33.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.458 EAL: Requested device 0000:3d:02.2 cannot be used 00:34:33.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.458 EAL: Requested device 0000:3d:02.3 cannot 
be used 00:34:33.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.458 EAL: Requested device 0000:3d:02.4 cannot be used 00:34:33.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.458 EAL: Requested device 0000:3d:02.5 cannot be used 00:34:33.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.458 EAL: Requested device 0000:3d:02.6 cannot be used 00:34:33.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.458 EAL: Requested device 0000:3d:02.7 cannot be used 00:34:33.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.458 EAL: Requested device 0000:3f:01.0 cannot be used 00:34:33.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.458 EAL: Requested device 0000:3f:01.1 cannot be used 00:34:33.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.458 EAL: Requested device 0000:3f:01.2 cannot be used 00:34:33.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.458 EAL: Requested device 0000:3f:01.3 cannot be used 00:34:33.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.458 EAL: Requested device 0000:3f:01.4 cannot be used 00:34:33.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.458 EAL: Requested device 0000:3f:01.5 cannot be used 00:34:33.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.458 EAL: Requested device 0000:3f:01.6 cannot be used 00:34:33.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.458 EAL: Requested device 0000:3f:01.7 cannot be used 00:34:33.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.458 EAL: Requested device 0000:3f:02.0 cannot be used 00:34:33.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.458 EAL: Requested device 0000:3f:02.1 cannot be used 00:34:33.458 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.458 EAL: Requested device 0000:3f:02.2 cannot be used 00:34:33.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.458 EAL: Requested device 0000:3f:02.3 cannot be used 00:34:33.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.458 EAL: Requested device 0000:3f:02.4 cannot be used 00:34:33.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.458 EAL: Requested device 0000:3f:02.5 cannot be used 00:34:33.458 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.458 EAL: Requested device 0000:3f:02.6 cannot be used 00:34:33.459 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:33.459 EAL: Requested device 0000:3f:02.7 cannot be used 00:34:33.459 [2024-07-25 11:16:40.490864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:33.717 [2024-07-25 11:16:40.764792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:33.717 [2024-07-25 11:16:40.764862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:33.717 [2024-07-25 11:16:40.764864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:34.284 [2024-07-25 11:16:41.215445] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:34:34.284 11:16:41 reactor_set_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:34.284 11:16:41 reactor_set_interrupt -- common/autotest_common.sh@864 -- # return 0 00:34:34.284 11:16:41 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:34:34.284 11:16:41 reactor_set_interrupt -- interrupt/common.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:34:34.542 Malloc0 00:34:34.542 Malloc1 00:34:34.542 Malloc2 00:34:34.542 11:16:41 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:34:34.542 11:16:41 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:34:34.542 11:16:41 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:34:34.542 11:16:41 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:34:34.801 5000+0 records in 00:34:34.801 5000+0 records out 00:34:34.801 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0254189 s, 403 MB/s 00:34:34.801 11:16:41 reactor_set_interrupt -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:34:35.060 AIO0 00:34:35.060 11:16:41 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 3773986 00:34:35.060 11:16:41 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 3773986 without_thd 00:34:35.060 11:16:41 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=3773986 00:34:35.060 11:16:41 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:34:35.060 11:16:41 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 
00:34:35.060 11:16:41 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:34:35.060 11:16:41 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:34:35.060 11:16:41 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:34:35.060 11:16:41 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:34:35.060 11:16:41 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:34:35.060 11:16:41 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:34:35.060 11:16:41 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:34:35.318 11:16:42 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:34:35.318 11:16:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:34:35.318 11:16:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:34:35.318 11:16:42 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:34:35.318 11:16:42 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:34:35.318 11:16:42 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:34:35.318 11:16:42 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:34:35.318 11:16:42 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:34:35.318 11:16:42 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:34:35.318 11:16:42 reactor_set_interrupt -- interrupt/common.sh@62 -- # 
echo '' 00:34:35.318 11:16:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:34:35.318 11:16:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:34:35.318 spdk_thread ids are 1 on reactor0. 00:34:35.318 11:16:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:34:35.318 11:16:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 3773986 0 00:34:35.318 11:16:42 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 3773986 0 idle 00:34:35.318 11:16:42 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=3773986 00:34:35.318 11:16:42 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:35.575 11:16:42 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:35.575 11:16:42 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:34:35.575 11:16:42 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:34:35.575 11:16:42 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:35.575 11:16:42 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:35.575 11:16:42 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:35.575 11:16:42 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 3773986 -w 256 00:34:35.575 11:16:42 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:34:35.575 11:16:42 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='3773986 root 20 0 20.1t 204288 35840 S 0.0 0.3 0:01.20 reactor_0' 00:34:35.575 11:16:42 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 3773986 root 20 0 20.1t 204288 35840 S 0.0 0.3 0:01.20 reactor_0 00:34:35.575 11:16:42 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:35.575 11:16:42 reactor_set_interrupt -- 
interrupt/common.sh@25 -- # awk '{print $9}' 00:34:35.575 11:16:42 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:34:35.575 11:16:42 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:34:35.575 11:16:42 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:34:35.575 11:16:42 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:34:35.575 11:16:42 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:34:35.575 11:16:42 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:35.575 11:16:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:34:35.575 11:16:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 3773986 1 00:34:35.575 11:16:42 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 3773986 1 idle 00:34:35.575 11:16:42 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=3773986 00:34:35.575 11:16:42 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:35.575 11:16:42 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:35.575 11:16:42 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:34:35.575 11:16:42 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:34:35.575 11:16:42 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:35.575 11:16:42 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:35.575 11:16:42 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:35.575 11:16:42 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 3773986 -w 256 00:34:35.576 11:16:42 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:34:35.832 11:16:42 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='3774069 root 20 0 20.1t 204288 35840 S 0.0 0.3 0:00.00 reactor_1' 
00:34:35.832 11:16:42 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 3774069 root 20 0 20.1t 204288 35840 S 0.0 0.3 0:00.00 reactor_1 00:34:35.832 11:16:42 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:35.832 11:16:42 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:35.832 11:16:42 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:34:35.832 11:16:42 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:34:35.832 11:16:42 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:34:35.832 11:16:42 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:34:35.832 11:16:42 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:34:35.832 11:16:42 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:35.832 11:16:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:34:35.832 11:16:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 3773986 2 00:34:35.832 11:16:42 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 3773986 2 idle 00:34:35.832 11:16:42 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=3773986 00:34:35.832 11:16:42 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:34:35.832 11:16:42 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:35.832 11:16:42 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:34:35.832 11:16:42 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:34:35.832 11:16:42 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:35.832 11:16:42 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:35.832 11:16:42 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:35.832 11:16:42 reactor_set_interrupt -- 
interrupt/common.sh@24 -- # top -bHn 1 -p 3773986 -w 256 00:34:35.832 11:16:42 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:34:36.090 11:16:42 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='3774070 root 20 0 20.1t 204288 35840 S 0.0 0.3 0:00.00 reactor_2' 00:34:36.090 11:16:42 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 3774070 root 20 0 20.1t 204288 35840 S 0.0 0.3 0:00.00 reactor_2 00:34:36.090 11:16:42 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:36.090 11:16:42 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:36.090 11:16:42 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:34:36.090 11:16:42 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:34:36.090 11:16:42 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:34:36.090 11:16:42 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:34:36.090 11:16:42 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:34:36.090 11:16:42 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:36.090 11:16:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:34:36.090 11:16:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:34:36.090 11:16:42 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:34:36.090 [2024-07-25 11:16:43.186099] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
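The busy/idle probes traced above (interrupt/common.sh@24 through @33) boil down to sampling one reactor thread's %CPU with `top` and comparing it against thresholds. The sketch below reconstructs that check from the logged commands only; the helper names and the 70%/30% cutoffs are inferred from the `[[ 99 -lt 70 ]]` and `[[ 0 -gt 30 ]]` tests in the trace, not taken from the actual SPDK `interrupt/common.sh`.

```shell
#!/usr/bin/env bash
# Reconstructed sketch of the reactor busy/idle probe seen in the log.
# Assumptions: "busy" means >= 70% CPU and "idle" means <= 30%, per the
# threshold comparisons visible in the trace.

# Extract the integer %CPU from a raw `top -bH` thread line.
# Field 9 of top's per-thread output is %CPU; leading whitespace is
# stripped first, then the fractional part is dropped (99.9 -> 99).
parse_cpu_rate() {
    local line=$1 rate
    rate=$(echo "$line" | sed -e 's/^\s*//g' | awk '{print $9}')
    echo "${rate%.*}"
}

# Compare a parsed rate against the inferred thresholds.
reactor_state_matches() {
    local rate=$1 state=$2
    if [[ $state = busy ]]; then
        [[ ${rate:-0} -ge 70 ]]
    else
        [[ ${rate:-0} -le 30 ]]
    fi
}

# Live usage would sample one batch iteration of top for the target
# pid, e.g.:
#   line=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}")
#   reactor_state_matches "$(parse_cpu_rate "$line")" idle
```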
00:34:36.090 11:16:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:34:36.347 [2024-07-25 11:16:43.401781] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:34:36.347 [2024-07-25 11:16:43.402164] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:34:36.347 11:16:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:34:36.605 [2024-07-25 11:16:43.633696] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:34:36.605 [2024-07-25 11:16:43.633868] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:34:36.605 11:16:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:34:36.605 11:16:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 3773986 0 00:34:36.605 11:16:43 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 3773986 0 busy 00:34:36.605 11:16:43 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=3773986 00:34:36.605 11:16:43 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:36.605 11:16:43 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:36.605 11:16:43 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:34:36.605 11:16:43 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:36.605 11:16:43 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:36.605 11:16:43 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:36.605 11:16:43 reactor_set_interrupt 
-- interrupt/common.sh@24 -- # top -bHn 1 -p 3773986 -w 256 00:34:36.605 11:16:43 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:34:36.862 11:16:43 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='3773986 root 20 0 20.1t 206976 35840 R 99.9 0.3 0:01.62 reactor_0' 00:34:36.862 11:16:43 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 3773986 root 20 0 20.1t 206976 35840 R 99.9 0.3 0:01.62 reactor_0 00:34:36.862 11:16:43 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:36.862 11:16:43 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:36.862 11:16:43 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:34:36.862 11:16:43 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:34:36.862 11:16:43 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:34:36.862 11:16:43 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:34:36.862 11:16:43 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:34:36.862 11:16:43 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:36.862 11:16:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:34:36.862 11:16:43 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 3773986 2 00:34:36.862 11:16:43 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 3773986 2 busy 00:34:36.862 11:16:43 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=3773986 00:34:36.862 11:16:43 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:34:36.862 11:16:43 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:36.862 11:16:43 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:34:36.862 11:16:43 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:36.862 11:16:43 
reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:36.862 11:16:43 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:36.862 11:16:43 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 3773986 -w 256 00:34:36.862 11:16:43 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:34:37.119 11:16:43 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='3774070 root 20 0 20.1t 206976 35840 R 93.3 0.3 0:00.35 reactor_2' 00:34:37.119 11:16:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 3774070 root 20 0 20.1t 206976 35840 R 93.3 0.3 0:00.35 reactor_2 00:34:37.119 11:16:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:37.119 11:16:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:37.119 11:16:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=93.3 00:34:37.119 11:16:44 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=93 00:34:37.119 11:16:44 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:34:37.119 11:16:44 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 93 -lt 70 ]] 00:34:37.119 11:16:44 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:34:37.119 11:16:44 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:37.119 11:16:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:34:37.119 [2024-07-25 11:16:44.221711] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
00:34:37.119 [2024-07-25 11:16:44.221832] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:34:37.377 11:16:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:34:37.377 11:16:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 3773986 2 00:34:37.377 11:16:44 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 3773986 2 idle 00:34:37.377 11:16:44 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=3773986 00:34:37.377 11:16:44 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:34:37.377 11:16:44 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:37.377 11:16:44 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:34:37.377 11:16:44 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:34:37.377 11:16:44 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:37.377 11:16:44 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:37.377 11:16:44 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:37.377 11:16:44 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 3773986 -w 256 00:34:37.377 11:16:44 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:34:37.377 11:16:44 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='3774070 root 20 0 20.1t 206976 35840 S 0.0 0.3 0:00.58 reactor_2' 00:34:37.377 11:16:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 3774070 root 20 0 20.1t 206976 35840 S 0.0 0.3 0:00.58 reactor_2 00:34:37.377 11:16:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:37.377 11:16:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:37.377 11:16:44 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:34:37.377 11:16:44 
reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:34:37.377 11:16:44 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:34:37.377 11:16:44 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:34:37.377 11:16:44 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:34:37.377 11:16:44 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:37.377 11:16:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:34:37.635 [2024-07-25 11:16:44.629710] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:34:37.635 [2024-07-25 11:16:44.629845] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:34:37.635 11:16:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:34:37.635 11:16:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:34:37.635 11:16:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@66 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:34:37.893 [2024-07-25 11:16:44.858043] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
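The `reactor_get_thread_ids` calls in this trace pipe `rpc.py thread_get_stats` JSON through a `jq` filter, after converting the hex cpumask (`0x1`) to its decimal string (`1`). The snippet below exercises just that `jq` step against a fabricated sample document; real `thread_get_stats` output carries more fields than shown here.

```shell
# Sketch of the jq filter behind reactor_get_thread_ids in the trace.
# The sample JSON is invented for illustration; the filter string is
# copied verbatim from the log (interrupt/common.sh@59).
sample='{"threads":[{"id":1,"cpumask":"1","name":"app_thread"},
                    {"id":2,"cpumask":"4","name":"worker"}]}'
jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'

# cpumask 0x1 is passed as the decimal string "1"; only app_thread
# matches, so this prints its id.
echo "$sample" | jq --arg reactor_cpumask 1 "$jq_str"
```

Note that `--arg` always binds a string, which is why the trace converts `0x1` to `1` first: the comparison is string equality against the `cpumask` field, not a numeric bitmask test.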
00:34:37.894 11:16:44 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 3773986 0 00:34:37.894 11:16:44 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 3773986 0 idle 00:34:37.894 11:16:44 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=3773986 00:34:37.894 11:16:44 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:37.894 11:16:44 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:37.894 11:16:44 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:34:37.894 11:16:44 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:34:37.894 11:16:44 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:37.894 11:16:44 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:37.894 11:16:44 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:37.894 11:16:44 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 3773986 -w 256 00:34:37.894 11:16:44 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:34:38.152 11:16:45 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='3773986 root 20 0 20.1t 206976 35840 S 0.0 0.3 0:02.43 reactor_0' 00:34:38.152 11:16:45 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 3773986 root 20 0 20.1t 206976 35840 S 0.0 0.3 0:02.43 reactor_0 00:34:38.152 11:16:45 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:38.152 11:16:45 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:38.152 11:16:45 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:34:38.152 11:16:45 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:34:38.152 11:16:45 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:34:38.152 11:16:45 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = 
\i\d\l\e ]] 00:34:38.152 11:16:45 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:34:38.152 11:16:45 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:38.152 11:16:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:34:38.152 11:16:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:34:38.152 11:16:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:34:38.152 11:16:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 3773986 00:34:38.152 11:16:45 reactor_set_interrupt -- common/autotest_common.sh@950 -- # '[' -z 3773986 ']' 00:34:38.152 11:16:45 reactor_set_interrupt -- common/autotest_common.sh@954 -- # kill -0 3773986 00:34:38.152 11:16:45 reactor_set_interrupt -- common/autotest_common.sh@955 -- # uname 00:34:38.152 11:16:45 reactor_set_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:38.152 11:16:45 reactor_set_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3773986 00:34:38.152 11:16:45 reactor_set_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:38.152 11:16:45 reactor_set_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:38.152 11:16:45 reactor_set_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3773986' 00:34:38.152 killing process with pid 3773986 00:34:38.152 11:16:45 reactor_set_interrupt -- common/autotest_common.sh@969 -- # kill 3773986 00:34:38.152 11:16:45 reactor_set_interrupt -- common/autotest_common.sh@974 -- # wait 3773986 00:34:40.696 11:16:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:34:40.696 11:16:47 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:34:40.696 11:16:47 reactor_set_interrupt -- 
interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:34:40.696 11:16:47 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:40.696 11:16:47 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:34:40.696 11:16:47 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=3775229 00:34:40.696 11:16:47 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:34:40.696 11:16:47 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:40.696 11:16:47 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 3775229 /var/tmp/spdk.sock 00:34:40.696 11:16:47 reactor_set_interrupt -- common/autotest_common.sh@831 -- # '[' -z 3775229 ']' 00:34:40.696 11:16:47 reactor_set_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:40.696 11:16:47 reactor_set_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:40.696 11:16:47 reactor_set_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:40.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:40.696 11:16:47 reactor_set_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:40.696 11:16:47 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:40.696 [2024-07-25 11:16:47.291608] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:34:40.696 [2024-07-25 11:16:47.291727] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3775229 ] 00:34:40.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.696 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:40.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.696 EAL: Requested device 0000:3d:01.1 cannot be used 00:34:40.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.696 EAL: Requested device 0000:3d:01.2 cannot be used 00:34:40.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.696 EAL: Requested device 0000:3d:01.3 cannot be used 00:34:40.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.696 EAL: Requested device 0000:3d:01.4 cannot be used 00:34:40.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.696 EAL: Requested device 0000:3d:01.5 cannot be used 00:34:40.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.696 EAL: Requested device 0000:3d:01.6 cannot be used 00:34:40.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.696 EAL: Requested device 0000:3d:01.7 cannot be used 00:34:40.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.696 EAL: Requested device 0000:3d:02.0 cannot be used 00:34:40.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.696 EAL: Requested device 0000:3d:02.1 cannot be used 00:34:40.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.696 EAL: Requested device 0000:3d:02.2 cannot be used 00:34:40.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.696 EAL: Requested device 0000:3d:02.3 cannot 
be used 00:34:40.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.696 EAL: Requested device 0000:3d:02.4 cannot be used 00:34:40.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.696 EAL: Requested device 0000:3d:02.5 cannot be used 00:34:40.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.696 EAL: Requested device 0000:3d:02.6 cannot be used 00:34:40.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.696 EAL: Requested device 0000:3d:02.7 cannot be used 00:34:40.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.696 EAL: Requested device 0000:3f:01.0 cannot be used 00:34:40.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.696 EAL: Requested device 0000:3f:01.1 cannot be used 00:34:40.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.696 EAL: Requested device 0000:3f:01.2 cannot be used 00:34:40.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.696 EAL: Requested device 0000:3f:01.3 cannot be used 00:34:40.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.696 EAL: Requested device 0000:3f:01.4 cannot be used 00:34:40.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.696 EAL: Requested device 0000:3f:01.5 cannot be used 00:34:40.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.696 EAL: Requested device 0000:3f:01.6 cannot be used 00:34:40.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.696 EAL: Requested device 0000:3f:01.7 cannot be used 00:34:40.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.696 EAL: Requested device 0000:3f:02.0 cannot be used 00:34:40.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.696 EAL: Requested device 0000:3f:02.1 cannot be used 00:34:40.696 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.696 EAL: Requested device 0000:3f:02.2 cannot be used 00:34:40.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.696 EAL: Requested device 0000:3f:02.3 cannot be used 00:34:40.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.696 EAL: Requested device 0000:3f:02.4 cannot be used 00:34:40.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.696 EAL: Requested device 0000:3f:02.5 cannot be used 00:34:40.696 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.696 EAL: Requested device 0000:3f:02.6 cannot be used 00:34:40.697 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:40.697 EAL: Requested device 0000:3f:02.7 cannot be used 00:34:40.697 [2024-07-25 11:16:47.515096] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:40.697 [2024-07-25 11:16:47.803176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:40.697 [2024-07-25 11:16:47.803204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:40.697 [2024-07-25 11:16:47.803204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:41.276 [2024-07-25 11:16:48.291572] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
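The `setup_bdev_aio` step that follows creates its AIO backing file with `dd if=/dev/zero ... bs=2048 count=5000`, and the log reports `10240000 bytes (10 MB, 9.8 MiB) copied`. The size arithmetic can be checked in isolation; the temp-file path below stands in for the workspace path used in the real test.

```shell
# Reproduce the aio backing-file creation from the test run, writing
# to a temp file instead of the jenkins workspace path.
aiofile=$(mktemp)
dd if=/dev/zero of="$aiofile" bs=2048 count=5000 2>/dev/null

# 2048-byte blocks x 5000 blocks = 10,240,000 bytes, matching the
# "10240000 bytes (10 MB, 9.8 MiB) copied" line in the log.
file_size=$(wc -c < "$aiofile")
echo "$file_size"

rm -f "$aiofile"
```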
00:34:41.276 11:16:48 reactor_set_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:41.276 11:16:48 reactor_set_interrupt -- common/autotest_common.sh@864 -- # return 0 00:34:41.276 11:16:48 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:34:41.276 11:16:48 reactor_set_interrupt -- interrupt/common.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:34:41.535 Malloc0 00:34:41.535 Malloc1 00:34:41.535 Malloc2 00:34:41.535 11:16:48 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:34:41.535 11:16:48 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:34:41.535 11:16:48 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:34:41.535 11:16:48 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:34:41.794 5000+0 records in 00:34:41.794 5000+0 records out 00:34:41.794 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0254992 s, 402 MB/s 00:34:41.794 11:16:48 reactor_set_interrupt -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:34:41.794 AIO0 00:34:42.053 11:16:48 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 3775229 00:34:42.053 11:16:48 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 3775229 00:34:42.053 11:16:48 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=3775229 00:34:42.053 11:16:48 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:34:42.053 11:16:48 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:34:42.053 11:16:48 
reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:34:42.053 11:16:48 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:34:42.053 11:16:48 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:34:42.053 11:16:48 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:34:42.053 11:16:48 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:34:42.053 11:16:48 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:34:42.053 11:16:48 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:34:42.053 11:16:49 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:34:42.053 11:16:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:34:42.053 11:16:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:34:42.053 11:16:49 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:34:42.054 11:16:49 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:34:42.054 11:16:49 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:34:42.054 11:16:49 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:34:42.054 11:16:49 reactor_set_interrupt -- interrupt/common.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:34:42.054 11:16:49 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:34:42.312 11:16:49 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:34:42.312 
11:16:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:34:42.312 11:16:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:34:42.312 spdk_thread ids are 1 on reactor0. 00:34:42.313 11:16:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:34:42.313 11:16:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 3775229 0 00:34:42.313 11:16:49 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 3775229 0 idle 00:34:42.313 11:16:49 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=3775229 00:34:42.313 11:16:49 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:42.313 11:16:49 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:42.313 11:16:49 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:34:42.313 11:16:49 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:34:42.313 11:16:49 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:42.313 11:16:49 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:42.313 11:16:49 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:42.313 11:16:49 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 3775229 -w 256 00:34:42.313 11:16:49 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:34:42.572 11:16:49 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='3775229 root 20 0 20.1t 202496 35840 S 0.0 0.3 0:01.24 reactor_0' 00:34:42.572 11:16:49 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 3775229 root 20 0 20.1t 202496 35840 S 0.0 0.3 0:01.24 reactor_0 00:34:42.572 11:16:49 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:42.572 11:16:49 reactor_set_interrupt -- interrupt/common.sh@25 -- # 
awk '{print $9}' 00:34:42.572 11:16:49 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:34:42.572 11:16:49 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:34:42.572 11:16:49 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:34:42.572 11:16:49 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:34:42.572 11:16:49 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:34:42.572 11:16:49 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:42.572 11:16:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:34:42.572 11:16:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 3775229 1 00:34:42.572 11:16:49 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 3775229 1 idle 00:34:42.572 11:16:49 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=3775229 00:34:42.572 11:16:49 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:42.572 11:16:49 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:42.572 11:16:49 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:34:42.572 11:16:49 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:34:42.572 11:16:49 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:42.572 11:16:49 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:42.572 11:16:49 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:42.572 11:16:49 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:34:42.572 11:16:49 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 3775229 -w 256 00:34:42.832 11:16:49 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='3775232 root 20 0 20.1t 202496 35840 S 0.0 0.3 0:00.00 reactor_1' 00:34:42.832 11:16:49 
reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 3775232 root 20 0 20.1t 202496 35840 S 0.0 0.3 0:00.00 reactor_1 00:34:42.832 11:16:49 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:42.832 11:16:49 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:42.832 11:16:49 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:34:42.832 11:16:49 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:34:42.832 11:16:49 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:34:42.832 11:16:49 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:34:42.832 11:16:49 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:34:42.832 11:16:49 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:42.832 11:16:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:34:42.832 11:16:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 3775229 2 00:34:42.832 11:16:49 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 3775229 2 idle 00:34:42.832 11:16:49 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=3775229 00:34:42.832 11:16:49 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:34:42.832 11:16:49 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:42.832 11:16:49 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:34:42.832 11:16:49 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:34:42.832 11:16:49 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:42.832 11:16:49 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:42.832 11:16:49 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:42.832 11:16:49 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 
1 -p 3775229 -w 256 00:34:42.832 11:16:49 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:34:42.832 11:16:49 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='3775233 root 20 0 20.1t 202496 35840 S 0.0 0.3 0:00.00 reactor_2' 00:34:42.832 11:16:49 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 3775233 root 20 0 20.1t 202496 35840 S 0.0 0.3 0:00.00 reactor_2 00:34:42.832 11:16:49 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:42.832 11:16:49 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:42.832 11:16:49 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:34:42.832 11:16:49 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:34:42.832 11:16:49 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:34:42.832 11:16:49 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:34:42.832 11:16:49 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:34:42.832 11:16:49 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:42.832 11:16:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:34:42.832 11:16:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:34:43.091 [2024-07-25 11:16:50.148248] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:34:43.091 [2024-07-25 11:16:50.148511] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 
00:34:43.091 [2024-07-25 11:16:50.148772] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:34:43.091 11:16:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:34:43.351 [2024-07-25 11:16:50.380716] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:34:43.351 [2024-07-25 11:16:50.381005] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:34:43.351 11:16:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:34:43.351 11:16:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 3775229 0 00:34:43.351 11:16:50 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 3775229 0 busy 00:34:43.351 11:16:50 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=3775229 00:34:43.351 11:16:50 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:43.351 11:16:50 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:43.351 11:16:50 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:34:43.351 11:16:50 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:43.351 11:16:50 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:43.351 11:16:50 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:43.351 11:16:50 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 3775229 -w 256 00:34:43.351 11:16:50 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:34:43.610 11:16:50 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='3775229 root 20 0 20.1t 205184 35840 R 99.9 0.3 0:01.67 reactor_0' 00:34:43.610 11:16:50 reactor_set_interrupt -- 
interrupt/common.sh@25 -- # echo 3775229 root 20 0 20.1t 205184 35840 R 99.9 0.3 0:01.67 reactor_0 00:34:43.610 11:16:50 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:43.610 11:16:50 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:43.610 11:16:50 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:34:43.610 11:16:50 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:34:43.610 11:16:50 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:34:43.610 11:16:50 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:34:43.610 11:16:50 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:34:43.610 11:16:50 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:43.610 11:16:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:34:43.610 11:16:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 3775229 2 00:34:43.610 11:16:50 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 3775229 2 busy 00:34:43.610 11:16:50 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=3775229 00:34:43.610 11:16:50 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:34:43.610 11:16:50 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:43.610 11:16:50 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:34:43.610 11:16:50 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:43.610 11:16:50 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:43.610 11:16:50 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:43.610 11:16:50 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 3775229 -w 256 00:34:43.610 11:16:50 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:34:43.870 
11:16:50 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='3775233 root 20 0 20.1t 205184 35840 R 99.9 0.3 0:00.36 reactor_2' 00:34:43.870 11:16:50 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 3775233 root 20 0 20.1t 205184 35840 R 99.9 0.3 0:00.36 reactor_2 00:34:43.870 11:16:50 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:43.870 11:16:50 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:43.870 11:16:50 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:34:43.870 11:16:50 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:34:43.870 11:16:50 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:34:43.870 11:16:50 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:34:43.870 11:16:50 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:34:43.870 11:16:50 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:43.870 11:16:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:34:43.870 [2024-07-25 11:16:50.966392] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
00:34:43.870 [2024-07-25 11:16:50.966548] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:34:43.870 11:16:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:34:43.870 11:16:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 3775229 2 00:34:43.870 11:16:50 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 3775229 2 idle 00:34:43.870 11:16:50 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=3775229 00:34:43.870 11:16:50 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:34:43.870 11:16:50 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:43.870 11:16:50 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:34:43.870 11:16:50 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:34:44.129 11:16:50 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:44.129 11:16:50 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:44.129 11:16:50 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:44.129 11:16:50 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 3775229 -w 256 00:34:44.129 11:16:50 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:34:44.129 11:16:51 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='3775233 root 20 0 20.1t 205184 35840 S 0.0 0.3 0:00.58 reactor_2' 00:34:44.129 11:16:51 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 3775233 root 20 0 20.1t 205184 35840 S 0.0 0.3 0:00.58 reactor_2 00:34:44.129 11:16:51 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:44.129 11:16:51 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:44.129 11:16:51 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:34:44.129 11:16:51 
reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:34:44.129 11:16:51 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:34:44.129 11:16:51 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:34:44.129 11:16:51 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:34:44.129 11:16:51 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:44.129 11:16:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:34:44.389 [2024-07-25 11:16:51.375494] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:34:44.389 [2024-07-25 11:16:51.375669] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 00:34:44.389 [2024-07-25 11:16:51.375704] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:34:44.389 11:16:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:34:44.389 11:16:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 3775229 0 00:34:44.389 11:16:51 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 3775229 0 idle 00:34:44.389 11:16:51 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=3775229 00:34:44.389 11:16:51 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:44.389 11:16:51 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:44.389 11:16:51 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:34:44.389 11:16:51 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:34:44.389 11:16:51 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:34:44.389 11:16:51 
reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:34:44.389 11:16:51 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:34:44.389 11:16:51 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 3775229 -w 256 00:34:44.389 11:16:51 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:34:44.648 11:16:51 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor='3775229 root 20 0 20.1t 205184 35840 S 6.7 0.3 0:02.48 reactor_0' 00:34:44.648 11:16:51 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 3775229 root 20 0 20.1t 205184 35840 S 6.7 0.3 0:02.48 reactor_0 00:34:44.648 11:16:51 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:34:44.648 11:16:51 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:34:44.648 11:16:51 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=6.7 00:34:44.648 11:16:51 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=6 00:34:44.648 11:16:51 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:34:44.648 11:16:51 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:34:44.648 11:16:51 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 6 -gt 30 ]] 00:34:44.648 11:16:51 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:34:44.648 11:16:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:34:44.648 11:16:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:34:44.648 11:16:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:34:44.648 11:16:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 3775229 00:34:44.648 11:16:51 reactor_set_interrupt -- common/autotest_common.sh@950 -- # '[' -z 3775229 ']' 00:34:44.648 11:16:51 reactor_set_interrupt -- common/autotest_common.sh@954 -- # kill -0 
3775229 00:34:44.648 11:16:51 reactor_set_interrupt -- common/autotest_common.sh@955 -- # uname 00:34:44.648 11:16:51 reactor_set_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:44.648 11:16:51 reactor_set_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3775229 00:34:44.648 11:16:51 reactor_set_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:44.648 11:16:51 reactor_set_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:44.648 11:16:51 reactor_set_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3775229' 00:34:44.648 killing process with pid 3775229 00:34:44.648 11:16:51 reactor_set_interrupt -- common/autotest_common.sh@969 -- # kill 3775229 00:34:44.648 11:16:51 reactor_set_interrupt -- common/autotest_common.sh@974 -- # wait 3775229 00:34:47.189 11:16:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:34:47.189 11:16:53 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:34:47.189 00:34:47.189 real 0m13.836s 00:34:47.189 user 0m14.006s 00:34:47.189 sys 0m2.397s 00:34:47.189 11:16:53 reactor_set_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:47.189 11:16:53 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:47.189 ************************************ 00:34:47.189 END TEST reactor_set_interrupt 00:34:47.189 ************************************ 00:34:47.189 11:16:53 -- spdk/autotest.sh@198 -- # run_test reap_unregistered_poller /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:34:47.189 11:16:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:47.189 11:16:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:47.189 11:16:53 -- common/autotest_common.sh@10 -- # set +x 00:34:47.189 
************************************ 00:34:47.189 START TEST reap_unregistered_poller 00:34:47.189 ************************************ 00:34:47.189 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:34:47.189 * Looking for test storage... 00:34:47.189 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:47.189 11:16:53 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/interrupt_common.sh 00:34:47.189 11:16:53 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:34:47.189 11:16:53 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:47.189 11:16:53 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # testdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:47.189 11:16:53 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/../.. 
00:34:47.189 11:16:53 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:34:47.189 11:16:53 reap_unregistered_poller -- interrupt/interrupt_common.sh@7 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh 00:34:47.189 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:34:47.189 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@34 -- # set -e 00:34:47.189 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:34:47.189 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@36 -- # shopt -s extglob 00:34:47.189 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:34:47.189 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/crypto-phy-autotest/spdk/../output ']' 00:34:47.189 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh ]] 00:34:47.189 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=y 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@7 -- # 
CONFIG_PREFIX=/usr/local 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@22 -- # CONFIG_CET=n 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=y 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:34:47.189 
11:16:53 reap_unregistered_poller -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@37 -- # CONFIG_CRYPTO=y 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:34:47.189 11:16:53 reap_unregistered_poller -- 
common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@65 
-- # CONFIG_APPS=y 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@70 -- # CONFIG_FC=n 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:34:47.189 11:16:53 reap_unregistered_poller -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:34:47.190 11:16:53 reap_unregistered_poller -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:34:47.190 11:16:53 reap_unregistered_poller -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:34:47.190 11:16:53 reap_unregistered_poller -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=y 00:34:47.190 11:16:53 reap_unregistered_poller -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:34:47.190 11:16:53 reap_unregistered_poller -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=y 00:34:47.190 11:16:53 reap_unregistered_poller -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:34:47.190 11:16:53 reap_unregistered_poller -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:34:47.190 11:16:53 reap_unregistered_poller -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=y 00:34:47.190 11:16:53 reap_unregistered_poller -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:34:47.190 11:16:53 reap_unregistered_poller -- common/build_config.sh@83 -- # CONFIG_URING=n 00:34:47.190 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:34:47.190 11:16:53 
reap_unregistered_poller -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:34:47.190 11:16:53 reap_unregistered_poller -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:34:47.190 11:16:53 reap_unregistered_poller -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:34:47.190 11:16:53 reap_unregistered_poller -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:34:47.190 11:16:53 reap_unregistered_poller -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:34:47.190 11:16:53 reap_unregistered_poller -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:34:47.190 11:16:53 reap_unregistered_poller -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:34:47.190 11:16:53 reap_unregistered_poller -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:34:47.190 11:16:53 reap_unregistered_poller -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:34:47.190 11:16:53 reap_unregistered_poller -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:34:47.190 11:16:53 reap_unregistered_poller -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:34:47.190 11:16:53 reap_unregistered_poller -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:34:47.190 11:16:53 reap_unregistered_poller -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:34:47.190 11:16:53 reap_unregistered_poller -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/config.h ]] 00:34:47.190 11:16:53 reap_unregistered_poller -- common/applications.sh@23 -- # [[ #ifndef 
SPDK_CONFIG_H 00:34:47.190 #define SPDK_CONFIG_H 00:34:47.190 #define SPDK_CONFIG_APPS 1 00:34:47.190 #define SPDK_CONFIG_ARCH native 00:34:47.190 #define SPDK_CONFIG_ASAN 1 00:34:47.190 #undef SPDK_CONFIG_AVAHI 00:34:47.190 #undef SPDK_CONFIG_CET 00:34:47.190 #define SPDK_CONFIG_COVERAGE 1 00:34:47.190 #define SPDK_CONFIG_CROSS_PREFIX 00:34:47.190 #define SPDK_CONFIG_CRYPTO 1 00:34:47.190 #define SPDK_CONFIG_CRYPTO_MLX5 1 00:34:47.190 #undef SPDK_CONFIG_CUSTOMOCF 00:34:47.190 #undef SPDK_CONFIG_DAOS 00:34:47.190 #define SPDK_CONFIG_DAOS_DIR 00:34:47.190 #define SPDK_CONFIG_DEBUG 1 00:34:47.190 #define SPDK_CONFIG_DPDK_COMPRESSDEV 1 00:34:47.190 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:34:47.190 #define SPDK_CONFIG_DPDK_INC_DIR 00:34:47.190 #define SPDK_CONFIG_DPDK_LIB_DIR 00:34:47.190 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:34:47.190 #undef SPDK_CONFIG_DPDK_UADK 00:34:47.190 #define SPDK_CONFIG_ENV /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:34:47.190 #define SPDK_CONFIG_EXAMPLES 1 00:34:47.190 #undef SPDK_CONFIG_FC 00:34:47.190 #define SPDK_CONFIG_FC_PATH 00:34:47.190 #define SPDK_CONFIG_FIO_PLUGIN 1 00:34:47.190 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:34:47.190 #undef SPDK_CONFIG_FUSE 00:34:47.190 #undef SPDK_CONFIG_FUZZER 00:34:47.190 #define SPDK_CONFIG_FUZZER_LIB 00:34:47.190 #undef SPDK_CONFIG_GOLANG 00:34:47.190 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:34:47.190 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:34:47.190 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:34:47.190 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:34:47.190 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:34:47.190 #undef SPDK_CONFIG_HAVE_LIBBSD 00:34:47.190 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:34:47.190 #define SPDK_CONFIG_IDXD 1 00:34:47.190 #define SPDK_CONFIG_IDXD_KERNEL 1 00:34:47.190 #define SPDK_CONFIG_IPSEC_MB 1 00:34:47.190 #define SPDK_CONFIG_IPSEC_MB_DIR 
/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:34:47.190 #define SPDK_CONFIG_ISAL 1 00:34:47.190 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:34:47.190 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:34:47.190 #define SPDK_CONFIG_LIBDIR 00:34:47.190 #undef SPDK_CONFIG_LTO 00:34:47.190 #define SPDK_CONFIG_MAX_LCORES 128 00:34:47.190 #define SPDK_CONFIG_NVME_CUSE 1 00:34:47.190 #undef SPDK_CONFIG_OCF 00:34:47.190 #define SPDK_CONFIG_OCF_PATH 00:34:47.190 #define SPDK_CONFIG_OPENSSL_PATH 00:34:47.190 #undef SPDK_CONFIG_PGO_CAPTURE 00:34:47.190 #define SPDK_CONFIG_PGO_DIR 00:34:47.190 #undef SPDK_CONFIG_PGO_USE 00:34:47.190 #define SPDK_CONFIG_PREFIX /usr/local 00:34:47.190 #undef SPDK_CONFIG_RAID5F 00:34:47.190 #undef SPDK_CONFIG_RBD 00:34:47.190 #define SPDK_CONFIG_RDMA 1 00:34:47.190 #define SPDK_CONFIG_RDMA_PROV verbs 00:34:47.190 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:34:47.190 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:34:47.190 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:34:47.190 #define SPDK_CONFIG_SHARED 1 00:34:47.190 #undef SPDK_CONFIG_SMA 00:34:47.190 #define SPDK_CONFIG_TESTS 1 00:34:47.190 #undef SPDK_CONFIG_TSAN 00:34:47.190 #define SPDK_CONFIG_UBLK 1 00:34:47.190 #define SPDK_CONFIG_UBSAN 1 00:34:47.190 #undef SPDK_CONFIG_UNIT_TESTS 00:34:47.190 #undef SPDK_CONFIG_URING 00:34:47.190 #define SPDK_CONFIG_URING_PATH 00:34:47.190 #undef SPDK_CONFIG_URING_ZNS 00:34:47.190 #undef SPDK_CONFIG_USDT 00:34:47.190 #define SPDK_CONFIG_VBDEV_COMPRESS 1 00:34:47.190 #define SPDK_CONFIG_VBDEV_COMPRESS_MLX5 1 00:34:47.190 #undef SPDK_CONFIG_VFIO_USER 00:34:47.190 #define SPDK_CONFIG_VFIO_USER_DIR 00:34:47.190 #define SPDK_CONFIG_VHOST 1 00:34:47.190 #define SPDK_CONFIG_VIRTIO 1 00:34:47.190 #undef SPDK_CONFIG_VTUNE 00:34:47.190 #define SPDK_CONFIG_VTUNE_DIR 00:34:47.190 #define SPDK_CONFIG_WERROR 1 00:34:47.190 #define SPDK_CONFIG_WPDK_DIR 00:34:47.190 #undef SPDK_CONFIG_XNVME 00:34:47.190 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ 
\S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:34:47.190 11:16:53 reap_unregistered_poller -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:34:47.190 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:34:47.190 11:16:53 reap_unregistered_poller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:47.190 11:16:53 reap_unregistered_poller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:47.190 11:16:53 reap_unregistered_poller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:47.190 11:16:53 reap_unregistered_poller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.190 11:16:53 reap_unregistered_poller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.190 11:16:53 reap_unregistered_poller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.190 11:16:53 reap_unregistered_poller -- paths/export.sh@5 -- # export PATH 00:34:47.190 11:16:53 reap_unregistered_poller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:47.190 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:34:47.190 11:16:53 reap_unregistered_poller -- pm/common@6 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:34:47.190 11:16:53 reap_unregistered_poller -- pm/common@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:34:47.190 11:16:53 reap_unregistered_poller -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:34:47.190 11:16:53 reap_unregistered_poller -- pm/common@7 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/../../../ 00:34:47.190 11:16:53 reap_unregistered_poller -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:34:47.190 11:16:53 reap_unregistered_poller -- pm/common@64 -- # TEST_TAG=N/A 00:34:47.190 11:16:53 reap_unregistered_poller -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/crypto-phy-autotest/spdk/.run_test_name 00:34:47.190 11:16:53 reap_unregistered_poller -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power 00:34:47.190 11:16:53 reap_unregistered_poller -- pm/common@68 -- # uname -s 00:34:47.190 11:16:53 reap_unregistered_poller -- pm/common@68 -- # PM_OS=Linux 00:34:47.190 11:16:53 reap_unregistered_poller -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:34:47.190 11:16:53 reap_unregistered_poller -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:34:47.190 11:16:53 reap_unregistered_poller -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:34:47.190 11:16:53 reap_unregistered_poller -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:34:47.190 11:16:53 reap_unregistered_poller -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:34:47.190 11:16:53 reap_unregistered_poller -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:34:47.191 11:16:53 reap_unregistered_poller -- pm/common@76 -- # SUDO[0]= 00:34:47.191 11:16:53 reap_unregistered_poller -- pm/common@76 -- # SUDO[1]='sudo -E' 00:34:47.191 11:16:53 reap_unregistered_poller -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:34:47.191 11:16:53 reap_unregistered_poller -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:34:47.191 11:16:53 reap_unregistered_poller -- pm/common@81 -- # [[ Linux == Linux ]] 00:34:47.191 11:16:53 reap_unregistered_poller -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:34:47.191 11:16:53 reap_unregistered_poller -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:34:47.191 11:16:53 reap_unregistered_poller -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:34:47.191 11:16:53 reap_unregistered_poller -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:34:47.191 11:16:53 reap_unregistered_poller -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power ]] 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@58 -- # : 1 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@62 -- # : 0 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@64 -- # : 0 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@66 -- # : 1 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@68 -- # : 0 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@70 -- # : 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@72 -- # : 0 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@74 -- # : 1 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@76 -- # : 0 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@78 -- # : 0 00:34:47.191 11:16:53 reap_unregistered_poller -- 
common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@80 -- # : 0 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@82 -- # : 0 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@84 -- # : 0 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@86 -- # : 0 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@88 -- # : 0 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@90 -- # : 0 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@92 -- # : 0 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@94 -- # : 0 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@96 -- # : 0 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@98 -- # : 0 00:34:47.191 11:16:53 reap_unregistered_poller -- 
common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@100 -- # : 0 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@102 -- # : rdma 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@104 -- # : 0 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@106 -- # : 0 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@108 -- # : 1 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@110 -- # : 0 00:34:47.191 11:16:53 reap_unregistered_poller -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@112 -- # : 0 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@114 -- # : 0 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@116 -- # : 0 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@118 -- # : 1 00:34:47.191 11:16:54 reap_unregistered_poller -- 
common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@120 -- # : 1 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@122 -- # : 1 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@124 -- # : 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@126 -- # : 0 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@128 -- # : 1 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@130 -- # : 0 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@132 -- # : 0 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@134 -- # : 0 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@136 -- # : 0 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@138 -- # : 00:34:47.191 11:16:54 reap_unregistered_poller -- 
common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@140 -- # : true 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@142 -- # : 0 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@144 -- # : 0 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@146 -- # : 0 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@148 -- # : 0 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@150 -- # : 0 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@152 -- # : 0 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@154 -- # : 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@156 -- # : 0 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@158 -- # : 0 00:34:47.191 11:16:54 reap_unregistered_poller -- 
common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@160 -- # : 0 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@162 -- # : 1 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@164 -- # : 0 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@166 -- # : 0 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@169 -- # : 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@171 -- # : 0 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@173 -- # : 0 00:34:47.191 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:34:47.192 11:16:54 reap_unregistered_poller -- 
common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@183 -- # export 
PCI_BLOCK_SYNC_ON_RESET=yes 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@200 -- # 
asan_suppression_file=/var/tmp/asan_suppression_file 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@202 -- # cat 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:34:47.192 11:16:54 reap_unregistered_poller -- 
common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@265 -- # export valgrind= 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@265 -- # valgrind= 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@271 -- # uname -s 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@274 -- # [[ 1 -eq 1 ]] 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@278 -- # export 
HUGE_EVEN_ALLOC=yes 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@278 -- # HUGE_EVEN_ALLOC=yes 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@281 -- # MAKE=make 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j112 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@301 -- # TEST_MODE= 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@320 -- # [[ -z 3776384 ]] 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@320 -- # kill -0 3776384 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@333 -- # local mount target_dir 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.1Hbwyj 00:34:47.192 11:16:54 
reap_unregistered_poller -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt /tmp/spdk.1Hbwyj/tests/interrupt /tmp/spdk.1Hbwyj 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@329 -- # df -T 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=954302464 
00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=4330127360 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:34:47.192 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=54884569088 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=61742305280 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=6857736192 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=30866341888 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871150592 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=4808704 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@364 
-- # avails["$mount"]=12338671616 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=12348461056 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=9789440 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=30870007808 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=30871154688 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=1146880 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@364 -- # avails["$mount"]=6174224384 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@364 -- # sizes["$mount"]=6174228480 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:34:47.193 * Looking for test storage... 
00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@370 -- # local target_space new_size 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@374 -- # mount=/ 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@376 -- # target_space=54884569088 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@383 -- # new_size=9072328704 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:47.193 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@391 -- # return 0 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@1682 -- # set -o errtrace 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@1687 -- # true 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@1689 -- # xtrace_fd 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@27 -- # exec 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@29 -- # exec 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@31 -- # xtrace_restore 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@18 -- # set -x 00:34:47.193 11:16:54 reap_unregistered_poller -- interrupt/interrupt_common.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/common.sh 00:34:47.193 11:16:54 reap_unregistered_poller -- interrupt/interrupt_common.sh@10 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:34:47.193 11:16:54 reap_unregistered_poller -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:34:47.193 11:16:54 reap_unregistered_poller -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:34:47.193 11:16:54 reap_unregistered_poller -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:34:47.193 11:16:54 reap_unregistered_poller -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:34:47.193 11:16:54 reap_unregistered_poller -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:34:47.193 11:16:54 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:34:47.193 11:16:54 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # 
PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:34:47.193 11:16:54 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:34:47.193 11:16:54 reap_unregistered_poller -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:47.193 11:16:54 reap_unregistered_poller -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:34:47.193 11:16:54 reap_unregistered_poller -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=3776434 00:34:47.193 11:16:54 reap_unregistered_poller -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:47.193 11:16:54 reap_unregistered_poller -- interrupt/interrupt_common.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:34:47.193 11:16:54 reap_unregistered_poller -- interrupt/interrupt_common.sh@26 -- # waitforlisten 3776434 /var/tmp/spdk.sock 00:34:47.193 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@831 -- # '[' -z 3776434 ']' 00:34:47.194 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:47.194 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:47.194 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:47.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:47.194 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:47.194 11:16:54 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:34:47.194 [2024-07-25 11:16:54.164629] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:34:47.194 [2024-07-25 11:16:54.164747] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3776434 ] 00:34:47.194 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:47.194 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:47.194 (message pair repeated for devices 0000:3d:01.1 through 0000:3f:02.7) 00:34:47.454 [2024-07-25 11:16:54.387578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:47.713 [2024-07-25 11:16:54.675206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:47.713 [2024-07-25 11:16:54.675293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:47.713 [2024-07-25 11:16:54.675309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:48.282 [2024-07-25 11:16:55.153498] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:34:48.282 11:16:55 reap_unregistered_poller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:48.282 11:16:55 reap_unregistered_poller -- common/autotest_common.sh@864 -- # return 0 00:34:48.282 11:16:55 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:34:48.282 11:16:55 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:34:48.282 11:16:55 reap_unregistered_poller -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.282 11:16:55 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:34:48.282 11:16:55 reap_unregistered_poller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.282 11:16:55 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:34:48.282 "name": "app_thread", 00:34:48.282 "id": 1, 00:34:48.282 "active_pollers": [], 00:34:48.282 "timed_pollers": [ 00:34:48.282 { 00:34:48.282 "name": "rpc_subsystem_poll_servers", 00:34:48.282 "id": 1, 00:34:48.282 "state": "waiting", 00:34:48.282 "run_count": 0, 00:34:48.282 "busy_count": 0, 00:34:48.282 "period_ticks": 10000000 00:34:48.282 } 00:34:48.282 ], 00:34:48.282 "paused_pollers": [] 00:34:48.282 }' 00:34:48.282 11:16:55 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:34:48.282 11:16:55 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:34:48.282 11:16:55 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:34:48.282 11:16:55 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:34:48.282 11:16:55 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll_servers 00:34:48.282 11:16:55 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:34:48.282 
11:16:55 reap_unregistered_poller -- interrupt/common.sh@75 -- # uname -s 00:34:48.282 11:16:55 reap_unregistered_poller -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:34:48.282 11:16:55 reap_unregistered_poller -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:34:48.282 5000+0 records in 00:34:48.282 5000+0 records out 00:34:48.282 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0248588 s, 412 MB/s 00:34:48.282 11:16:55 reap_unregistered_poller -- interrupt/common.sh@77 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:34:48.541 AIO0 00:34:48.541 11:16:55 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:48.801 11:16:55 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:34:49.061 11:16:55 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:34:49.061 11:16:55 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:34:49.061 11:16:55 reap_unregistered_poller -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.061 11:16:55 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:34:49.061 11:16:55 reap_unregistered_poller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.061 11:16:55 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:34:49.061 "name": "app_thread", 00:34:49.061 "id": 1, 00:34:49.061 "active_pollers": [], 00:34:49.061 "timed_pollers": [ 00:34:49.061 { 00:34:49.061 "name": "rpc_subsystem_poll_servers", 00:34:49.061 "id": 1, 00:34:49.061 "state": "waiting", 00:34:49.061 "run_count": 0, 00:34:49.061 "busy_count": 0, 
00:34:49.061 "period_ticks": 10000000 00:34:49.061 } 00:34:49.061 ], 00:34:49.061 "paused_pollers": [] 00:34:49.061 }' 00:34:49.061 11:16:55 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:34:49.061 11:16:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:34:49.061 11:16:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:34:49.061 11:16:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:34:49.061 11:16:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll_servers 00:34:49.061 11:16:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll_servers == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l\_\s\e\r\v\e\r\s ]] 00:34:49.061 11:16:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:34:49.061 11:16:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 3776434 00:34:49.061 11:16:56 reap_unregistered_poller -- common/autotest_common.sh@950 -- # '[' -z 3776434 ']' 00:34:49.061 11:16:56 reap_unregistered_poller -- common/autotest_common.sh@954 -- # kill -0 3776434 00:34:49.061 11:16:56 reap_unregistered_poller -- common/autotest_common.sh@955 -- # uname 00:34:49.061 11:16:56 reap_unregistered_poller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:34:49.061 11:16:56 reap_unregistered_poller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3776434 00:34:49.061 11:16:56 reap_unregistered_poller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:34:49.061 11:16:56 reap_unregistered_poller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:34:49.061 11:16:56 reap_unregistered_poller -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 3776434' 00:34:49.061 killing process with pid 3776434 00:34:49.061 11:16:56 reap_unregistered_poller -- common/autotest_common.sh@969 -- # kill 3776434 00:34:49.061 11:16:56 reap_unregistered_poller -- common/autotest_common.sh@974 -- # wait 3776434 00:34:50.966 11:16:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:34:50.966 11:16:57 reap_unregistered_poller -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:34:50.966 00:34:50.966 real 0m3.940s 00:34:50.966 user 0m3.461s 00:34:50.966 sys 0m0.818s 00:34:50.966 11:16:57 reap_unregistered_poller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:50.966 11:16:57 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:34:50.966 ************************************ 00:34:50.966 END TEST reap_unregistered_poller 00:34:50.966 ************************************ 00:34:50.966 11:16:57 -- spdk/autotest.sh@202 -- # uname -s 00:34:50.966 11:16:57 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:34:50.966 11:16:57 -- spdk/autotest.sh@203 -- # [[ 1 -eq 1 ]] 00:34:50.966 11:16:57 -- spdk/autotest.sh@209 -- # [[ 1 -eq 0 ]] 00:34:50.966 11:16:57 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 00:34:50.966 11:16:57 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:34:50.966 11:16:57 -- spdk/autotest.sh@264 -- # timing_exit lib 00:34:50.966 11:16:57 -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:50.966 11:16:57 -- common/autotest_common.sh@10 -- # set +x 00:34:50.966 11:16:57 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:34:50.966 11:16:57 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:34:50.966 11:16:57 -- spdk/autotest.sh@283 -- # '[' 0 -eq 1 ']' 00:34:50.966 11:16:57 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:34:50.966 11:16:57 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:34:50.966 11:16:57 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:34:50.966 11:16:57 
-- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:34:50.966 11:16:57 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:34:50.966 11:16:57 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:34:50.966 11:16:57 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:34:50.966 11:16:57 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:34:50.966 11:16:57 -- spdk/autotest.sh@351 -- # '[' 1 -eq 1 ']' 00:34:50.966 11:16:57 -- spdk/autotest.sh@352 -- # run_test compress_compdev /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh compdev 00:34:50.966 11:16:57 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:34:50.966 11:16:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:50.966 11:16:57 -- common/autotest_common.sh@10 -- # set +x 00:34:50.966 ************************************ 00:34:50.966 START TEST compress_compdev 00:34:50.966 ************************************ 00:34:50.966 11:16:57 compress_compdev -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh compdev 00:34:50.966 * Looking for test storage... 
00:34:50.966 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress 00:34:50.966 11:16:57 compress_compdev -- compress/compress.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:34:50.966 11:16:57 compress_compdev -- nvmf/common.sh@7 -- # uname -s 00:34:50.966 11:16:57 compress_compdev -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:50.966 11:16:57 compress_compdev -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:50.966 11:16:57 compress_compdev -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:50.966 11:16:57 compress_compdev -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:50.966 11:16:57 compress_compdev -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:50.966 11:16:57 compress_compdev -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:50.966 11:16:57 compress_compdev -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:50.966 11:16:57 compress_compdev -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:50.966 11:16:57 compress_compdev -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:50.966 11:16:57 compress_compdev -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:50.966 11:16:57 compress_compdev -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:34:50.966 11:16:57 compress_compdev -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:34:50.966 11:16:57 compress_compdev -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:50.966 11:16:57 compress_compdev -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:50.966 11:16:57 compress_compdev -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:34:50.966 11:16:57 compress_compdev -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:50.966 11:16:57 compress_compdev -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:34:50.966 11:16:57 compress_compdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:50.966 11:16:57 compress_compdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:50.966 11:16:57 compress_compdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:50.967 11:16:57 compress_compdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:50.967 11:16:57 compress_compdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:50.967 11:16:57 compress_compdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:50.967 11:16:57 compress_compdev -- 
paths/export.sh@5 -- # export PATH 00:34:50.967 11:16:57 compress_compdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:50.967 11:16:57 compress_compdev -- nvmf/common.sh@47 -- # : 0 00:34:50.967 11:16:57 compress_compdev -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:50.967 11:16:57 compress_compdev -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:50.967 11:16:57 compress_compdev -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:50.967 11:16:57 compress_compdev -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:50.967 11:16:57 compress_compdev -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:50.967 11:16:57 compress_compdev -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:50.967 11:16:57 compress_compdev -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:50.967 11:16:57 compress_compdev -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:50.967 11:16:57 compress_compdev -- compress/compress.sh@17 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:34:50.967 11:16:57 compress_compdev -- compress/compress.sh@81 -- # mkdir -p /tmp/pmem 00:34:50.967 11:16:57 compress_compdev -- compress/compress.sh@82 -- # test_type=compdev 00:34:50.967 11:16:57 compress_compdev -- compress/compress.sh@86 -- # run_bdevperf 32 4096 3 00:34:50.967 11:16:57 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:34:50.967 11:16:57 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=3777152 00:34:50.967 11:16:58 compress_compdev -- 
compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:50.967 11:16:58 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 3777152 00:34:50.967 11:16:57 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:34:50.967 11:16:58 compress_compdev -- common/autotest_common.sh@831 -- # '[' -z 3777152 ']' 00:34:50.967 11:16:58 compress_compdev -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:50.967 11:16:58 compress_compdev -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:50.967 11:16:58 compress_compdev -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:50.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:50.967 11:16:58 compress_compdev -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:50.967 11:16:58 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:34:51.227 [2024-07-25 11:16:58.102753] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:34:51.227 [2024-07-25 11:16:58.102874] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3777152 ] 00:34:51.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.227 EAL: Requested device 0000:3d:01.0 cannot be used 00:34:51.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.227 EAL: Requested device 0000:3d:01.1 cannot be used 00:34:51.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.227 EAL: Requested device 0000:3d:01.2 cannot be used 00:34:51.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.227 EAL: Requested device 0000:3d:01.3 cannot be used 00:34:51.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.227 EAL: Requested device 0000:3d:01.4 cannot be used 00:34:51.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.227 EAL: Requested device 0000:3d:01.5 cannot be used 00:34:51.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.227 EAL: Requested device 0000:3d:01.6 cannot be used 00:34:51.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.227 EAL: Requested device 0000:3d:01.7 cannot be used 00:34:51.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.227 EAL: Requested device 0000:3d:02.0 cannot be used 00:34:51.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.227 EAL: Requested device 0000:3d:02.1 cannot be used 00:34:51.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.227 EAL: Requested device 0000:3d:02.2 cannot be used 00:34:51.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.227 EAL: Requested device 0000:3d:02.3 cannot be used 
00:34:51.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.227 EAL: Requested device 0000:3d:02.4 cannot be used 00:34:51.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.227 EAL: Requested device 0000:3d:02.5 cannot be used 00:34:51.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.227 EAL: Requested device 0000:3d:02.6 cannot be used 00:34:51.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.227 EAL: Requested device 0000:3d:02.7 cannot be used 00:34:51.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.227 EAL: Requested device 0000:3f:01.0 cannot be used 00:34:51.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.227 EAL: Requested device 0000:3f:01.1 cannot be used 00:34:51.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.227 EAL: Requested device 0000:3f:01.2 cannot be used 00:34:51.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.227 EAL: Requested device 0000:3f:01.3 cannot be used 00:34:51.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.227 EAL: Requested device 0000:3f:01.4 cannot be used 00:34:51.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.227 EAL: Requested device 0000:3f:01.5 cannot be used 00:34:51.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.227 EAL: Requested device 0000:3f:01.6 cannot be used 00:34:51.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.227 EAL: Requested device 0000:3f:01.7 cannot be used 00:34:51.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.227 EAL: Requested device 0000:3f:02.0 cannot be used 00:34:51.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.227 EAL: Requested device 0000:3f:02.1 cannot be used 00:34:51.227 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.227 EAL: Requested device 0000:3f:02.2 cannot be used 00:34:51.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.227 EAL: Requested device 0000:3f:02.3 cannot be used 00:34:51.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.227 EAL: Requested device 0000:3f:02.4 cannot be used 00:34:51.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.227 EAL: Requested device 0000:3f:02.5 cannot be used 00:34:51.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.227 EAL: Requested device 0000:3f:02.6 cannot be used 00:34:51.227 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:34:51.227 EAL: Requested device 0000:3f:02.7 cannot be used 00:34:51.227 [2024-07-25 11:16:58.317437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:51.487 [2024-07-25 11:16:58.604146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:51.487 [2024-07-25 11:16:58.604147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:53.393 [2024-07-25 11:16:59.999203] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:34:53.651 11:17:00 compress_compdev -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:53.651 11:17:00 compress_compdev -- common/autotest_common.sh@864 -- # return 0 00:34:53.651 11:17:00 compress_compdev -- compress/compress.sh@74 -- # create_vols 00:34:53.651 11:17:00 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:53.651 11:17:00 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:34:57.002 [2024-07-25 11:17:03.855635] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d220 PMD being used: compress_qat 00:34:57.002 11:17:03 
compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:34:57.002 11:17:03 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:34:57.002 11:17:03 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:34:57.002 11:17:03 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:34:57.002 11:17:03 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:34:57.002 11:17:03 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:34:57.002 11:17:03 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:57.261 11:17:04 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:34:57.261 [ 00:34:57.261 { 00:34:57.261 "name": "Nvme0n1", 00:34:57.261 "aliases": [ 00:34:57.261 "df6716ec-f567-43a1-8355-0a969864847b" 00:34:57.261 ], 00:34:57.261 "product_name": "NVMe disk", 00:34:57.261 "block_size": 512, 00:34:57.261 "num_blocks": 3907029168, 00:34:57.261 "uuid": "df6716ec-f567-43a1-8355-0a969864847b", 00:34:57.261 "assigned_rate_limits": { 00:34:57.261 "rw_ios_per_sec": 0, 00:34:57.261 "rw_mbytes_per_sec": 0, 00:34:57.261 "r_mbytes_per_sec": 0, 00:34:57.261 "w_mbytes_per_sec": 0 00:34:57.261 }, 00:34:57.261 "claimed": false, 00:34:57.261 "zoned": false, 00:34:57.261 "supported_io_types": { 00:34:57.261 "read": true, 00:34:57.261 "write": true, 00:34:57.261 "unmap": true, 00:34:57.261 "flush": true, 00:34:57.261 "reset": true, 00:34:57.261 "nvme_admin": true, 00:34:57.261 "nvme_io": true, 00:34:57.261 "nvme_io_md": false, 00:34:57.261 "write_zeroes": true, 00:34:57.261 "zcopy": false, 00:34:57.261 "get_zone_info": false, 00:34:57.261 "zone_management": false, 00:34:57.261 "zone_append": false, 00:34:57.261 "compare": false, 00:34:57.261 "compare_and_write": false, 00:34:57.261 
"abort": true, 00:34:57.261 "seek_hole": false, 00:34:57.261 "seek_data": false, 00:34:57.261 "copy": false, 00:34:57.261 "nvme_iov_md": false 00:34:57.261 }, 00:34:57.261 "driver_specific": { 00:34:57.261 "nvme": [ 00:34:57.261 { 00:34:57.261 "pci_address": "0000:d8:00.0", 00:34:57.261 "trid": { 00:34:57.261 "trtype": "PCIe", 00:34:57.261 "traddr": "0000:d8:00.0" 00:34:57.261 }, 00:34:57.261 "ctrlr_data": { 00:34:57.261 "cntlid": 0, 00:34:57.261 "vendor_id": "0x8086", 00:34:57.262 "model_number": "INTEL SSDPE2KX020T8", 00:34:57.262 "serial_number": "BTLJ125505KA2P0BGN", 00:34:57.262 "firmware_revision": "VDV10170", 00:34:57.262 "oacs": { 00:34:57.262 "security": 0, 00:34:57.262 "format": 1, 00:34:57.262 "firmware": 1, 00:34:57.262 "ns_manage": 1 00:34:57.262 }, 00:34:57.262 "multi_ctrlr": false, 00:34:57.262 "ana_reporting": false 00:34:57.262 }, 00:34:57.262 "vs": { 00:34:57.262 "nvme_version": "1.2" 00:34:57.262 }, 00:34:57.262 "ns_data": { 00:34:57.262 "id": 1, 00:34:57.262 "can_share": false 00:34:57.262 } 00:34:57.262 } 00:34:57.262 ], 00:34:57.262 "mp_policy": "active_passive" 00:34:57.262 } 00:34:57.262 } 00:34:57.262 ] 00:34:57.262 11:17:04 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:34:57.262 11:17:04 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:34:57.521 [2024-07-25 11:17:04.560546] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d3e0 PMD being used: compress_qat 00:34:58.899 9061b747-5c9a-4282-99c5-494a30ecefea 00:34:58.899 11:17:05 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:34:58.899 c73a759b-17d6-46a8-a227-f1a699bef968 00:34:58.899 11:17:05 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:34:58.899 11:17:05 compress_compdev -- 
common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:34:58.899 11:17:05 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:34:58.899 11:17:05 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:34:58.899 11:17:05 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:34:58.899 11:17:05 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:34:58.899 11:17:05 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:59.158 11:17:06 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:34:59.417 [ 00:34:59.417 { 00:34:59.417 "name": "c73a759b-17d6-46a8-a227-f1a699bef968", 00:34:59.417 "aliases": [ 00:34:59.417 "lvs0/lv0" 00:34:59.417 ], 00:34:59.417 "product_name": "Logical Volume", 00:34:59.417 "block_size": 512, 00:34:59.417 "num_blocks": 204800, 00:34:59.417 "uuid": "c73a759b-17d6-46a8-a227-f1a699bef968", 00:34:59.417 "assigned_rate_limits": { 00:34:59.417 "rw_ios_per_sec": 0, 00:34:59.417 "rw_mbytes_per_sec": 0, 00:34:59.417 "r_mbytes_per_sec": 0, 00:34:59.417 "w_mbytes_per_sec": 0 00:34:59.417 }, 00:34:59.417 "claimed": false, 00:34:59.417 "zoned": false, 00:34:59.417 "supported_io_types": { 00:34:59.417 "read": true, 00:34:59.417 "write": true, 00:34:59.417 "unmap": true, 00:34:59.417 "flush": false, 00:34:59.417 "reset": true, 00:34:59.417 "nvme_admin": false, 00:34:59.417 "nvme_io": false, 00:34:59.417 "nvme_io_md": false, 00:34:59.417 "write_zeroes": true, 00:34:59.417 "zcopy": false, 00:34:59.417 "get_zone_info": false, 00:34:59.417 "zone_management": false, 00:34:59.417 "zone_append": false, 00:34:59.417 "compare": false, 00:34:59.417 "compare_and_write": false, 00:34:59.417 "abort": false, 00:34:59.417 "seek_hole": true, 00:34:59.417 "seek_data": true, 00:34:59.417 "copy": false, 
00:34:59.417 "nvme_iov_md": false 00:34:59.417 }, 00:34:59.417 "driver_specific": { 00:34:59.417 "lvol": { 00:34:59.417 "lvol_store_uuid": "9061b747-5c9a-4282-99c5-494a30ecefea", 00:34:59.417 "base_bdev": "Nvme0n1", 00:34:59.417 "thin_provision": true, 00:34:59.417 "num_allocated_clusters": 0, 00:34:59.417 "snapshot": false, 00:34:59.417 "clone": false, 00:34:59.417 "esnap_clone": false 00:34:59.417 } 00:34:59.417 } 00:34:59.417 } 00:34:59.418 ] 00:34:59.418 11:17:06 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:34:59.418 11:17:06 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:34:59.418 11:17:06 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:34:59.677 [2024-07-25 11:17:06.573879] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:34:59.677 COMP_lvs0/lv0 00:34:59.677 11:17:06 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:34:59.677 11:17:06 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:34:59.677 11:17:06 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:34:59.677 11:17:06 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:34:59.677 11:17:06 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:34:59.677 11:17:06 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:34:59.677 11:17:06 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:34:59.936 11:17:06 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:34:59.936 [ 00:34:59.936 { 00:34:59.936 "name": "COMP_lvs0/lv0", 00:34:59.936 "aliases": [ 00:34:59.936 
"c156acb1-fa0a-5fa7-ae2e-07482f8f29b1" 00:34:59.936 ], 00:34:59.936 "product_name": "compress", 00:34:59.936 "block_size": 512, 00:34:59.936 "num_blocks": 200704, 00:34:59.936 "uuid": "c156acb1-fa0a-5fa7-ae2e-07482f8f29b1", 00:34:59.936 "assigned_rate_limits": { 00:34:59.936 "rw_ios_per_sec": 0, 00:34:59.936 "rw_mbytes_per_sec": 0, 00:34:59.936 "r_mbytes_per_sec": 0, 00:34:59.936 "w_mbytes_per_sec": 0 00:34:59.936 }, 00:34:59.936 "claimed": false, 00:34:59.936 "zoned": false, 00:34:59.936 "supported_io_types": { 00:34:59.936 "read": true, 00:34:59.936 "write": true, 00:34:59.936 "unmap": false, 00:34:59.936 "flush": false, 00:34:59.936 "reset": false, 00:34:59.936 "nvme_admin": false, 00:34:59.936 "nvme_io": false, 00:34:59.936 "nvme_io_md": false, 00:34:59.936 "write_zeroes": true, 00:34:59.936 "zcopy": false, 00:34:59.936 "get_zone_info": false, 00:34:59.936 "zone_management": false, 00:34:59.936 "zone_append": false, 00:34:59.936 "compare": false, 00:34:59.936 "compare_and_write": false, 00:34:59.936 "abort": false, 00:34:59.936 "seek_hole": false, 00:34:59.936 "seek_data": false, 00:34:59.936 "copy": false, 00:34:59.936 "nvme_iov_md": false 00:34:59.936 }, 00:34:59.936 "driver_specific": { 00:34:59.936 "compress": { 00:34:59.936 "name": "COMP_lvs0/lv0", 00:34:59.936 "base_bdev_name": "c73a759b-17d6-46a8-a227-f1a699bef968", 00:34:59.936 "pm_path": "/tmp/pmem/4397e147-be0d-45d3-a3c7-5c95829fc365" 00:34:59.936 } 00:34:59.936 } 00:34:59.936 } 00:34:59.936 ] 00:34:59.936 11:17:07 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:34:59.936 11:17:07 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:35:00.195 [2024-07-25 11:17:07.158842] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e0000101e0 PMD being used: compress_qat 00:35:00.195 [2024-07-25 11:17:07.162205] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d5a0 PMD 
being used: compress_qat 00:35:00.195 Running I/O for 3 seconds... 00:35:03.486 00:35:03.486 Latency(us) 00:35:03.486 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:03.486 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:35:03.486 Verification LBA range: start 0x0 length 0x3100 00:35:03.486 COMP_lvs0/lv0 : 3.00 3825.08 14.94 0.00 0.00 8317.03 135.99 13526.63 00:35:03.486 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:35:03.486 Verification LBA range: start 0x3100 length 0x3100 00:35:03.486 COMP_lvs0/lv0 : 3.00 3951.21 15.43 0.00 0.00 8060.09 125.34 13159.63 00:35:03.486 =================================================================================================================== 00:35:03.486 Total : 7776.29 30.38 0.00 0.00 8186.46 125.34 13526.63 00:35:03.486 0 00:35:03.486 11:17:10 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:35:03.486 11:17:10 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:35:03.486 11:17:10 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:35:03.745 11:17:10 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:35:03.745 11:17:10 compress_compdev -- compress/compress.sh@78 -- # killprocess 3777152 00:35:03.745 11:17:10 compress_compdev -- common/autotest_common.sh@950 -- # '[' -z 3777152 ']' 00:35:03.745 11:17:10 compress_compdev -- common/autotest_common.sh@954 -- # kill -0 3777152 00:35:03.745 11:17:10 compress_compdev -- common/autotest_common.sh@955 -- # uname 00:35:03.745 11:17:10 compress_compdev -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:03.745 11:17:10 compress_compdev -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3777152 00:35:03.745 11:17:10 
compress_compdev -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:03.745 11:17:10 compress_compdev -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:03.745 11:17:10 compress_compdev -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3777152' 00:35:03.745 killing process with pid 3777152 00:35:03.745 11:17:10 compress_compdev -- common/autotest_common.sh@969 -- # kill 3777152 00:35:03.745 Received shutdown signal, test time was about 3.000000 seconds 00:35:03.745 00:35:03.745 Latency(us) 00:35:03.745 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:03.745 =================================================================================================================== 00:35:03.745 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:03.745 11:17:10 compress_compdev -- common/autotest_common.sh@974 -- # wait 3777152 00:35:07.943 11:17:14 compress_compdev -- compress/compress.sh@87 -- # run_bdevperf 32 4096 3 512 00:35:07.943 11:17:14 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:35:07.943 11:17:14 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=3779787 00:35:07.943 11:17:14 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:07.943 11:17:14 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:35:07.943 11:17:14 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 3779787 00:35:07.943 11:17:14 compress_compdev -- common/autotest_common.sh@831 -- # '[' -z 3779787 ']' 00:35:07.943 11:17:14 compress_compdev -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:07.943 11:17:14 compress_compdev -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:35:07.943 11:17:14 compress_compdev -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:07.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:07.943 11:17:14 compress_compdev -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:07.943 11:17:14 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:35:07.943 [2024-07-25 11:17:14.441188] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:35:07.943 [2024-07-25 11:17:14.441317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3779787 ] 00:35:07.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:07.944 EAL: Requested device 0000:3d:01.0 cannot be used 00:35:07.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:07.944 EAL: Requested device 0000:3d:01.1 cannot be used 00:35:07.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:07.944 EAL: Requested device 0000:3d:01.2 cannot be used 00:35:07.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:07.944 EAL: Requested device 0000:3d:01.3 cannot be used 00:35:07.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:07.944 EAL: Requested device 0000:3d:01.4 cannot be used 00:35:07.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:07.944 EAL: Requested device 0000:3d:01.5 cannot be used 00:35:07.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:07.944 EAL: Requested device 0000:3d:01.6 cannot be used 00:35:07.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:07.944 EAL: Requested device 0000:3d:01.7 cannot 
be used 00:35:07.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:07.944 EAL: Requested device 0000:3d:02.0 cannot be used 00:35:07.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:07.944 EAL: Requested device 0000:3d:02.1 cannot be used 00:35:07.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:07.944 EAL: Requested device 0000:3d:02.2 cannot be used 00:35:07.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:07.944 EAL: Requested device 0000:3d:02.3 cannot be used 00:35:07.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:07.944 EAL: Requested device 0000:3d:02.4 cannot be used 00:35:07.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:07.944 EAL: Requested device 0000:3d:02.5 cannot be used 00:35:07.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:07.944 EAL: Requested device 0000:3d:02.6 cannot be used 00:35:07.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:07.944 EAL: Requested device 0000:3d:02.7 cannot be used 00:35:07.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:07.944 EAL: Requested device 0000:3f:01.0 cannot be used 00:35:07.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:07.944 EAL: Requested device 0000:3f:01.1 cannot be used 00:35:07.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:07.944 EAL: Requested device 0000:3f:01.2 cannot be used 00:35:07.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:07.944 EAL: Requested device 0000:3f:01.3 cannot be used 00:35:07.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:07.944 EAL: Requested device 0000:3f:01.4 cannot be used 00:35:07.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:07.944 EAL: Requested device 0000:3f:01.5 cannot be used 00:35:07.944 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:07.944 EAL: Requested device 0000:3f:01.6 cannot be used 00:35:07.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:07.944 EAL: Requested device 0000:3f:01.7 cannot be used 00:35:07.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:07.944 EAL: Requested device 0000:3f:02.0 cannot be used 00:35:07.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:07.944 EAL: Requested device 0000:3f:02.1 cannot be used 00:35:07.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:07.944 EAL: Requested device 0000:3f:02.2 cannot be used 00:35:07.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:07.944 EAL: Requested device 0000:3f:02.3 cannot be used 00:35:07.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:07.944 EAL: Requested device 0000:3f:02.4 cannot be used 00:35:07.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:07.944 EAL: Requested device 0000:3f:02.5 cannot be used 00:35:07.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:07.944 EAL: Requested device 0000:3f:02.6 cannot be used 00:35:07.944 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:07.944 EAL: Requested device 0000:3f:02.7 cannot be used 00:35:07.944 [2024-07-25 11:17:14.659013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:07.944 [2024-07-25 11:17:14.944327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:07.944 [2024-07-25 11:17:14.944328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:09.325 [2024-07-25 11:17:16.339751] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:35:10.262 11:17:17 compress_compdev -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:10.262 11:17:17 compress_compdev -- 
common/autotest_common.sh@864 -- # return 0 00:35:10.262 11:17:17 compress_compdev -- compress/compress.sh@74 -- # create_vols 512 00:35:10.262 11:17:17 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:35:10.262 11:17:17 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:13.552 [2024-07-25 11:17:20.190938] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d220 PMD being used: compress_qat 00:35:13.552 11:17:20 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:35:13.552 11:17:20 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:35:13.552 11:17:20 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:35:13.552 11:17:20 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:35:13.552 11:17:20 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:35:13.552 11:17:20 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:35:13.552 11:17:20 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:13.552 11:17:20 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:35:13.552 [ 00:35:13.552 { 00:35:13.552 "name": "Nvme0n1", 00:35:13.552 "aliases": [ 00:35:13.552 "8c2024b2-2e8f-4412-9543-30e69d4287a3" 00:35:13.552 ], 00:35:13.552 "product_name": "NVMe disk", 00:35:13.552 "block_size": 512, 00:35:13.552 "num_blocks": 3907029168, 00:35:13.552 "uuid": "8c2024b2-2e8f-4412-9543-30e69d4287a3", 00:35:13.552 "assigned_rate_limits": { 00:35:13.552 "rw_ios_per_sec": 0, 00:35:13.552 "rw_mbytes_per_sec": 0, 00:35:13.552 "r_mbytes_per_sec": 0, 00:35:13.552 "w_mbytes_per_sec": 0 00:35:13.552 }, 
00:35:13.552 "claimed": false, 00:35:13.552 "zoned": false, 00:35:13.552 "supported_io_types": { 00:35:13.552 "read": true, 00:35:13.552 "write": true, 00:35:13.552 "unmap": true, 00:35:13.552 "flush": true, 00:35:13.552 "reset": true, 00:35:13.552 "nvme_admin": true, 00:35:13.552 "nvme_io": true, 00:35:13.552 "nvme_io_md": false, 00:35:13.552 "write_zeroes": true, 00:35:13.552 "zcopy": false, 00:35:13.552 "get_zone_info": false, 00:35:13.552 "zone_management": false, 00:35:13.552 "zone_append": false, 00:35:13.552 "compare": false, 00:35:13.552 "compare_and_write": false, 00:35:13.552 "abort": true, 00:35:13.552 "seek_hole": false, 00:35:13.552 "seek_data": false, 00:35:13.552 "copy": false, 00:35:13.552 "nvme_iov_md": false 00:35:13.552 }, 00:35:13.552 "driver_specific": { 00:35:13.552 "nvme": [ 00:35:13.552 { 00:35:13.552 "pci_address": "0000:d8:00.0", 00:35:13.552 "trid": { 00:35:13.552 "trtype": "PCIe", 00:35:13.552 "traddr": "0000:d8:00.0" 00:35:13.552 }, 00:35:13.552 "ctrlr_data": { 00:35:13.552 "cntlid": 0, 00:35:13.552 "vendor_id": "0x8086", 00:35:13.552 "model_number": "INTEL SSDPE2KX020T8", 00:35:13.552 "serial_number": "BTLJ125505KA2P0BGN", 00:35:13.552 "firmware_revision": "VDV10170", 00:35:13.552 "oacs": { 00:35:13.552 "security": 0, 00:35:13.552 "format": 1, 00:35:13.552 "firmware": 1, 00:35:13.552 "ns_manage": 1 00:35:13.552 }, 00:35:13.552 "multi_ctrlr": false, 00:35:13.552 "ana_reporting": false 00:35:13.552 }, 00:35:13.552 "vs": { 00:35:13.552 "nvme_version": "1.2" 00:35:13.552 }, 00:35:13.552 "ns_data": { 00:35:13.552 "id": 1, 00:35:13.552 "can_share": false 00:35:13.552 } 00:35:13.552 } 00:35:13.552 ], 00:35:13.552 "mp_policy": "active_passive" 00:35:13.552 } 00:35:13.552 } 00:35:13.552 ] 00:35:13.811 11:17:20 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:35:13.811 11:17:20 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method 
none Nvme0n1 lvs0 00:35:13.811 [2024-07-25 11:17:20.893131] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d3e0 PMD being used: compress_qat 00:35:14.747 5fc44dc3-6368-40d5-8b95-c0dccb0b38f8 00:35:15.006 11:17:21 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:35:15.006 1254f21a-b833-48a1-b0b6-dd21405ec2b3 00:35:15.006 11:17:22 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:35:15.006 11:17:22 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:35:15.006 11:17:22 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:35:15.006 11:17:22 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:35:15.006 11:17:22 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:35:15.006 11:17:22 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:35:15.006 11:17:22 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:15.265 11:17:22 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:35:15.560 [ 00:35:15.560 { 00:35:15.560 "name": "1254f21a-b833-48a1-b0b6-dd21405ec2b3", 00:35:15.560 "aliases": [ 00:35:15.560 "lvs0/lv0" 00:35:15.560 ], 00:35:15.560 "product_name": "Logical Volume", 00:35:15.560 "block_size": 512, 00:35:15.560 "num_blocks": 204800, 00:35:15.560 "uuid": "1254f21a-b833-48a1-b0b6-dd21405ec2b3", 00:35:15.560 "assigned_rate_limits": { 00:35:15.560 "rw_ios_per_sec": 0, 00:35:15.560 "rw_mbytes_per_sec": 0, 00:35:15.560 "r_mbytes_per_sec": 0, 00:35:15.560 "w_mbytes_per_sec": 0 00:35:15.560 }, 00:35:15.560 "claimed": false, 00:35:15.560 "zoned": false, 00:35:15.560 "supported_io_types": { 00:35:15.560 "read": true, 00:35:15.560 
"write": true, 00:35:15.560 "unmap": true, 00:35:15.560 "flush": false, 00:35:15.560 "reset": true, 00:35:15.560 "nvme_admin": false, 00:35:15.560 "nvme_io": false, 00:35:15.560 "nvme_io_md": false, 00:35:15.560 "write_zeroes": true, 00:35:15.560 "zcopy": false, 00:35:15.560 "get_zone_info": false, 00:35:15.560 "zone_management": false, 00:35:15.560 "zone_append": false, 00:35:15.560 "compare": false, 00:35:15.560 "compare_and_write": false, 00:35:15.560 "abort": false, 00:35:15.560 "seek_hole": true, 00:35:15.560 "seek_data": true, 00:35:15.560 "copy": false, 00:35:15.560 "nvme_iov_md": false 00:35:15.560 }, 00:35:15.560 "driver_specific": { 00:35:15.560 "lvol": { 00:35:15.560 "lvol_store_uuid": "5fc44dc3-6368-40d5-8b95-c0dccb0b38f8", 00:35:15.560 "base_bdev": "Nvme0n1", 00:35:15.560 "thin_provision": true, 00:35:15.560 "num_allocated_clusters": 0, 00:35:15.560 "snapshot": false, 00:35:15.560 "clone": false, 00:35:15.560 "esnap_clone": false 00:35:15.560 } 00:35:15.560 } 00:35:15.560 } 00:35:15.560 ] 00:35:15.560 11:17:22 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:35:15.560 11:17:22 compress_compdev -- compress/compress.sh@41 -- # '[' -z 512 ']' 00:35:15.560 11:17:22 compress_compdev -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 512 00:35:15.819 [2024-07-25 11:17:22.796233] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:35:15.819 COMP_lvs0/lv0 00:35:15.819 11:17:22 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:35:15.819 11:17:22 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:35:15.819 11:17:22 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:35:15.819 11:17:22 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:35:15.819 11:17:22 compress_compdev -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:35:15.819 11:17:22 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:35:15.819 11:17:22 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:16.078 11:17:23 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:35:16.340 [ 00:35:16.340 { 00:35:16.340 "name": "COMP_lvs0/lv0", 00:35:16.340 "aliases": [ 00:35:16.340 "c713379d-e541-5fa5-b13a-90413332193c" 00:35:16.340 ], 00:35:16.340 "product_name": "compress", 00:35:16.340 "block_size": 512, 00:35:16.340 "num_blocks": 200704, 00:35:16.340 "uuid": "c713379d-e541-5fa5-b13a-90413332193c", 00:35:16.340 "assigned_rate_limits": { 00:35:16.340 "rw_ios_per_sec": 0, 00:35:16.340 "rw_mbytes_per_sec": 0, 00:35:16.341 "r_mbytes_per_sec": 0, 00:35:16.341 "w_mbytes_per_sec": 0 00:35:16.341 }, 00:35:16.341 "claimed": false, 00:35:16.341 "zoned": false, 00:35:16.341 "supported_io_types": { 00:35:16.341 "read": true, 00:35:16.341 "write": true, 00:35:16.341 "unmap": false, 00:35:16.341 "flush": false, 00:35:16.341 "reset": false, 00:35:16.341 "nvme_admin": false, 00:35:16.341 "nvme_io": false, 00:35:16.341 "nvme_io_md": false, 00:35:16.341 "write_zeroes": true, 00:35:16.341 "zcopy": false, 00:35:16.341 "get_zone_info": false, 00:35:16.341 "zone_management": false, 00:35:16.341 "zone_append": false, 00:35:16.341 "compare": false, 00:35:16.341 "compare_and_write": false, 00:35:16.341 "abort": false, 00:35:16.341 "seek_hole": false, 00:35:16.341 "seek_data": false, 00:35:16.341 "copy": false, 00:35:16.341 "nvme_iov_md": false 00:35:16.341 }, 00:35:16.341 "driver_specific": { 00:35:16.341 "compress": { 00:35:16.341 "name": "COMP_lvs0/lv0", 00:35:16.341 "base_bdev_name": "1254f21a-b833-48a1-b0b6-dd21405ec2b3", 00:35:16.341 "pm_path": 
"/tmp/pmem/74002eb7-7be0-4752-86da-985fb2c151fc" 00:35:16.341 } 00:35:16.341 } 00:35:16.341 } 00:35:16.341 ] 00:35:16.341 11:17:23 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:35:16.341 11:17:23 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:35:16.341 [2024-07-25 11:17:23.373786] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e0000101e0 PMD being used: compress_qat 00:35:16.341 [2024-07-25 11:17:23.377058] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d4c0 PMD being used: compress_qat 00:35:16.341 Running I/O for 3 seconds... 00:35:19.635 00:35:19.635 Latency(us) 00:35:19.635 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:19.635 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:35:19.635 Verification LBA range: start 0x0 length 0x3100 00:35:19.635 COMP_lvs0/lv0 : 3.01 3837.15 14.99 0.00 0.00 8280.30 136.81 15309.21 00:35:19.635 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:35:19.635 Verification LBA range: start 0x3100 length 0x3100 00:35:19.635 COMP_lvs0/lv0 : 3.01 3988.42 15.58 0.00 0.00 7977.33 125.34 15204.35 00:35:19.635 =================================================================================================================== 00:35:19.635 Total : 7825.57 30.57 0.00 0.00 8125.86 125.34 15309.21 00:35:19.635 0 00:35:19.635 11:17:26 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:35:19.635 11:17:26 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:35:19.635 11:17:26 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:35:19.894 11:17:26 compress_compdev -- compress/compress.sh@77 -- # trap - 
SIGINT SIGTERM EXIT 00:35:19.894 11:17:26 compress_compdev -- compress/compress.sh@78 -- # killprocess 3779787 00:35:19.894 11:17:26 compress_compdev -- common/autotest_common.sh@950 -- # '[' -z 3779787 ']' 00:35:19.894 11:17:26 compress_compdev -- common/autotest_common.sh@954 -- # kill -0 3779787 00:35:19.894 11:17:26 compress_compdev -- common/autotest_common.sh@955 -- # uname 00:35:19.895 11:17:26 compress_compdev -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:19.895 11:17:26 compress_compdev -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3779787 00:35:19.895 11:17:26 compress_compdev -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:19.895 11:17:26 compress_compdev -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:19.895 11:17:26 compress_compdev -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3779787' 00:35:19.895 killing process with pid 3779787 00:35:19.895 11:17:26 compress_compdev -- common/autotest_common.sh@969 -- # kill 3779787 00:35:19.895 Received shutdown signal, test time was about 3.000000 seconds 00:35:19.895 00:35:19.895 Latency(us) 00:35:19.895 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:19.895 =================================================================================================================== 00:35:19.895 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:19.895 11:17:26 compress_compdev -- common/autotest_common.sh@974 -- # wait 3779787 00:35:24.095 11:17:30 compress_compdev -- compress/compress.sh@88 -- # run_bdevperf 32 4096 3 4096 00:35:24.095 11:17:30 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:35:24.095 11:17:30 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=3782385 00:35:24.095 11:17:30 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:24.095 11:17:30 compress_compdev 
-- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:35:24.095 11:17:30 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 3782385 00:35:24.095 11:17:30 compress_compdev -- common/autotest_common.sh@831 -- # '[' -z 3782385 ']' 00:35:24.095 11:17:30 compress_compdev -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:24.095 11:17:30 compress_compdev -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:24.095 11:17:30 compress_compdev -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:24.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:24.095 11:17:30 compress_compdev -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:24.095 11:17:30 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:35:24.095 [2024-07-25 11:17:30.601929] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:35:24.095 [2024-07-25 11:17:30.602054] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3782385 ] 00:35:24.095 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.095 EAL: Requested device 0000:3d:01.0 cannot be used 00:35:24.095 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.095 EAL: Requested device 0000:3d:01.1 cannot be used 00:35:24.095 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.095 EAL: Requested device 0000:3d:01.2 cannot be used 00:35:24.095 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.095 EAL: Requested device 0000:3d:01.3 cannot be used 00:35:24.095 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.095 EAL: Requested device 0000:3d:01.4 cannot be used 00:35:24.095 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.095 EAL: Requested device 0000:3d:01.5 cannot be used 00:35:24.095 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.095 EAL: Requested device 0000:3d:01.6 cannot be used 00:35:24.095 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.095 EAL: Requested device 0000:3d:01.7 cannot be used 00:35:24.095 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.095 EAL: Requested device 0000:3d:02.0 cannot be used 00:35:24.095 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.095 EAL: Requested device 0000:3d:02.1 cannot be used 00:35:24.095 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.095 EAL: Requested device 0000:3d:02.2 cannot be used 00:35:24.095 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.095 EAL: Requested device 0000:3d:02.3 cannot be used 
00:35:24.095 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.095 EAL: Requested device 0000:3d:02.4 cannot be used 00:35:24.095 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.095 EAL: Requested device 0000:3d:02.5 cannot be used 00:35:24.095 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.095 EAL: Requested device 0000:3d:02.6 cannot be used 00:35:24.095 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.095 EAL: Requested device 0000:3d:02.7 cannot be used 00:35:24.095 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.095 EAL: Requested device 0000:3f:01.0 cannot be used 00:35:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.096 EAL: Requested device 0000:3f:01.1 cannot be used 00:35:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.096 EAL: Requested device 0000:3f:01.2 cannot be used 00:35:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.096 EAL: Requested device 0000:3f:01.3 cannot be used 00:35:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.096 EAL: Requested device 0000:3f:01.4 cannot be used 00:35:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.096 EAL: Requested device 0000:3f:01.5 cannot be used 00:35:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.096 EAL: Requested device 0000:3f:01.6 cannot be used 00:35:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.096 EAL: Requested device 0000:3f:01.7 cannot be used 00:35:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.096 EAL: Requested device 0000:3f:02.0 cannot be used 00:35:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.096 EAL: Requested device 0000:3f:02.1 cannot be used 00:35:24.096 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.096 EAL: Requested device 0000:3f:02.2 cannot be used 00:35:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.096 EAL: Requested device 0000:3f:02.3 cannot be used 00:35:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.096 EAL: Requested device 0000:3f:02.4 cannot be used 00:35:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.096 EAL: Requested device 0000:3f:02.5 cannot be used 00:35:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.096 EAL: Requested device 0000:3f:02.6 cannot be used 00:35:24.096 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:24.096 EAL: Requested device 0000:3f:02.7 cannot be used 00:35:24.096 [2024-07-25 11:17:30.813679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:24.096 [2024-07-25 11:17:31.094047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:24.096 [2024-07-25 11:17:31.094050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:25.475 [2024-07-25 11:17:32.455783] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:35:26.410 11:17:33 compress_compdev -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:26.410 11:17:33 compress_compdev -- common/autotest_common.sh@864 -- # return 0 00:35:26.410 11:17:33 compress_compdev -- compress/compress.sh@74 -- # create_vols 4096 00:35:26.410 11:17:33 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:26.410 11:17:33 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:35:29.696 [2024-07-25 11:17:36.351485] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d220 PMD being used: compress_qat 00:35:29.696 
11:17:36 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:35:29.696 11:17:36 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:35:29.696 11:17:36 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:35:29.696 11:17:36 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:35:29.696 11:17:36 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:35:29.696 11:17:36 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:35:29.697 11:17:36 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:29.697 11:17:36 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:35:29.955 [ 00:35:29.955 { 00:35:29.955 "name": "Nvme0n1", 00:35:29.955 "aliases": [ 00:35:29.955 "9d19a1ca-18dc-49eb-9edd-e1be412316ca" 00:35:29.955 ], 00:35:29.955 "product_name": "NVMe disk", 00:35:29.955 "block_size": 512, 00:35:29.955 "num_blocks": 3907029168, 00:35:29.955 "uuid": "9d19a1ca-18dc-49eb-9edd-e1be412316ca", 00:35:29.955 "assigned_rate_limits": { 00:35:29.955 "rw_ios_per_sec": 0, 00:35:29.955 "rw_mbytes_per_sec": 0, 00:35:29.955 "r_mbytes_per_sec": 0, 00:35:29.955 "w_mbytes_per_sec": 0 00:35:29.955 }, 00:35:29.955 "claimed": false, 00:35:29.955 "zoned": false, 00:35:29.955 "supported_io_types": { 00:35:29.955 "read": true, 00:35:29.955 "write": true, 00:35:29.955 "unmap": true, 00:35:29.955 "flush": true, 00:35:29.955 "reset": true, 00:35:29.955 "nvme_admin": true, 00:35:29.955 "nvme_io": true, 00:35:29.955 "nvme_io_md": false, 00:35:29.955 "write_zeroes": true, 00:35:29.955 "zcopy": false, 00:35:29.955 "get_zone_info": false, 00:35:29.955 "zone_management": false, 00:35:29.955 "zone_append": false, 00:35:29.955 "compare": false, 00:35:29.955 "compare_and_write": false, 
00:35:29.955 "abort": true, 00:35:29.955 "seek_hole": false, 00:35:29.955 "seek_data": false, 00:35:29.955 "copy": false, 00:35:29.955 "nvme_iov_md": false 00:35:29.955 }, 00:35:29.955 "driver_specific": { 00:35:29.955 "nvme": [ 00:35:29.955 { 00:35:29.955 "pci_address": "0000:d8:00.0", 00:35:29.955 "trid": { 00:35:29.955 "trtype": "PCIe", 00:35:29.955 "traddr": "0000:d8:00.0" 00:35:29.955 }, 00:35:29.955 "ctrlr_data": { 00:35:29.955 "cntlid": 0, 00:35:29.955 "vendor_id": "0x8086", 00:35:29.955 "model_number": "INTEL SSDPE2KX020T8", 00:35:29.955 "serial_number": "BTLJ125505KA2P0BGN", 00:35:29.955 "firmware_revision": "VDV10170", 00:35:29.955 "oacs": { 00:35:29.955 "security": 0, 00:35:29.955 "format": 1, 00:35:29.955 "firmware": 1, 00:35:29.955 "ns_manage": 1 00:35:29.955 }, 00:35:29.955 "multi_ctrlr": false, 00:35:29.955 "ana_reporting": false 00:35:29.955 }, 00:35:29.955 "vs": { 00:35:29.955 "nvme_version": "1.2" 00:35:29.955 }, 00:35:29.955 "ns_data": { 00:35:29.955 "id": 1, 00:35:29.955 "can_share": false 00:35:29.955 } 00:35:29.955 } 00:35:29.955 ], 00:35:29.955 "mp_policy": "active_passive" 00:35:29.955 } 00:35:29.955 } 00:35:29.955 ] 00:35:29.955 11:17:36 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:35:29.955 11:17:36 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:35:29.955 [2024-07-25 11:17:37.055815] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d3e0 PMD being used: compress_qat 00:35:31.334 aeea22f6-d61f-44b7-b23e-449216f1b75c 00:35:31.334 11:17:38 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:35:31.334 7fb0b14f-a614-48c8-ba6c-e40a5ac4cf9d 00:35:31.334 11:17:38 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:35:31.334 11:17:38 compress_compdev -- 
common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:35:31.334 11:17:38 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:35:31.334 11:17:38 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:35:31.334 11:17:38 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:35:31.334 11:17:38 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:35:31.334 11:17:38 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:31.592 11:17:38 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:35:31.851 [ 00:35:31.851 { 00:35:31.851 "name": "7fb0b14f-a614-48c8-ba6c-e40a5ac4cf9d", 00:35:31.851 "aliases": [ 00:35:31.851 "lvs0/lv0" 00:35:31.851 ], 00:35:31.851 "product_name": "Logical Volume", 00:35:31.851 "block_size": 512, 00:35:31.851 "num_blocks": 204800, 00:35:31.851 "uuid": "7fb0b14f-a614-48c8-ba6c-e40a5ac4cf9d", 00:35:31.851 "assigned_rate_limits": { 00:35:31.851 "rw_ios_per_sec": 0, 00:35:31.851 "rw_mbytes_per_sec": 0, 00:35:31.851 "r_mbytes_per_sec": 0, 00:35:31.851 "w_mbytes_per_sec": 0 00:35:31.851 }, 00:35:31.851 "claimed": false, 00:35:31.851 "zoned": false, 00:35:31.851 "supported_io_types": { 00:35:31.851 "read": true, 00:35:31.851 "write": true, 00:35:31.851 "unmap": true, 00:35:31.852 "flush": false, 00:35:31.852 "reset": true, 00:35:31.852 "nvme_admin": false, 00:35:31.852 "nvme_io": false, 00:35:31.852 "nvme_io_md": false, 00:35:31.852 "write_zeroes": true, 00:35:31.852 "zcopy": false, 00:35:31.852 "get_zone_info": false, 00:35:31.852 "zone_management": false, 00:35:31.852 "zone_append": false, 00:35:31.852 "compare": false, 00:35:31.852 "compare_and_write": false, 00:35:31.852 "abort": false, 00:35:31.852 "seek_hole": true, 00:35:31.852 "seek_data": true, 00:35:31.852 "copy": false, 
00:35:31.852 "nvme_iov_md": false 00:35:31.852 }, 00:35:31.852 "driver_specific": { 00:35:31.852 "lvol": { 00:35:31.852 "lvol_store_uuid": "aeea22f6-d61f-44b7-b23e-449216f1b75c", 00:35:31.852 "base_bdev": "Nvme0n1", 00:35:31.852 "thin_provision": true, 00:35:31.852 "num_allocated_clusters": 0, 00:35:31.852 "snapshot": false, 00:35:31.852 "clone": false, 00:35:31.852 "esnap_clone": false 00:35:31.852 } 00:35:31.852 } 00:35:31.852 } 00:35:31.852 ] 00:35:31.852 11:17:38 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:35:31.852 11:17:38 compress_compdev -- compress/compress.sh@41 -- # '[' -z 4096 ']' 00:35:31.852 11:17:38 compress_compdev -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 4096 00:35:32.111 [2024-07-25 11:17:39.019057] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:35:32.111 COMP_lvs0/lv0 00:35:32.111 11:17:39 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:35:32.111 11:17:39 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:35:32.111 11:17:39 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:35:32.111 11:17:39 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:35:32.111 11:17:39 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:35:32.111 11:17:39 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:35:32.111 11:17:39 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:32.370 11:17:39 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:35:32.370 [ 00:35:32.370 { 00:35:32.370 "name": "COMP_lvs0/lv0", 00:35:32.370 "aliases": [ 
00:35:32.370 "4532a92e-1fb9-59fc-9e69-1a548191d846" 00:35:32.370 ], 00:35:32.370 "product_name": "compress", 00:35:32.370 "block_size": 4096, 00:35:32.370 "num_blocks": 25088, 00:35:32.370 "uuid": "4532a92e-1fb9-59fc-9e69-1a548191d846", 00:35:32.370 "assigned_rate_limits": { 00:35:32.370 "rw_ios_per_sec": 0, 00:35:32.370 "rw_mbytes_per_sec": 0, 00:35:32.370 "r_mbytes_per_sec": 0, 00:35:32.370 "w_mbytes_per_sec": 0 00:35:32.370 }, 00:35:32.370 "claimed": false, 00:35:32.370 "zoned": false, 00:35:32.370 "supported_io_types": { 00:35:32.370 "read": true, 00:35:32.370 "write": true, 00:35:32.370 "unmap": false, 00:35:32.370 "flush": false, 00:35:32.370 "reset": false, 00:35:32.370 "nvme_admin": false, 00:35:32.370 "nvme_io": false, 00:35:32.370 "nvme_io_md": false, 00:35:32.370 "write_zeroes": true, 00:35:32.370 "zcopy": false, 00:35:32.370 "get_zone_info": false, 00:35:32.370 "zone_management": false, 00:35:32.370 "zone_append": false, 00:35:32.370 "compare": false, 00:35:32.370 "compare_and_write": false, 00:35:32.370 "abort": false, 00:35:32.370 "seek_hole": false, 00:35:32.370 "seek_data": false, 00:35:32.370 "copy": false, 00:35:32.370 "nvme_iov_md": false 00:35:32.370 }, 00:35:32.370 "driver_specific": { 00:35:32.370 "compress": { 00:35:32.370 "name": "COMP_lvs0/lv0", 00:35:32.370 "base_bdev_name": "7fb0b14f-a614-48c8-ba6c-e40a5ac4cf9d", 00:35:32.370 "pm_path": "/tmp/pmem/27c83065-b247-440d-bc97-2991bf0fce97" 00:35:32.370 } 00:35:32.370 } 00:35:32.370 } 00:35:32.370 ] 00:35:32.370 11:17:39 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:35:32.370 11:17:39 compress_compdev -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:35:32.629 [2024-07-25 11:17:39.595923] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e0000101e0 PMD being used: compress_qat 00:35:32.629 [2024-07-25 11:17:39.599192] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 
0x60e00001d4c0 PMD being used: compress_qat 00:35:32.629 Running I/O for 3 seconds... 00:35:35.953 00:35:35.953 Latency(us) 00:35:35.953 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:35.953 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:35:35.953 Verification LBA range: start 0x0 length 0x3100 00:35:35.953 COMP_lvs0/lv0 : 3.01 3768.47 14.72 0.00 0.00 8436.76 183.50 14155.78 00:35:35.953 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:35:35.953 Verification LBA range: start 0x3100 length 0x3100 00:35:35.953 COMP_lvs0/lv0 : 3.01 3830.15 14.96 0.00 0.00 8307.46 173.67 13526.63 00:35:35.953 =================================================================================================================== 00:35:35.953 Total : 7598.62 29.68 0.00 0.00 8371.56 173.67 14155.78 00:35:35.953 0 00:35:35.953 11:17:42 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:35:35.953 11:17:42 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:35:35.953 11:17:42 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:35:36.215 11:17:43 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:35:36.215 11:17:43 compress_compdev -- compress/compress.sh@78 -- # killprocess 3782385 00:35:36.215 11:17:43 compress_compdev -- common/autotest_common.sh@950 -- # '[' -z 3782385 ']' 00:35:36.215 11:17:43 compress_compdev -- common/autotest_common.sh@954 -- # kill -0 3782385 00:35:36.215 11:17:43 compress_compdev -- common/autotest_common.sh@955 -- # uname 00:35:36.215 11:17:43 compress_compdev -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:36.215 11:17:43 compress_compdev -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3782385 00:35:36.215 
11:17:43 compress_compdev -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:36.215 11:17:43 compress_compdev -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:36.215 11:17:43 compress_compdev -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3782385' 00:35:36.215 killing process with pid 3782385 00:35:36.215 11:17:43 compress_compdev -- common/autotest_common.sh@969 -- # kill 3782385 00:35:36.215 Received shutdown signal, test time was about 3.000000 seconds 00:35:36.215 00:35:36.215 Latency(us) 00:35:36.215 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:36.215 =================================================================================================================== 00:35:36.215 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:36.215 11:17:43 compress_compdev -- common/autotest_common.sh@974 -- # wait 3782385 00:35:40.409 11:17:46 compress_compdev -- compress/compress.sh@89 -- # run_bdevio 00:35:40.409 11:17:46 compress_compdev -- compress/compress.sh@50 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:35:40.409 11:17:46 compress_compdev -- compress/compress.sh@55 -- # bdevio_pid=3785040 00:35:40.409 11:17:46 compress_compdev -- compress/compress.sh@56 -- # trap 'killprocess $bdevio_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:40.409 11:17:46 compress_compdev -- compress/compress.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json -w 00:35:40.409 11:17:46 compress_compdev -- compress/compress.sh@57 -- # waitforlisten 3785040 00:35:40.409 11:17:46 compress_compdev -- common/autotest_common.sh@831 -- # '[' -z 3785040 ']' 00:35:40.409 11:17:46 compress_compdev -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:40.409 11:17:46 compress_compdev -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:40.409 11:17:46 compress_compdev -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:40.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:40.409 11:17:46 compress_compdev -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:40.409 11:17:46 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:35:40.409 [2024-07-25 11:17:46.841250] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:35:40.409 [2024-07-25 11:17:46.841361] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3785040 ] 00:35:40.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.409 EAL: Requested device 0000:3d:01.0 cannot be used 00:35:40.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.409 EAL: Requested device 0000:3d:01.1 cannot be used 00:35:40.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.409 EAL: Requested device 0000:3d:01.2 cannot be used 00:35:40.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.409 EAL: Requested device 0000:3d:01.3 cannot be used 00:35:40.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.409 EAL: Requested device 0000:3d:01.4 cannot be used 00:35:40.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.409 EAL: Requested device 0000:3d:01.5 cannot be used 00:35:40.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.409 EAL: Requested device 0000:3d:01.6 cannot be used 00:35:40.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.409 EAL: Requested device 0000:3d:01.7 cannot be used 00:35:40.409 qat_pci_device_allocate(): Reached 
maximum number of QAT devices 00:35:40.409 EAL: Requested device 0000:3d:02.0 cannot be used 00:35:40.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.409 EAL: Requested device 0000:3d:02.1 cannot be used 00:35:40.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.409 EAL: Requested device 0000:3d:02.2 cannot be used 00:35:40.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.409 EAL: Requested device 0000:3d:02.3 cannot be used 00:35:40.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.409 EAL: Requested device 0000:3d:02.4 cannot be used 00:35:40.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.409 EAL: Requested device 0000:3d:02.5 cannot be used 00:35:40.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.409 EAL: Requested device 0000:3d:02.6 cannot be used 00:35:40.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.409 EAL: Requested device 0000:3d:02.7 cannot be used 00:35:40.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.409 EAL: Requested device 0000:3f:01.0 cannot be used 00:35:40.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.409 EAL: Requested device 0000:3f:01.1 cannot be used 00:35:40.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.409 EAL: Requested device 0000:3f:01.2 cannot be used 00:35:40.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.409 EAL: Requested device 0000:3f:01.3 cannot be used 00:35:40.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.409 EAL: Requested device 0000:3f:01.4 cannot be used 00:35:40.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.409 EAL: Requested device 0000:3f:01.5 cannot be used 00:35:40.409 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:35:40.409 EAL: Requested device 0000:3f:01.6 cannot be used 00:35:40.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.409 EAL: Requested device 0000:3f:01.7 cannot be used 00:35:40.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.409 EAL: Requested device 0000:3f:02.0 cannot be used 00:35:40.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.409 EAL: Requested device 0000:3f:02.1 cannot be used 00:35:40.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.409 EAL: Requested device 0000:3f:02.2 cannot be used 00:35:40.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.409 EAL: Requested device 0000:3f:02.3 cannot be used 00:35:40.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.409 EAL: Requested device 0000:3f:02.4 cannot be used 00:35:40.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.409 EAL: Requested device 0000:3f:02.5 cannot be used 00:35:40.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.409 EAL: Requested device 0000:3f:02.6 cannot be used 00:35:40.409 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:40.409 EAL: Requested device 0000:3f:02.7 cannot be used 00:35:40.409 [2024-07-25 11:17:47.067166] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:40.409 [2024-07-25 11:17:47.356874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:40.409 [2024-07-25 11:17:47.356941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:40.409 [2024-07-25 11:17:47.356945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:41.789 [2024-07-25 11:17:48.761747] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:35:42.365 11:17:49 compress_compdev -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:42.365 
11:17:49 compress_compdev -- common/autotest_common.sh@864 -- # return 0 00:35:42.365 11:17:49 compress_compdev -- compress/compress.sh@58 -- # create_vols 00:35:42.365 11:17:49 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:42.365 11:17:49 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:35:45.651 [2024-07-25 11:17:52.584992] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000026280 PMD being used: compress_qat 00:35:45.651 11:17:52 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:35:45.651 11:17:52 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:35:45.651 11:17:52 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:35:45.651 11:17:52 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:35:45.651 11:17:52 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:35:45.651 11:17:52 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:35:45.651 11:17:52 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:45.911 11:17:52 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:35:46.170 [ 00:35:46.170 { 00:35:46.170 "name": "Nvme0n1", 00:35:46.170 "aliases": [ 00:35:46.170 "6751ba73-e952-4abd-bbdc-7d94019d7969" 00:35:46.170 ], 00:35:46.170 "product_name": "NVMe disk", 00:35:46.170 "block_size": 512, 00:35:46.170 "num_blocks": 3907029168, 00:35:46.170 "uuid": "6751ba73-e952-4abd-bbdc-7d94019d7969", 00:35:46.170 "assigned_rate_limits": { 00:35:46.170 "rw_ios_per_sec": 0, 00:35:46.170 "rw_mbytes_per_sec": 0, 00:35:46.170 "r_mbytes_per_sec": 0, 00:35:46.170 "w_mbytes_per_sec": 0 
00:35:46.170 }, 00:35:46.170 "claimed": false, 00:35:46.170 "zoned": false, 00:35:46.170 "supported_io_types": { 00:35:46.170 "read": true, 00:35:46.170 "write": true, 00:35:46.170 "unmap": true, 00:35:46.170 "flush": true, 00:35:46.170 "reset": true, 00:35:46.170 "nvme_admin": true, 00:35:46.170 "nvme_io": true, 00:35:46.170 "nvme_io_md": false, 00:35:46.170 "write_zeroes": true, 00:35:46.170 "zcopy": false, 00:35:46.170 "get_zone_info": false, 00:35:46.170 "zone_management": false, 00:35:46.170 "zone_append": false, 00:35:46.170 "compare": false, 00:35:46.170 "compare_and_write": false, 00:35:46.170 "abort": true, 00:35:46.170 "seek_hole": false, 00:35:46.170 "seek_data": false, 00:35:46.170 "copy": false, 00:35:46.170 "nvme_iov_md": false 00:35:46.170 }, 00:35:46.170 "driver_specific": { 00:35:46.170 "nvme": [ 00:35:46.170 { 00:35:46.170 "pci_address": "0000:d8:00.0", 00:35:46.170 "trid": { 00:35:46.170 "trtype": "PCIe", 00:35:46.170 "traddr": "0000:d8:00.0" 00:35:46.170 }, 00:35:46.170 "ctrlr_data": { 00:35:46.170 "cntlid": 0, 00:35:46.170 "vendor_id": "0x8086", 00:35:46.170 "model_number": "INTEL SSDPE2KX020T8", 00:35:46.170 "serial_number": "BTLJ125505KA2P0BGN", 00:35:46.170 "firmware_revision": "VDV10170", 00:35:46.170 "oacs": { 00:35:46.170 "security": 0, 00:35:46.170 "format": 1, 00:35:46.170 "firmware": 1, 00:35:46.170 "ns_manage": 1 00:35:46.170 }, 00:35:46.170 "multi_ctrlr": false, 00:35:46.170 "ana_reporting": false 00:35:46.170 }, 00:35:46.170 "vs": { 00:35:46.170 "nvme_version": "1.2" 00:35:46.170 }, 00:35:46.170 "ns_data": { 00:35:46.170 "id": 1, 00:35:46.170 "can_share": false 00:35:46.170 } 00:35:46.170 } 00:35:46.170 ], 00:35:46.170 "mp_policy": "active_passive" 00:35:46.170 } 00:35:46.170 } 00:35:46.170 ] 00:35:46.170 11:17:53 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:35:46.170 11:17:53 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 
--clear-method none Nvme0n1 lvs0 00:35:46.170 [2024-07-25 11:17:53.283619] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e000026440 PMD being used: compress_qat 00:35:47.557 6e2d0208-2b18-4f62-9f67-ee93e384ebf7 00:35:47.557 11:17:54 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:35:47.557 114c6185-c625-481f-933b-60d04c9ad5d5 00:35:47.557 11:17:54 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:35:47.557 11:17:54 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:35:47.557 11:17:54 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:35:47.557 11:17:54 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:35:47.557 11:17:54 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:35:47.557 11:17:54 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:35:47.557 11:17:54 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:47.817 11:17:54 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:35:48.076 [ 00:35:48.076 { 00:35:48.076 "name": "114c6185-c625-481f-933b-60d04c9ad5d5", 00:35:48.076 "aliases": [ 00:35:48.076 "lvs0/lv0" 00:35:48.076 ], 00:35:48.076 "product_name": "Logical Volume", 00:35:48.076 "block_size": 512, 00:35:48.076 "num_blocks": 204800, 00:35:48.076 "uuid": "114c6185-c625-481f-933b-60d04c9ad5d5", 00:35:48.076 "assigned_rate_limits": { 00:35:48.077 "rw_ios_per_sec": 0, 00:35:48.077 "rw_mbytes_per_sec": 0, 00:35:48.077 "r_mbytes_per_sec": 0, 00:35:48.077 "w_mbytes_per_sec": 0 00:35:48.077 }, 00:35:48.077 "claimed": false, 00:35:48.077 "zoned": false, 00:35:48.077 "supported_io_types": { 00:35:48.077 "read": true, 
00:35:48.077 "write": true, 00:35:48.077 "unmap": true, 00:35:48.077 "flush": false, 00:35:48.077 "reset": true, 00:35:48.077 "nvme_admin": false, 00:35:48.077 "nvme_io": false, 00:35:48.077 "nvme_io_md": false, 00:35:48.077 "write_zeroes": true, 00:35:48.077 "zcopy": false, 00:35:48.077 "get_zone_info": false, 00:35:48.077 "zone_management": false, 00:35:48.077 "zone_append": false, 00:35:48.077 "compare": false, 00:35:48.077 "compare_and_write": false, 00:35:48.077 "abort": false, 00:35:48.077 "seek_hole": true, 00:35:48.077 "seek_data": true, 00:35:48.077 "copy": false, 00:35:48.077 "nvme_iov_md": false 00:35:48.077 }, 00:35:48.077 "driver_specific": { 00:35:48.077 "lvol": { 00:35:48.077 "lvol_store_uuid": "6e2d0208-2b18-4f62-9f67-ee93e384ebf7", 00:35:48.077 "base_bdev": "Nvme0n1", 00:35:48.077 "thin_provision": true, 00:35:48.077 "num_allocated_clusters": 0, 00:35:48.077 "snapshot": false, 00:35:48.077 "clone": false, 00:35:48.077 "esnap_clone": false 00:35:48.077 } 00:35:48.077 } 00:35:48.077 } 00:35:48.077 ] 00:35:48.077 11:17:54 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:35:48.077 11:17:54 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:35:48.077 11:17:54 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:35:48.077 [2024-07-25 11:17:55.184132] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:35:48.077 COMP_lvs0/lv0 00:35:48.336 11:17:55 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:35:48.336 11:17:55 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:35:48.336 11:17:55 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:35:48.336 11:17:55 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:35:48.336 11:17:55 compress_compdev -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:35:48.336 11:17:55 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:35:48.336 11:17:55 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:48.336 11:17:55 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:35:48.594 [ 00:35:48.594 { 00:35:48.594 "name": "COMP_lvs0/lv0", 00:35:48.594 "aliases": [ 00:35:48.594 "d8a30943-3879-5aa6-a778-34b1f827e546" 00:35:48.594 ], 00:35:48.594 "product_name": "compress", 00:35:48.594 "block_size": 512, 00:35:48.594 "num_blocks": 200704, 00:35:48.594 "uuid": "d8a30943-3879-5aa6-a778-34b1f827e546", 00:35:48.594 "assigned_rate_limits": { 00:35:48.594 "rw_ios_per_sec": 0, 00:35:48.594 "rw_mbytes_per_sec": 0, 00:35:48.594 "r_mbytes_per_sec": 0, 00:35:48.594 "w_mbytes_per_sec": 0 00:35:48.594 }, 00:35:48.594 "claimed": false, 00:35:48.595 "zoned": false, 00:35:48.595 "supported_io_types": { 00:35:48.595 "read": true, 00:35:48.595 "write": true, 00:35:48.595 "unmap": false, 00:35:48.595 "flush": false, 00:35:48.595 "reset": false, 00:35:48.595 "nvme_admin": false, 00:35:48.595 "nvme_io": false, 00:35:48.595 "nvme_io_md": false, 00:35:48.595 "write_zeroes": true, 00:35:48.595 "zcopy": false, 00:35:48.595 "get_zone_info": false, 00:35:48.595 "zone_management": false, 00:35:48.595 "zone_append": false, 00:35:48.595 "compare": false, 00:35:48.595 "compare_and_write": false, 00:35:48.595 "abort": false, 00:35:48.595 "seek_hole": false, 00:35:48.595 "seek_data": false, 00:35:48.595 "copy": false, 00:35:48.595 "nvme_iov_md": false 00:35:48.595 }, 00:35:48.595 "driver_specific": { 00:35:48.595 "compress": { 00:35:48.595 "name": "COMP_lvs0/lv0", 00:35:48.595 "base_bdev_name": "114c6185-c625-481f-933b-60d04c9ad5d5", 00:35:48.595 "pm_path": 
"/tmp/pmem/7a25d154-f9a9-4bfe-ac47-54721d2977ed" 00:35:48.595 } 00:35:48.595 } 00:35:48.595 } 00:35:48.595 ] 00:35:48.595 11:17:55 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:35:48.595 11:17:55 compress_compdev -- compress/compress.sh@59 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:35:48.854 [2024-07-25 11:17:55.765134] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e0000171e0 PMD being used: compress_qat 00:35:48.854 I/O targets: 00:35:48.854 COMP_lvs0/lv0: 200704 blocks of 512 bytes (98 MiB) 00:35:48.854 00:35:48.854 00:35:48.854 CUnit - A unit testing framework for C - Version 2.1-3 00:35:48.854 http://cunit.sourceforge.net/ 00:35:48.854 00:35:48.854 00:35:48.854 Suite: bdevio tests on: COMP_lvs0/lv0 00:35:48.854 Test: blockdev write read block ...passed 00:35:48.854 Test: blockdev write zeroes read block ...passed 00:35:48.854 Test: blockdev write zeroes read no split ...passed 00:35:48.854 Test: blockdev write zeroes read split ...passed 00:35:48.854 Test: blockdev write zeroes read split partial ...passed 00:35:48.854 Test: blockdev reset ...[2024-07-25 11:17:55.902609] vbdev_compress.c: 252:vbdev_compress_submit_request: *ERROR*: Unknown I/O type 5 00:35:48.854 passed 00:35:48.854 Test: blockdev write read 8 blocks ...passed 00:35:48.854 Test: blockdev write read size > 128k ...passed 00:35:48.854 Test: blockdev write read invalid size ...passed 00:35:48.854 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:48.854 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:48.854 Test: blockdev write read max offset ...passed 00:35:48.854 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:48.854 Test: blockdev writev readv 8 blocks ...passed 00:35:48.854 Test: blockdev writev readv 30 x 1block ...passed 00:35:48.854 Test: blockdev writev readv block ...passed 00:35:48.854 Test: blockdev writev 
readv size > 128k ...passed 00:35:48.854 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:48.854 Test: blockdev comparev and writev ...passed 00:35:48.854 Test: blockdev nvme passthru rw ...passed 00:35:48.854 Test: blockdev nvme passthru vendor specific ...passed 00:35:48.854 Test: blockdev nvme admin passthru ...passed 00:35:48.854 Test: blockdev copy ...passed 00:35:48.854 00:35:48.854 Run Summary: Type Total Ran Passed Failed Inactive 00:35:48.854 suites 1 1 n/a 0 0 00:35:48.854 tests 23 23 23 0 0 00:35:48.854 asserts 130 130 130 0 n/a 00:35:48.854 00:35:48.854 Elapsed time = 0.433 seconds 00:35:48.854 0 00:35:48.854 11:17:55 compress_compdev -- compress/compress.sh@60 -- # destroy_vols 00:35:48.854 11:17:55 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:35:49.112 11:17:56 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:35:49.371 11:17:56 compress_compdev -- compress/compress.sh@61 -- # trap - SIGINT SIGTERM EXIT 00:35:49.371 11:17:56 compress_compdev -- compress/compress.sh@62 -- # killprocess 3785040 00:35:49.371 11:17:56 compress_compdev -- common/autotest_common.sh@950 -- # '[' -z 3785040 ']' 00:35:49.371 11:17:56 compress_compdev -- common/autotest_common.sh@954 -- # kill -0 3785040 00:35:49.371 11:17:56 compress_compdev -- common/autotest_common.sh@955 -- # uname 00:35:49.371 11:17:56 compress_compdev -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:49.371 11:17:56 compress_compdev -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3785040 00:35:49.629 11:17:56 compress_compdev -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:49.629 11:17:56 compress_compdev -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:49.629 11:17:56 compress_compdev -- common/autotest_common.sh@968 
-- # echo 'killing process with pid 3785040' 00:35:49.629 killing process with pid 3785040 00:35:49.629 11:17:56 compress_compdev -- common/autotest_common.sh@969 -- # kill 3785040 00:35:49.629 11:17:56 compress_compdev -- common/autotest_common.sh@974 -- # wait 3785040 00:35:53.820 11:18:00 compress_compdev -- compress/compress.sh@91 -- # '[' 1 -eq 1 ']' 00:35:53.820 11:18:00 compress_compdev -- compress/compress.sh@92 -- # run_bdevperf 64 16384 30 00:35:53.820 11:18:00 compress_compdev -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:35:53.820 11:18:00 compress_compdev -- compress/compress.sh@71 -- # bdevperf_pid=3787245 00:35:53.820 11:18:00 compress_compdev -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:53.820 11:18:00 compress_compdev -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 64 -o 16384 -w verify -t 30 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:35:53.820 11:18:00 compress_compdev -- compress/compress.sh@73 -- # waitforlisten 3787245 00:35:53.820 11:18:00 compress_compdev -- common/autotest_common.sh@831 -- # '[' -z 3787245 ']' 00:35:53.820 11:18:00 compress_compdev -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:53.820 11:18:00 compress_compdev -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:53.820 11:18:00 compress_compdev -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:53.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:53.820 11:18:00 compress_compdev -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:53.820 11:18:00 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:35:53.820 [2024-07-25 11:18:00.196494] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:35:53.820 [2024-07-25 11:18:00.196617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3787245 ] 00:35:53.820 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:35:53.820 EAL: Requested device 0000:3d:01.0 cannot be used 00:35:53.820 [2024-07-25 11:18:00.411697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:53.820 [2024-07-25 11:18:00.684353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:53.820 [2024-07-25 11:18:00.684356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:55.224 [2024-07-25 11:18:02.048731] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:35:55.828 11:18:02 compress_compdev -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:55.828 11:18:02 compress_compdev -- common/autotest_common.sh@864 -- # return 0 00:35:55.828 11:18:02 compress_compdev -- compress/compress.sh@74 -- # create_vols 00:35:55.828 11:18:02 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:55.828 11:18:02 compress_compdev
-- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:35:59.111 [2024-07-25 11:18:05.934916] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d220 PMD being used: compress_qat 00:35:59.111 11:18:05 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:35:59.111 11:18:05 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:35:59.111 11:18:05 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:35:59.111 11:18:05 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:35:59.111 11:18:05 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:35:59.111 11:18:05 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:35:59.111 11:18:05 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:59.111 11:18:06 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:35:59.369 [ 00:35:59.369 { 00:35:59.369 "name": "Nvme0n1", 00:35:59.369 "aliases": [ 00:35:59.369 "21ab20af-c2f7-4691-a45c-1a495b38e015" 00:35:59.369 ], 00:35:59.369 "product_name": "NVMe disk", 00:35:59.369 "block_size": 512, 00:35:59.369 "num_blocks": 3907029168, 00:35:59.369 "uuid": "21ab20af-c2f7-4691-a45c-1a495b38e015", 00:35:59.369 "assigned_rate_limits": { 00:35:59.369 "rw_ios_per_sec": 0, 00:35:59.369 "rw_mbytes_per_sec": 0, 00:35:59.369 "r_mbytes_per_sec": 0, 00:35:59.369 "w_mbytes_per_sec": 0 00:35:59.369 }, 00:35:59.369 "claimed": false, 00:35:59.369 "zoned": false, 00:35:59.369 "supported_io_types": { 00:35:59.369 "read": true, 00:35:59.369 "write": true, 00:35:59.369 "unmap": true, 00:35:59.369 "flush": true, 00:35:59.369 "reset": true, 00:35:59.369 "nvme_admin": true, 00:35:59.369 "nvme_io": true, 00:35:59.369 
"nvme_io_md": false, 00:35:59.369 "write_zeroes": true, 00:35:59.369 "zcopy": false, 00:35:59.369 "get_zone_info": false, 00:35:59.369 "zone_management": false, 00:35:59.369 "zone_append": false, 00:35:59.369 "compare": false, 00:35:59.369 "compare_and_write": false, 00:35:59.369 "abort": true, 00:35:59.370 "seek_hole": false, 00:35:59.370 "seek_data": false, 00:35:59.370 "copy": false, 00:35:59.370 "nvme_iov_md": false 00:35:59.370 }, 00:35:59.370 "driver_specific": { 00:35:59.370 "nvme": [ 00:35:59.370 { 00:35:59.370 "pci_address": "0000:d8:00.0", 00:35:59.370 "trid": { 00:35:59.370 "trtype": "PCIe", 00:35:59.370 "traddr": "0000:d8:00.0" 00:35:59.370 }, 00:35:59.370 "ctrlr_data": { 00:35:59.370 "cntlid": 0, 00:35:59.370 "vendor_id": "0x8086", 00:35:59.370 "model_number": "INTEL SSDPE2KX020T8", 00:35:59.370 "serial_number": "BTLJ125505KA2P0BGN", 00:35:59.370 "firmware_revision": "VDV10170", 00:35:59.370 "oacs": { 00:35:59.370 "security": 0, 00:35:59.370 "format": 1, 00:35:59.370 "firmware": 1, 00:35:59.370 "ns_manage": 1 00:35:59.370 }, 00:35:59.370 "multi_ctrlr": false, 00:35:59.370 "ana_reporting": false 00:35:59.370 }, 00:35:59.370 "vs": { 00:35:59.370 "nvme_version": "1.2" 00:35:59.370 }, 00:35:59.370 "ns_data": { 00:35:59.370 "id": 1, 00:35:59.370 "can_share": false 00:35:59.370 } 00:35:59.370 } 00:35:59.370 ], 00:35:59.370 "mp_policy": "active_passive" 00:35:59.370 } 00:35:59.370 } 00:35:59.370 ] 00:35:59.370 11:18:06 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:35:59.370 11:18:06 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:35:59.649 [2024-07-25 11:18:06.640806] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d3e0 PMD being used: compress_qat 00:36:01.025 8163483a-ec88-4f1c-85f3-9ec61ac1e1f4 00:36:01.025 11:18:07 compress_compdev -- compress/compress.sh@38 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:36:01.025 bd449e6e-726e-4439-b3d7-e20e5b984303 00:36:01.025 11:18:07 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:36:01.025 11:18:07 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:36:01.025 11:18:07 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:36:01.025 11:18:07 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:36:01.025 11:18:07 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:36:01.025 11:18:07 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:36:01.025 11:18:07 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:01.283 11:18:08 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:36:01.283 [ 00:36:01.283 { 00:36:01.283 "name": "bd449e6e-726e-4439-b3d7-e20e5b984303", 00:36:01.283 "aliases": [ 00:36:01.283 "lvs0/lv0" 00:36:01.283 ], 00:36:01.283 "product_name": "Logical Volume", 00:36:01.283 "block_size": 512, 00:36:01.283 "num_blocks": 204800, 00:36:01.283 "uuid": "bd449e6e-726e-4439-b3d7-e20e5b984303", 00:36:01.283 "assigned_rate_limits": { 00:36:01.283 "rw_ios_per_sec": 0, 00:36:01.283 "rw_mbytes_per_sec": 0, 00:36:01.283 "r_mbytes_per_sec": 0, 00:36:01.283 "w_mbytes_per_sec": 0 00:36:01.283 }, 00:36:01.283 "claimed": false, 00:36:01.283 "zoned": false, 00:36:01.283 "supported_io_types": { 00:36:01.283 "read": true, 00:36:01.283 "write": true, 00:36:01.283 "unmap": true, 00:36:01.283 "flush": false, 00:36:01.283 "reset": true, 00:36:01.284 "nvme_admin": false, 00:36:01.284 "nvme_io": false, 00:36:01.284 "nvme_io_md": false, 00:36:01.284 "write_zeroes": true, 00:36:01.284 "zcopy": false, 00:36:01.284 
"get_zone_info": false, 00:36:01.284 "zone_management": false, 00:36:01.284 "zone_append": false, 00:36:01.284 "compare": false, 00:36:01.284 "compare_and_write": false, 00:36:01.284 "abort": false, 00:36:01.284 "seek_hole": true, 00:36:01.284 "seek_data": true, 00:36:01.284 "copy": false, 00:36:01.284 "nvme_iov_md": false 00:36:01.284 }, 00:36:01.284 "driver_specific": { 00:36:01.284 "lvol": { 00:36:01.284 "lvol_store_uuid": "8163483a-ec88-4f1c-85f3-9ec61ac1e1f4", 00:36:01.284 "base_bdev": "Nvme0n1", 00:36:01.284 "thin_provision": true, 00:36:01.284 "num_allocated_clusters": 0, 00:36:01.284 "snapshot": false, 00:36:01.284 "clone": false, 00:36:01.284 "esnap_clone": false 00:36:01.284 } 00:36:01.284 } 00:36:01.284 } 00:36:01.284 ] 00:36:01.284 11:18:08 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:36:01.284 11:18:08 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:36:01.284 11:18:08 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:36:01.542 [2024-07-25 11:18:08.606456] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:36:01.542 COMP_lvs0/lv0 00:36:01.542 11:18:08 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:36:01.542 11:18:08 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:36:01.542 11:18:08 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:36:01.542 11:18:08 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:36:01.542 11:18:08 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:36:01.542 11:18:08 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:36:01.542 11:18:08 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
bdev_wait_for_examine 00:36:01.801 11:18:08 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:36:02.060 [ 00:36:02.060 { 00:36:02.060 "name": "COMP_lvs0/lv0", 00:36:02.060 "aliases": [ 00:36:02.060 "ab2ec3cc-fe70-505d-81d0-e9670dfa6546" 00:36:02.060 ], 00:36:02.060 "product_name": "compress", 00:36:02.060 "block_size": 512, 00:36:02.060 "num_blocks": 200704, 00:36:02.060 "uuid": "ab2ec3cc-fe70-505d-81d0-e9670dfa6546", 00:36:02.060 "assigned_rate_limits": { 00:36:02.060 "rw_ios_per_sec": 0, 00:36:02.060 "rw_mbytes_per_sec": 0, 00:36:02.060 "r_mbytes_per_sec": 0, 00:36:02.060 "w_mbytes_per_sec": 0 00:36:02.060 }, 00:36:02.060 "claimed": false, 00:36:02.060 "zoned": false, 00:36:02.060 "supported_io_types": { 00:36:02.060 "read": true, 00:36:02.060 "write": true, 00:36:02.060 "unmap": false, 00:36:02.060 "flush": false, 00:36:02.060 "reset": false, 00:36:02.060 "nvme_admin": false, 00:36:02.060 "nvme_io": false, 00:36:02.060 "nvme_io_md": false, 00:36:02.060 "write_zeroes": true, 00:36:02.060 "zcopy": false, 00:36:02.060 "get_zone_info": false, 00:36:02.060 "zone_management": false, 00:36:02.061 "zone_append": false, 00:36:02.061 "compare": false, 00:36:02.061 "compare_and_write": false, 00:36:02.061 "abort": false, 00:36:02.061 "seek_hole": false, 00:36:02.061 "seek_data": false, 00:36:02.061 "copy": false, 00:36:02.061 "nvme_iov_md": false 00:36:02.061 }, 00:36:02.061 "driver_specific": { 00:36:02.061 "compress": { 00:36:02.061 "name": "COMP_lvs0/lv0", 00:36:02.061 "base_bdev_name": "bd449e6e-726e-4439-b3d7-e20e5b984303", 00:36:02.061 "pm_path": "/tmp/pmem/017301b4-2489-4457-8e32-3ddca62ba79e" 00:36:02.061 } 00:36:02.061 } 00:36:02.061 } 00:36:02.061 ] 00:36:02.061 11:18:09 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:36:02.061 11:18:09 compress_compdev -- compress/compress.sh@75 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:36:02.061 [2024-07-25 11:18:09.173532] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e0000101e0 PMD being used: compress_qat 00:36:02.061 [2024-07-25 11:18:09.176714] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x60e00001d5a0 PMD being used: compress_qat 00:36:02.061 Running I/O for 30 seconds... 00:36:34.144 00:36:34.144 Latency(us) 00:36:34.144 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:34.144 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 64, IO size: 16384) 00:36:34.144 Verification LBA range: start 0x0 length 0xc40 00:36:34.144 COMP_lvs0/lv0 : 30.01 1712.25 26.75 0.00 0.00 37157.10 455.48 32505.86 00:36:34.144 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 64, IO size: 16384) 00:36:34.144 Verification LBA range: start 0xc40 length 0xc40 00:36:34.144 COMP_lvs0/lv0 : 30.01 5293.57 82.71 0.00 0.00 11982.56 314.57 24746.39 00:36:34.144 =================================================================================================================== 00:36:34.144 Total : 7005.82 109.47 0.00 0.00 18135.42 314.57 32505.86 00:36:34.144 0 00:36:34.144 11:18:39 compress_compdev -- compress/compress.sh@76 -- # destroy_vols 00:36:34.144 11:18:39 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:36:34.144 11:18:39 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:36:34.144 11:18:39 compress_compdev -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:36:34.144 11:18:39 compress_compdev -- compress/compress.sh@78 -- # killprocess 3787245 00:36:34.144 11:18:39 compress_compdev -- common/autotest_common.sh@950 -- # '[' -z 3787245 ']' 00:36:34.144 11:18:39 compress_compdev -- 
common/autotest_common.sh@954 -- # kill -0 3787245 00:36:34.144 11:18:39 compress_compdev -- common/autotest_common.sh@955 -- # uname 00:36:34.144 11:18:39 compress_compdev -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:34.144 11:18:39 compress_compdev -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3787245 00:36:34.144 11:18:39 compress_compdev -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:34.144 11:18:39 compress_compdev -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:34.144 11:18:39 compress_compdev -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3787245' 00:36:34.144 killing process with pid 3787245 00:36:34.144 11:18:39 compress_compdev -- common/autotest_common.sh@969 -- # kill 3787245 00:36:34.144 Received shutdown signal, test time was about 30.000000 seconds 00:36:34.144 00:36:34.144 Latency(us) 00:36:34.144 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:34.144 =================================================================================================================== 00:36:34.144 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:34.144 11:18:39 compress_compdev -- common/autotest_common.sh@974 -- # wait 3787245 00:36:36.677 11:18:43 compress_compdev -- compress/compress.sh@95 -- # export TEST_TRANSPORT=tcp 00:36:36.677 11:18:43 compress_compdev -- compress/compress.sh@95 -- # TEST_TRANSPORT=tcp 00:36:36.678 11:18:43 compress_compdev -- compress/compress.sh@96 -- # NET_TYPE=virt 00:36:36.678 11:18:43 compress_compdev -- compress/compress.sh@96 -- # nvmftestinit 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:36.678 
11:18:43 compress_compdev -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:36.678 11:18:43 compress_compdev -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:36.678 11:18:43 compress_compdev -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@432 -- # nvmf_veth_init 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:36:36.678 Cannot find device "nvmf_tgt_br" 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@155 -- # true 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:36:36.678 Cannot find device "nvmf_tgt_br2" 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@156 -- # true 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:36:36.678 Cannot find device "nvmf_tgt_br" 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@158 -- # true 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:36:36.678 Cannot find device "nvmf_tgt_br2" 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@159 -- # true 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:36.678 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@162 -- # true 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:36.678 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@163 -- # true 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@166 -- # ip netns add 
nvmf_tgt_ns_spdk 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:36:36.678 11:18:43 compress_compdev -- nvmf/common.sh@196 -- # ip link set 
nvmf_init_br master nvmf_br 00:36:36.937 11:18:43 compress_compdev -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:36:36.937 11:18:43 compress_compdev -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:36:36.937 11:18:43 compress_compdev -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:36:36.937 11:18:43 compress_compdev -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:36.937 11:18:43 compress_compdev -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:36:36.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:36.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:36:36.937 00:36:36.937 --- 10.0.0.2 ping statistics --- 00:36:36.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:36.937 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:36:36.937 11:18:43 compress_compdev -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:36:36.937 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:36:36.937 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:36:36.937 00:36:36.937 --- 10.0.0.3 ping statistics --- 00:36:36.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:36.937 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:36:36.937 11:18:43 compress_compdev -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:36:36.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:36.937 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:36:36.937 00:36:36.937 --- 10.0.0.1 ping statistics --- 00:36:36.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:36.937 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:36:36.937 11:18:43 compress_compdev -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:36.937 11:18:43 compress_compdev -- nvmf/common.sh@433 -- # return 0 00:36:36.937 11:18:43 compress_compdev -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:36.937 11:18:43 compress_compdev -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:36.937 11:18:43 compress_compdev -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:36.937 11:18:43 compress_compdev -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:36.937 11:18:43 compress_compdev -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:36.937 11:18:43 compress_compdev -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:36.937 11:18:43 compress_compdev -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:36.937 11:18:44 compress_compdev -- compress/compress.sh@97 -- # nvmfappstart -m 0x7 00:36:36.937 11:18:44 compress_compdev -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:36.937 11:18:44 compress_compdev -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:36.937 11:18:44 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:36:36.937 11:18:44 compress_compdev -- nvmf/common.sh@481 -- # nvmfpid=3795175 00:36:36.937 11:18:44 compress_compdev -- nvmf/common.sh@482 -- # waitforlisten 3795175 00:36:36.937 11:18:44 compress_compdev -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:36:36.937 11:18:44 compress_compdev -- common/autotest_common.sh@831 -- # '[' -z 3795175 ']' 00:36:36.937 11:18:44 compress_compdev -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:36:36.937 11:18:44 compress_compdev -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:36.937 11:18:44 compress_compdev -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:36.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:36.937 11:18:44 compress_compdev -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:36.937 11:18:44 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:36:37.196 [2024-07-25 11:18:44.134942] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:36:37.196 [2024-07-25 11:18:44.135059] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:37.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:37.196 EAL: Requested device 0000:3d:01.0 cannot be used 00:36:37.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:37.196 EAL: Requested device 0000:3d:01.1 cannot be used 00:36:37.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:37.196 EAL: Requested device 0000:3d:01.2 cannot be used 00:36:37.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:37.196 EAL: Requested device 0000:3d:01.3 cannot be used 00:36:37.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:37.196 EAL: Requested device 0000:3d:01.4 cannot be used 00:36:37.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:37.196 EAL: Requested device 0000:3d:01.5 cannot be used 00:36:37.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:37.196 EAL: Requested device 0000:3d:01.6 cannot be used 00:36:37.196 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:36:37.196 EAL: Requested device 0000:3d:01.7 cannot be used 00:36:37.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:37.196 EAL: Requested device 0000:3d:02.0 cannot be used 00:36:37.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:37.196 EAL: Requested device 0000:3d:02.1 cannot be used 00:36:37.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:37.196 EAL: Requested device 0000:3d:02.2 cannot be used 00:36:37.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:37.196 EAL: Requested device 0000:3d:02.3 cannot be used 00:36:37.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:37.196 EAL: Requested device 0000:3d:02.4 cannot be used 00:36:37.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:37.196 EAL: Requested device 0000:3d:02.5 cannot be used 00:36:37.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:37.196 EAL: Requested device 0000:3d:02.6 cannot be used 00:36:37.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:37.196 EAL: Requested device 0000:3d:02.7 cannot be used 00:36:37.196 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:37.196 EAL: Requested device 0000:3f:01.0 cannot be used 00:36:37.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:37.197 EAL: Requested device 0000:3f:01.1 cannot be used 00:36:37.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:37.197 EAL: Requested device 0000:3f:01.2 cannot be used 00:36:37.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:37.197 EAL: Requested device 0000:3f:01.3 cannot be used 00:36:37.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:37.197 EAL: Requested device 0000:3f:01.4 cannot be used 00:36:37.197 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:36:37.197 EAL: Requested device 0000:3f:01.5 cannot be used 00:36:37.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:37.197 EAL: Requested device 0000:3f:01.6 cannot be used 00:36:37.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:37.197 EAL: Requested device 0000:3f:01.7 cannot be used 00:36:37.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:37.197 EAL: Requested device 0000:3f:02.0 cannot be used 00:36:37.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:37.197 EAL: Requested device 0000:3f:02.1 cannot be used 00:36:37.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:37.197 EAL: Requested device 0000:3f:02.2 cannot be used 00:36:37.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:37.197 EAL: Requested device 0000:3f:02.3 cannot be used 00:36:37.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:37.197 EAL: Requested device 0000:3f:02.4 cannot be used 00:36:37.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:37.197 EAL: Requested device 0000:3f:02.5 cannot be used 00:36:37.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:37.197 EAL: Requested device 0000:3f:02.6 cannot be used 00:36:37.197 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:36:37.197 EAL: Requested device 0000:3f:02.7 cannot be used 00:36:37.455 [2024-07-25 11:18:44.373117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:37.715 [2024-07-25 11:18:44.668012] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:37.715 [2024-07-25 11:18:44.668064] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:36:37.715 [2024-07-25 11:18:44.668084] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:37.715 [2024-07-25 11:18:44.668100] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:37.715 [2024-07-25 11:18:44.668117] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:37.715 [2024-07-25 11:18:44.668235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:37.715 [2024-07-25 11:18:44.668306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:37.715 [2024-07-25 11:18:44.668312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:38.342 11:18:45 compress_compdev -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:38.342 11:18:45 compress_compdev -- common/autotest_common.sh@864 -- # return 0 00:36:38.342 11:18:45 compress_compdev -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:38.342 11:18:45 compress_compdev -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:38.342 11:18:45 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:36:38.342 11:18:45 compress_compdev -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:38.342 11:18:45 compress_compdev -- compress/compress.sh@98 -- # trap 'nvmftestfini; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:36:38.342 11:18:45 compress_compdev -- compress/compress.sh@101 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -u 8192 00:36:38.631 [2024-07-25 11:18:45.446565] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:38.631 11:18:45 compress_compdev -- compress/compress.sh@102 -- # create_vols 00:36:38.631 11:18:45 compress_compdev -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:36:38.631 11:18:45 compress_compdev -- 
compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:36:41.917 11:18:48 compress_compdev -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:36:41.917 11:18:48 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:36:41.917 11:18:48 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:36:41.918 11:18:48 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:36:41.918 11:18:48 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:36:41.918 11:18:48 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:36:41.918 11:18:48 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:41.918 11:18:48 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:36:42.177 [ 00:36:42.177 { 00:36:42.177 "name": "Nvme0n1", 00:36:42.177 "aliases": [ 00:36:42.177 "b7f761d6-6099-476e-b721-b43a5464bc86" 00:36:42.177 ], 00:36:42.177 "product_name": "NVMe disk", 00:36:42.177 "block_size": 512, 00:36:42.177 "num_blocks": 3907029168, 00:36:42.177 "uuid": "b7f761d6-6099-476e-b721-b43a5464bc86", 00:36:42.177 "assigned_rate_limits": { 00:36:42.177 "rw_ios_per_sec": 0, 00:36:42.177 "rw_mbytes_per_sec": 0, 00:36:42.177 "r_mbytes_per_sec": 0, 00:36:42.177 "w_mbytes_per_sec": 0 00:36:42.177 }, 00:36:42.177 "claimed": false, 00:36:42.177 "zoned": false, 00:36:42.177 "supported_io_types": { 00:36:42.177 "read": true, 00:36:42.177 "write": true, 00:36:42.177 "unmap": true, 00:36:42.177 "flush": true, 00:36:42.177 "reset": true, 00:36:42.177 "nvme_admin": true, 00:36:42.177 "nvme_io": true, 00:36:42.177 "nvme_io_md": false, 00:36:42.177 "write_zeroes": true, 00:36:42.177 "zcopy": false, 00:36:42.177 "get_zone_info": false, 00:36:42.177 
"zone_management": false, 00:36:42.177 "zone_append": false, 00:36:42.177 "compare": false, 00:36:42.177 "compare_and_write": false, 00:36:42.177 "abort": true, 00:36:42.177 "seek_hole": false, 00:36:42.177 "seek_data": false, 00:36:42.177 "copy": false, 00:36:42.177 "nvme_iov_md": false 00:36:42.177 }, 00:36:42.177 "driver_specific": { 00:36:42.177 "nvme": [ 00:36:42.177 { 00:36:42.177 "pci_address": "0000:d8:00.0", 00:36:42.177 "trid": { 00:36:42.177 "trtype": "PCIe", 00:36:42.177 "traddr": "0000:d8:00.0" 00:36:42.177 }, 00:36:42.177 "ctrlr_data": { 00:36:42.177 "cntlid": 0, 00:36:42.177 "vendor_id": "0x8086", 00:36:42.177 "model_number": "INTEL SSDPE2KX020T8", 00:36:42.177 "serial_number": "BTLJ125505KA2P0BGN", 00:36:42.177 "firmware_revision": "VDV10170", 00:36:42.177 "oacs": { 00:36:42.177 "security": 0, 00:36:42.177 "format": 1, 00:36:42.177 "firmware": 1, 00:36:42.177 "ns_manage": 1 00:36:42.177 }, 00:36:42.177 "multi_ctrlr": false, 00:36:42.177 "ana_reporting": false 00:36:42.177 }, 00:36:42.177 "vs": { 00:36:42.177 "nvme_version": "1.2" 00:36:42.177 }, 00:36:42.177 "ns_data": { 00:36:42.177 "id": 1, 00:36:42.177 "can_share": false 00:36:42.177 } 00:36:42.177 } 00:36:42.177 ], 00:36:42.177 "mp_policy": "active_passive" 00:36:42.177 } 00:36:42.177 } 00:36:42.177 ] 00:36:42.177 11:18:49 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:36:42.177 11:18:49 compress_compdev -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:36:43.554 43556cfe-a171-4dbb-8e10-d440065f4c16 00:36:43.554 11:18:50 compress_compdev -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:36:43.554 4ebfc957-dbd8-4b1e-83e9-10c3bd6f5d23 00:36:43.554 11:18:50 compress_compdev -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:36:43.554 11:18:50 compress_compdev -- 
common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:36:43.554 11:18:50 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:36:43.554 11:18:50 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:36:43.554 11:18:50 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:36:43.554 11:18:50 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:36:43.554 11:18:50 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:43.813 11:18:50 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:36:44.072 [ 00:36:44.072 { 00:36:44.072 "name": "4ebfc957-dbd8-4b1e-83e9-10c3bd6f5d23", 00:36:44.072 "aliases": [ 00:36:44.072 "lvs0/lv0" 00:36:44.072 ], 00:36:44.072 "product_name": "Logical Volume", 00:36:44.072 "block_size": 512, 00:36:44.072 "num_blocks": 204800, 00:36:44.072 "uuid": "4ebfc957-dbd8-4b1e-83e9-10c3bd6f5d23", 00:36:44.072 "assigned_rate_limits": { 00:36:44.072 "rw_ios_per_sec": 0, 00:36:44.072 "rw_mbytes_per_sec": 0, 00:36:44.072 "r_mbytes_per_sec": 0, 00:36:44.072 "w_mbytes_per_sec": 0 00:36:44.072 }, 00:36:44.072 "claimed": false, 00:36:44.072 "zoned": false, 00:36:44.072 "supported_io_types": { 00:36:44.072 "read": true, 00:36:44.072 "write": true, 00:36:44.072 "unmap": true, 00:36:44.072 "flush": false, 00:36:44.072 "reset": true, 00:36:44.072 "nvme_admin": false, 00:36:44.072 "nvme_io": false, 00:36:44.072 "nvme_io_md": false, 00:36:44.072 "write_zeroes": true, 00:36:44.072 "zcopy": false, 00:36:44.072 "get_zone_info": false, 00:36:44.072 "zone_management": false, 00:36:44.072 "zone_append": false, 00:36:44.072 "compare": false, 00:36:44.072 "compare_and_write": false, 00:36:44.072 "abort": false, 00:36:44.072 "seek_hole": true, 00:36:44.072 "seek_data": true, 00:36:44.072 "copy": false, 
00:36:44.072 "nvme_iov_md": false 00:36:44.072 }, 00:36:44.072 "driver_specific": { 00:36:44.072 "lvol": { 00:36:44.072 "lvol_store_uuid": "43556cfe-a171-4dbb-8e10-d440065f4c16", 00:36:44.072 "base_bdev": "Nvme0n1", 00:36:44.072 "thin_provision": true, 00:36:44.072 "num_allocated_clusters": 0, 00:36:44.072 "snapshot": false, 00:36:44.072 "clone": false, 00:36:44.072 "esnap_clone": false 00:36:44.072 } 00:36:44.072 } 00:36:44.072 } 00:36:44.072 ] 00:36:44.072 11:18:51 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:36:44.072 11:18:51 compress_compdev -- compress/compress.sh@41 -- # '[' -z '' ']' 00:36:44.072 11:18:51 compress_compdev -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:36:44.331 [2024-07-25 11:18:51.279243] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:36:44.331 COMP_lvs0/lv0 00:36:44.331 11:18:51 compress_compdev -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:36:44.331 11:18:51 compress_compdev -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:36:44.331 11:18:51 compress_compdev -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:36:44.331 11:18:51 compress_compdev -- common/autotest_common.sh@901 -- # local i 00:36:44.331 11:18:51 compress_compdev -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:36:44.331 11:18:51 compress_compdev -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:36:44.331 11:18:51 compress_compdev -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:36:44.590 11:18:51 compress_compdev -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:36:44.849 [ 00:36:44.849 { 00:36:44.849 "name": "COMP_lvs0/lv0", 00:36:44.849 "aliases": [ 00:36:44.849 
"50fbf1ef-24bc-5004-b2b2-4fa56e0c9852" 00:36:44.849 ], 00:36:44.849 "product_name": "compress", 00:36:44.849 "block_size": 512, 00:36:44.849 "num_blocks": 200704, 00:36:44.849 "uuid": "50fbf1ef-24bc-5004-b2b2-4fa56e0c9852", 00:36:44.849 "assigned_rate_limits": { 00:36:44.849 "rw_ios_per_sec": 0, 00:36:44.849 "rw_mbytes_per_sec": 0, 00:36:44.849 "r_mbytes_per_sec": 0, 00:36:44.849 "w_mbytes_per_sec": 0 00:36:44.849 }, 00:36:44.849 "claimed": false, 00:36:44.849 "zoned": false, 00:36:44.849 "supported_io_types": { 00:36:44.849 "read": true, 00:36:44.849 "write": true, 00:36:44.849 "unmap": false, 00:36:44.849 "flush": false, 00:36:44.849 "reset": false, 00:36:44.849 "nvme_admin": false, 00:36:44.849 "nvme_io": false, 00:36:44.849 "nvme_io_md": false, 00:36:44.849 "write_zeroes": true, 00:36:44.849 "zcopy": false, 00:36:44.849 "get_zone_info": false, 00:36:44.849 "zone_management": false, 00:36:44.849 "zone_append": false, 00:36:44.849 "compare": false, 00:36:44.849 "compare_and_write": false, 00:36:44.849 "abort": false, 00:36:44.849 "seek_hole": false, 00:36:44.849 "seek_data": false, 00:36:44.849 "copy": false, 00:36:44.849 "nvme_iov_md": false 00:36:44.849 }, 00:36:44.849 "driver_specific": { 00:36:44.849 "compress": { 00:36:44.849 "name": "COMP_lvs0/lv0", 00:36:44.849 "base_bdev_name": "4ebfc957-dbd8-4b1e-83e9-10c3bd6f5d23", 00:36:44.849 "pm_path": "/tmp/pmem/6b83fa2f-27f6-4e29-acb8-c00518f0f635" 00:36:44.849 } 00:36:44.849 } 00:36:44.849 } 00:36:44.849 ] 00:36:44.849 11:18:51 compress_compdev -- common/autotest_common.sh@907 -- # return 0 00:36:44.849 11:18:51 compress_compdev -- compress/compress.sh@103 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:45.109 11:18:51 compress_compdev -- compress/compress.sh@104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 COMP_lvs0/lv0 00:36:45.109 11:18:52 
compress_compdev -- compress/compress.sh@105 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:45.368 [2024-07-25 11:18:52.408276] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:45.368 11:18:52 compress_compdev -- compress/compress.sh@108 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 64 -s 512 -w randrw -t 30 -c 0x18 -M 50 00:36:45.368 11:18:52 compress_compdev -- compress/compress.sh@109 -- # perf_pid=3796536 00:36:45.368 11:18:52 compress_compdev -- compress/compress.sh@112 -- # trap 'killprocess $perf_pid; compress_err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:36:45.368 11:18:52 compress_compdev -- compress/compress.sh@113 -- # wait 3796536 00:36:45.627 [2024-07-25 11:18:52.725370] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:37:17.712 Initializing NVMe Controllers 00:37:17.712 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:17.712 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:37:17.712 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:37:17.712 Initialization complete. Launching workers. 
00:37:17.712 ======================================================== 00:37:17.712 Latency(us) 00:37:17.712 Device Information : IOPS MiB/s Average min max 00:37:17.712 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 4484.93 17.52 14272.43 1883.54 40370.46 00:37:17.712 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 2806.27 10.96 22810.35 4357.85 41267.63 00:37:17.712 ======================================================== 00:37:17.712 Total : 7291.20 28.48 17558.54 1883.54 41267.63 00:37:17.712 00:37:17.712 11:19:22 compress_compdev -- compress/compress.sh@114 -- # destroy_vols 00:37:17.712 11:19:22 compress_compdev -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:37:17.712 11:19:23 compress_compdev -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:37:17.712 11:19:23 compress_compdev -- compress/compress.sh@116 -- # trap - SIGINT SIGTERM EXIT 00:37:17.712 11:19:23 compress_compdev -- compress/compress.sh@117 -- # nvmftestfini 00:37:17.712 11:19:23 compress_compdev -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:17.712 11:19:23 compress_compdev -- nvmf/common.sh@117 -- # sync 00:37:17.712 11:19:23 compress_compdev -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:17.712 11:19:23 compress_compdev -- nvmf/common.sh@120 -- # set +e 00:37:17.713 11:19:23 compress_compdev -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:17.713 11:19:23 compress_compdev -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:17.713 rmmod nvme_tcp 00:37:17.713 rmmod nvme_fabrics 00:37:17.713 rmmod nvme_keyring 00:37:17.713 11:19:23 compress_compdev -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:17.713 11:19:23 compress_compdev -- nvmf/common.sh@124 -- # set -e 00:37:17.713 11:19:23 compress_compdev -- nvmf/common.sh@125 -- # return 0 00:37:17.713 11:19:23 
compress_compdev -- nvmf/common.sh@489 -- # '[' -n 3795175 ']' 00:37:17.713 11:19:23 compress_compdev -- nvmf/common.sh@490 -- # killprocess 3795175 00:37:17.713 11:19:23 compress_compdev -- common/autotest_common.sh@950 -- # '[' -z 3795175 ']' 00:37:17.713 11:19:23 compress_compdev -- common/autotest_common.sh@954 -- # kill -0 3795175 00:37:17.713 11:19:23 compress_compdev -- common/autotest_common.sh@955 -- # uname 00:37:17.713 11:19:23 compress_compdev -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:17.713 11:19:23 compress_compdev -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3795175 00:37:17.713 11:19:23 compress_compdev -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:37:17.713 11:19:23 compress_compdev -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:37:17.713 11:19:23 compress_compdev -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3795175' 00:37:17.713 killing process with pid 3795175 00:37:17.713 11:19:23 compress_compdev -- common/autotest_common.sh@969 -- # kill 3795175 00:37:17.713 11:19:23 compress_compdev -- common/autotest_common.sh@974 -- # wait 3795175 00:37:20.249 11:19:27 compress_compdev -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:37:20.249 11:19:27 compress_compdev -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:20.249 11:19:27 compress_compdev -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:20.249 11:19:27 compress_compdev -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:20.249 11:19:27 compress_compdev -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:20.249 11:19:27 compress_compdev -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:20.249 11:19:27 compress_compdev -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:20.249 11:19:27 compress_compdev -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:20.249 11:19:27 compress_compdev -- nvmf/common.sh@279 -- # ip 
-4 addr flush nvmf_init_if 00:37:20.249 11:19:27 compress_compdev -- compress/compress.sh@120 -- # rm -rf /tmp/pmem 00:37:20.249 00:37:20.249 real 2m29.397s 00:37:20.249 user 6m35.648s 00:37:20.249 sys 0m22.386s 00:37:20.249 11:19:27 compress_compdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:37:20.249 11:19:27 compress_compdev -- common/autotest_common.sh@10 -- # set +x 00:37:20.249 ************************************ 00:37:20.249 END TEST compress_compdev 00:37:20.249 ************************************ 00:37:20.249 11:19:27 -- spdk/autotest.sh@353 -- # run_test compress_isal /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh isal 00:37:20.249 11:19:27 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:37:20.249 11:19:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:37:20.249 11:19:27 -- common/autotest_common.sh@10 -- # set +x 00:37:20.249 ************************************ 00:37:20.249 START TEST compress_isal 00:37:20.249 ************************************ 00:37:20.249 11:19:27 compress_isal -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh isal 00:37:20.509 * Looking for test storage... 
00:37:20.509 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress 00:37:20.509 11:19:27 compress_isal -- compress/compress.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:37:20.509 11:19:27 compress_isal -- nvmf/common.sh@7 -- # uname -s 00:37:20.509 11:19:27 compress_isal -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:20.509 11:19:27 compress_isal -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:20.509 11:19:27 compress_isal -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:20.509 11:19:27 compress_isal -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:20.509 11:19:27 compress_isal -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:20.509 11:19:27 compress_isal -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:20.509 11:19:27 compress_isal -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:20.509 11:19:27 compress_isal -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:20.509 11:19:27 compress_isal -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:20.509 11:19:27 compress_isal -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:20.509 11:19:27 compress_isal -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:37:20.509 11:19:27 compress_isal -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:37:20.509 11:19:27 compress_isal -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:20.509 11:19:27 compress_isal -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:20.509 11:19:27 compress_isal -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:37:20.509 11:19:27 compress_isal -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:20.509 11:19:27 compress_isal -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:37:20.509 11:19:27 compress_isal -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:20.509 11:19:27 compress_isal -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:20.509 11:19:27 compress_isal -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:20.510 11:19:27 compress_isal -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.510 11:19:27 compress_isal -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.510 11:19:27 compress_isal -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.510 11:19:27 compress_isal -- paths/export.sh@5 -- # export PATH 00:37:20.510 11:19:27 compress_isal -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:20.510 11:19:27 compress_isal -- nvmf/common.sh@47 -- # : 0 00:37:20.510 11:19:27 compress_isal -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:20.510 11:19:27 compress_isal -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:20.510 11:19:27 compress_isal -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:20.510 11:19:27 compress_isal -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:20.510 11:19:27 compress_isal -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:20.510 11:19:27 compress_isal -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:20.510 11:19:27 compress_isal -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:20.510 11:19:27 compress_isal -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:20.510 11:19:27 compress_isal -- compress/compress.sh@17 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:37:20.510 11:19:27 compress_isal -- compress/compress.sh@81 -- # mkdir -p /tmp/pmem 00:37:20.510 11:19:27 compress_isal -- compress/compress.sh@82 -- # test_type=isal 00:37:20.510 11:19:27 compress_isal -- compress/compress.sh@86 -- # run_bdevperf 32 4096 3 00:37:20.510 11:19:27 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:37:20.510 11:19:27 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=3802294 00:37:20.510 11:19:27 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:20.510 11:19:27 compress_isal -- 
compress/compress.sh@73 -- # waitforlisten 3802294 00:37:20.510 11:19:27 compress_isal -- common/autotest_common.sh@831 -- # '[' -z 3802294 ']' 00:37:20.510 11:19:27 compress_isal -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:20.510 11:19:27 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:37:20.510 11:19:27 compress_isal -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:20.510 11:19:27 compress_isal -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:20.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:20.510 11:19:27 compress_isal -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:20.510 11:19:27 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:37:20.510 [2024-07-25 11:19:27.616092] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:37:20.510 [2024-07-25 11:19:27.616241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3802294 ] 00:37:20.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.770 EAL: Requested device 0000:3d:01.0 cannot be used 00:37:20.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.770 EAL: Requested device 0000:3d:01.1 cannot be used 00:37:20.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.770 EAL: Requested device 0000:3d:01.2 cannot be used 00:37:20.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.770 EAL: Requested device 0000:3d:01.3 cannot be used 00:37:20.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.770 EAL: Requested device 0000:3d:01.4 cannot be used 00:37:20.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.770 EAL: Requested device 0000:3d:01.5 cannot be used 00:37:20.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.770 EAL: Requested device 0000:3d:01.6 cannot be used 00:37:20.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.770 EAL: Requested device 0000:3d:01.7 cannot be used 00:37:20.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.770 EAL: Requested device 0000:3d:02.0 cannot be used 00:37:20.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.770 EAL: Requested device 0000:3d:02.1 cannot be used 00:37:20.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.770 EAL: Requested device 0000:3d:02.2 cannot be used 00:37:20.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.770 EAL: Requested device 0000:3d:02.3 cannot be used 
00:37:20.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.770 EAL: Requested device 0000:3d:02.4 cannot be used 00:37:20.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.770 EAL: Requested device 0000:3d:02.5 cannot be used 00:37:20.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.770 EAL: Requested device 0000:3d:02.6 cannot be used 00:37:20.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.770 EAL: Requested device 0000:3d:02.7 cannot be used 00:37:20.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.770 EAL: Requested device 0000:3f:01.0 cannot be used 00:37:20.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.770 EAL: Requested device 0000:3f:01.1 cannot be used 00:37:20.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.770 EAL: Requested device 0000:3f:01.2 cannot be used 00:37:20.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.770 EAL: Requested device 0000:3f:01.3 cannot be used 00:37:20.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.770 EAL: Requested device 0000:3f:01.4 cannot be used 00:37:20.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.770 EAL: Requested device 0000:3f:01.5 cannot be used 00:37:20.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.770 EAL: Requested device 0000:3f:01.6 cannot be used 00:37:20.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.770 EAL: Requested device 0000:3f:01.7 cannot be used 00:37:20.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.770 EAL: Requested device 0000:3f:02.0 cannot be used 00:37:20.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.770 EAL: Requested device 0000:3f:02.1 cannot be used 00:37:20.770 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.770 EAL: Requested device 0000:3f:02.2 cannot be used 00:37:20.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.770 EAL: Requested device 0000:3f:02.3 cannot be used 00:37:20.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.770 EAL: Requested device 0000:3f:02.4 cannot be used 00:37:20.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.770 EAL: Requested device 0000:3f:02.5 cannot be used 00:37:20.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.770 EAL: Requested device 0000:3f:02.6 cannot be used 00:37:20.770 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:20.770 EAL: Requested device 0000:3f:02.7 cannot be used 00:37:20.770 [2024-07-25 11:19:27.831705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:21.029 [2024-07-25 11:19:28.095752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:21.029 [2024-07-25 11:19:28.095754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:37:21.669 11:19:28 compress_isal -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:37:21.669 11:19:28 compress_isal -- common/autotest_common.sh@864 -- # return 0 00:37:21.669 11:19:28 compress_isal -- compress/compress.sh@74 -- # create_vols 00:37:21.669 11:19:28 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:37:21.669 11:19:28 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:37:24.957 11:19:31 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:37:24.957 11:19:31 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:37:24.957 11:19:31 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:24.957 11:19:31 
compress_isal -- common/autotest_common.sh@901 -- # local i 00:37:24.957 11:19:31 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:24.957 11:19:31 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:24.957 11:19:31 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:24.957 11:19:31 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:37:25.216 [ 00:37:25.216 { 00:37:25.216 "name": "Nvme0n1", 00:37:25.216 "aliases": [ 00:37:25.216 "b805ef05-cbce-472e-93e2-dc8c2ba481cf" 00:37:25.216 ], 00:37:25.216 "product_name": "NVMe disk", 00:37:25.216 "block_size": 512, 00:37:25.216 "num_blocks": 3907029168, 00:37:25.216 "uuid": "b805ef05-cbce-472e-93e2-dc8c2ba481cf", 00:37:25.216 "assigned_rate_limits": { 00:37:25.216 "rw_ios_per_sec": 0, 00:37:25.216 "rw_mbytes_per_sec": 0, 00:37:25.216 "r_mbytes_per_sec": 0, 00:37:25.216 "w_mbytes_per_sec": 0 00:37:25.216 }, 00:37:25.216 "claimed": false, 00:37:25.216 "zoned": false, 00:37:25.216 "supported_io_types": { 00:37:25.216 "read": true, 00:37:25.216 "write": true, 00:37:25.216 "unmap": true, 00:37:25.216 "flush": true, 00:37:25.216 "reset": true, 00:37:25.216 "nvme_admin": true, 00:37:25.216 "nvme_io": true, 00:37:25.216 "nvme_io_md": false, 00:37:25.216 "write_zeroes": true, 00:37:25.216 "zcopy": false, 00:37:25.216 "get_zone_info": false, 00:37:25.216 "zone_management": false, 00:37:25.216 "zone_append": false, 00:37:25.216 "compare": false, 00:37:25.216 "compare_and_write": false, 00:37:25.216 "abort": true, 00:37:25.216 "seek_hole": false, 00:37:25.216 "seek_data": false, 00:37:25.216 "copy": false, 00:37:25.216 "nvme_iov_md": false 00:37:25.216 }, 00:37:25.216 "driver_specific": { 00:37:25.216 "nvme": [ 00:37:25.216 { 00:37:25.216 "pci_address": "0000:d8:00.0", 00:37:25.216 "trid": { 00:37:25.216 
"trtype": "PCIe", 00:37:25.216 "traddr": "0000:d8:00.0" 00:37:25.216 }, 00:37:25.216 "ctrlr_data": { 00:37:25.216 "cntlid": 0, 00:37:25.216 "vendor_id": "0x8086", 00:37:25.216 "model_number": "INTEL SSDPE2KX020T8", 00:37:25.216 "serial_number": "BTLJ125505KA2P0BGN", 00:37:25.216 "firmware_revision": "VDV10170", 00:37:25.216 "oacs": { 00:37:25.216 "security": 0, 00:37:25.216 "format": 1, 00:37:25.216 "firmware": 1, 00:37:25.216 "ns_manage": 1 00:37:25.216 }, 00:37:25.216 "multi_ctrlr": false, 00:37:25.216 "ana_reporting": false 00:37:25.216 }, 00:37:25.216 "vs": { 00:37:25.216 "nvme_version": "1.2" 00:37:25.216 }, 00:37:25.216 "ns_data": { 00:37:25.216 "id": 1, 00:37:25.216 "can_share": false 00:37:25.216 } 00:37:25.216 } 00:37:25.216 ], 00:37:25.216 "mp_policy": "active_passive" 00:37:25.216 } 00:37:25.216 } 00:37:25.216 ] 00:37:25.216 11:19:32 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:37:25.217 11:19:32 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:37:26.592 27d5ac6d-4e51-4602-b9ec-4cb93acc356c 00:37:26.592 11:19:33 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:37:26.592 bb54854c-5dd1-4fb6-be2b-7a38ce5a5f0f 00:37:26.592 11:19:33 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:37:26.592 11:19:33 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:37:26.592 11:19:33 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:26.592 11:19:33 compress_isal -- common/autotest_common.sh@901 -- # local i 00:37:26.592 11:19:33 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:26.592 11:19:33 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:26.592 11:19:33 compress_isal -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:26.851 11:19:33 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:37:27.111 [ 00:37:27.111 { 00:37:27.111 "name": "bb54854c-5dd1-4fb6-be2b-7a38ce5a5f0f", 00:37:27.111 "aliases": [ 00:37:27.111 "lvs0/lv0" 00:37:27.111 ], 00:37:27.111 "product_name": "Logical Volume", 00:37:27.111 "block_size": 512, 00:37:27.111 "num_blocks": 204800, 00:37:27.111 "uuid": "bb54854c-5dd1-4fb6-be2b-7a38ce5a5f0f", 00:37:27.111 "assigned_rate_limits": { 00:37:27.111 "rw_ios_per_sec": 0, 00:37:27.111 "rw_mbytes_per_sec": 0, 00:37:27.111 "r_mbytes_per_sec": 0, 00:37:27.111 "w_mbytes_per_sec": 0 00:37:27.111 }, 00:37:27.111 "claimed": false, 00:37:27.111 "zoned": false, 00:37:27.111 "supported_io_types": { 00:37:27.111 "read": true, 00:37:27.111 "write": true, 00:37:27.111 "unmap": true, 00:37:27.111 "flush": false, 00:37:27.111 "reset": true, 00:37:27.111 "nvme_admin": false, 00:37:27.111 "nvme_io": false, 00:37:27.111 "nvme_io_md": false, 00:37:27.111 "write_zeroes": true, 00:37:27.111 "zcopy": false, 00:37:27.111 "get_zone_info": false, 00:37:27.111 "zone_management": false, 00:37:27.111 "zone_append": false, 00:37:27.111 "compare": false, 00:37:27.111 "compare_and_write": false, 00:37:27.111 "abort": false, 00:37:27.111 "seek_hole": true, 00:37:27.111 "seek_data": true, 00:37:27.111 "copy": false, 00:37:27.111 "nvme_iov_md": false 00:37:27.111 }, 00:37:27.111 "driver_specific": { 00:37:27.111 "lvol": { 00:37:27.111 "lvol_store_uuid": "27d5ac6d-4e51-4602-b9ec-4cb93acc356c", 00:37:27.111 "base_bdev": "Nvme0n1", 00:37:27.111 "thin_provision": true, 00:37:27.111 "num_allocated_clusters": 0, 00:37:27.111 "snapshot": false, 00:37:27.111 "clone": false, 00:37:27.111 "esnap_clone": false 00:37:27.111 } 00:37:27.112 } 00:37:27.112 } 00:37:27.112 ] 00:37:27.112 11:19:34 compress_isal -- 
common/autotest_common.sh@907 -- # return 0 00:37:27.112 11:19:34 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:37:27.112 11:19:34 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:37:27.370 [2024-07-25 11:19:34.330849] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:37:27.370 COMP_lvs0/lv0 00:37:27.370 11:19:34 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:37:27.370 11:19:34 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:37:27.370 11:19:34 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:27.370 11:19:34 compress_isal -- common/autotest_common.sh@901 -- # local i 00:37:27.370 11:19:34 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:27.370 11:19:34 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:27.370 11:19:34 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:27.629 11:19:34 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:37:27.888 [ 00:37:27.888 { 00:37:27.888 "name": "COMP_lvs0/lv0", 00:37:27.888 "aliases": [ 00:37:27.888 "50ed4788-afbc-5535-b687-4d7781c0dffd" 00:37:27.888 ], 00:37:27.888 "product_name": "compress", 00:37:27.888 "block_size": 512, 00:37:27.888 "num_blocks": 200704, 00:37:27.888 "uuid": "50ed4788-afbc-5535-b687-4d7781c0dffd", 00:37:27.888 "assigned_rate_limits": { 00:37:27.888 "rw_ios_per_sec": 0, 00:37:27.888 "rw_mbytes_per_sec": 0, 00:37:27.888 "r_mbytes_per_sec": 0, 00:37:27.888 "w_mbytes_per_sec": 0 00:37:27.888 }, 00:37:27.888 "claimed": false, 00:37:27.888 "zoned": false, 00:37:27.888 "supported_io_types": { 
00:37:27.888 "read": true, 00:37:27.888 "write": true, 00:37:27.888 "unmap": false, 00:37:27.888 "flush": false, 00:37:27.888 "reset": false, 00:37:27.888 "nvme_admin": false, 00:37:27.888 "nvme_io": false, 00:37:27.888 "nvme_io_md": false, 00:37:27.888 "write_zeroes": true, 00:37:27.888 "zcopy": false, 00:37:27.888 "get_zone_info": false, 00:37:27.888 "zone_management": false, 00:37:27.888 "zone_append": false, 00:37:27.888 "compare": false, 00:37:27.888 "compare_and_write": false, 00:37:27.888 "abort": false, 00:37:27.888 "seek_hole": false, 00:37:27.888 "seek_data": false, 00:37:27.888 "copy": false, 00:37:27.888 "nvme_iov_md": false 00:37:27.888 }, 00:37:27.888 "driver_specific": { 00:37:27.888 "compress": { 00:37:27.888 "name": "COMP_lvs0/lv0", 00:37:27.888 "base_bdev_name": "bb54854c-5dd1-4fb6-be2b-7a38ce5a5f0f", 00:37:27.888 "pm_path": "/tmp/pmem/6dc18dbe-f6c7-4944-9d49-6a47e56218ae" 00:37:27.888 } 00:37:27.888 } 00:37:27.888 } 00:37:27.888 ] 00:37:27.888 11:19:34 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:37:27.888 11:19:34 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:37:27.888 Running I/O for 3 seconds... 
00:37:31.178 00:37:31.178 Latency(us) 00:37:31.178 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:31.178 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:37:31.178 Verification LBA range: start 0x0 length 0x3100 00:37:31.178 COMP_lvs0/lv0 : 3.01 3271.74 12.78 0.00 0.00 9722.73 58.98 14155.78 00:37:31.178 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:37:31.178 Verification LBA range: start 0x3100 length 0x3100 00:37:31.178 COMP_lvs0/lv0 : 3.01 3265.67 12.76 0.00 0.00 9748.58 60.62 14784.92 00:37:31.178 =================================================================================================================== 00:37:31.178 Total : 6537.41 25.54 0.00 0.00 9735.64 58.98 14784.92 00:37:31.178 0 00:37:31.178 11:19:37 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:37:31.178 11:19:37 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:37:31.178 11:19:38 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:37:31.437 11:19:38 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:37:31.438 11:19:38 compress_isal -- compress/compress.sh@78 -- # killprocess 3802294 00:37:31.438 11:19:38 compress_isal -- common/autotest_common.sh@950 -- # '[' -z 3802294 ']' 00:37:31.438 11:19:38 compress_isal -- common/autotest_common.sh@954 -- # kill -0 3802294 00:37:31.438 11:19:38 compress_isal -- common/autotest_common.sh@955 -- # uname 00:37:31.438 11:19:38 compress_isal -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:37:31.438 11:19:38 compress_isal -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3802294 00:37:31.438 11:19:38 compress_isal -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:37:31.438 11:19:38 compress_isal 
-- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:37:31.438 11:19:38 compress_isal -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3802294' 00:37:31.438 killing process with pid 3802294 00:37:31.438 11:19:38 compress_isal -- common/autotest_common.sh@969 -- # kill 3802294 00:37:31.438 Received shutdown signal, test time was about 3.000000 seconds 00:37:31.438 00:37:31.438 Latency(us) 00:37:31.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:31.438 =================================================================================================================== 00:37:31.438 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:31.438 11:19:38 compress_isal -- common/autotest_common.sh@974 -- # wait 3802294 00:37:35.632 11:19:42 compress_isal -- compress/compress.sh@87 -- # run_bdevperf 32 4096 3 512 00:37:35.632 11:19:42 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:37:35.632 11:19:42 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=3804760 00:37:35.632 11:19:42 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:35.632 11:19:42 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 00:37:35.632 11:19:42 compress_isal -- compress/compress.sh@73 -- # waitforlisten 3804760 00:37:35.632 11:19:42 compress_isal -- common/autotest_common.sh@831 -- # '[' -z 3804760 ']' 00:37:35.632 11:19:42 compress_isal -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:35.632 11:19:42 compress_isal -- common/autotest_common.sh@836 -- # local max_retries=100 00:37:35.632 11:19:42 compress_isal -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:35.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:35.632 11:19:42 compress_isal -- common/autotest_common.sh@840 -- # xtrace_disable 00:37:35.632 11:19:42 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:37:35.632 [2024-07-25 11:19:42.551028] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:37:35.632 [2024-07-25 11:19:42.551154] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3804760 ] 00:37:35.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:35.632 EAL: Requested device 0000:3d:01.0 cannot be used 00:37:35.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:35.632 EAL: Requested device 0000:3d:01.1 cannot be used 00:37:35.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:35.632 EAL: Requested device 0000:3d:01.2 cannot be used 00:37:35.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:35.632 EAL: Requested device 0000:3d:01.3 cannot be used 00:37:35.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:35.632 EAL: Requested device 0000:3d:01.4 cannot be used 00:37:35.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:35.632 EAL: Requested device 0000:3d:01.5 cannot be used 00:37:35.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:35.632 EAL: Requested device 0000:3d:01.6 cannot be used 00:37:35.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:35.632 EAL: Requested device 0000:3d:01.7 cannot be used 00:37:35.632 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:37:35.632 EAL: Requested device 0000:3d:02.0 cannot be used 00:37:35.632 qat_pci_device_allocate(): 
Reached maximum number of QAT devices
00:37:35.632 EAL: Requested device 0000:3d:02.1 cannot be used
00:37:35.632 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:35.632 EAL: Requested device 0000:3d:02.2 cannot be used
00:37:35.632 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:35.632 EAL: Requested device 0000:3d:02.3 cannot be used
00:37:35.632 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:35.632 EAL: Requested device 0000:3d:02.4 cannot be used
00:37:35.632 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:35.632 EAL: Requested device 0000:3d:02.5 cannot be used
00:37:35.632 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:35.632 EAL: Requested device 0000:3d:02.6 cannot be used
00:37:35.632 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:35.632 EAL: Requested device 0000:3d:02.7 cannot be used
00:37:35.632 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:35.632 EAL: Requested device 0000:3f:01.0 cannot be used
00:37:35.632 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:35.632 EAL: Requested device 0000:3f:01.1 cannot be used
00:37:35.632 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:35.632 EAL: Requested device 0000:3f:01.2 cannot be used
00:37:35.632 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:35.632 EAL: Requested device 0000:3f:01.3 cannot be used
00:37:35.632 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:35.632 EAL: Requested device 0000:3f:01.4 cannot be used
00:37:35.632 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:35.632 EAL: Requested device 0000:3f:01.5 cannot be used
00:37:35.632 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:35.632 EAL: Requested device 0000:3f:01.6 cannot be used
00:37:35.632 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:35.632 EAL: Requested device 0000:3f:01.7 cannot be used
00:37:35.632 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:35.632 EAL: Requested device 0000:3f:02.0 cannot be used
00:37:35.632 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:35.632 EAL: Requested device 0000:3f:02.1 cannot be used
00:37:35.632 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:35.632 EAL: Requested device 0000:3f:02.2 cannot be used
00:37:35.632 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:35.632 EAL: Requested device 0000:3f:02.3 cannot be used
00:37:35.632 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:35.632 EAL: Requested device 0000:3f:02.4 cannot be used
00:37:35.632 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:35.632 EAL: Requested device 0000:3f:02.5 cannot be used
00:37:35.632 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:35.632 EAL: Requested device 0000:3f:02.6 cannot be used
00:37:35.632 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:35.632 EAL: Requested device 0000:3f:02.7 cannot be used
00:37:35.891 [2024-07-25 11:19:42.763887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:37:36.150 [2024-07-25 11:19:43.039110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:37:36.150 [2024-07-25 11:19:43.039115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:37:36.411 11:19:43 compress_isal -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:37:36.411 11:19:43 compress_isal -- common/autotest_common.sh@864 -- # return 0
00:37:36.411 11:19:43 compress_isal -- compress/compress.sh@74 -- # create_vols 512
00:37:36.411 11:19:43 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh
00:37:36.411 11:19:43 compress_isal -- compress/compress.sh@34 -- #
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:37:39.701 11:19:46 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:37:39.701 11:19:46 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:37:39.701 11:19:46 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:39.701 11:19:46 compress_isal -- common/autotest_common.sh@901 -- # local i 00:37:39.701 11:19:46 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:39.701 11:19:46 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:39.701 11:19:46 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:39.960 11:19:46 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:37:40.219 [ 00:37:40.219 { 00:37:40.219 "name": "Nvme0n1", 00:37:40.219 "aliases": [ 00:37:40.219 "cf1629d5-5ac2-4c00-bcc3-c7c5b076137f" 00:37:40.219 ], 00:37:40.219 "product_name": "NVMe disk", 00:37:40.219 "block_size": 512, 00:37:40.219 "num_blocks": 3907029168, 00:37:40.219 "uuid": "cf1629d5-5ac2-4c00-bcc3-c7c5b076137f", 00:37:40.219 "assigned_rate_limits": { 00:37:40.219 "rw_ios_per_sec": 0, 00:37:40.219 "rw_mbytes_per_sec": 0, 00:37:40.219 "r_mbytes_per_sec": 0, 00:37:40.219 "w_mbytes_per_sec": 0 00:37:40.219 }, 00:37:40.219 "claimed": false, 00:37:40.219 "zoned": false, 00:37:40.219 "supported_io_types": { 00:37:40.219 "read": true, 00:37:40.219 "write": true, 00:37:40.219 "unmap": true, 00:37:40.219 "flush": true, 00:37:40.219 "reset": true, 00:37:40.219 "nvme_admin": true, 00:37:40.219 "nvme_io": true, 00:37:40.219 "nvme_io_md": false, 00:37:40.219 "write_zeroes": true, 00:37:40.219 "zcopy": false, 00:37:40.219 "get_zone_info": false, 00:37:40.219 "zone_management": false, 00:37:40.219 "zone_append": false, 
00:37:40.219 "compare": false, 00:37:40.219 "compare_and_write": false, 00:37:40.219 "abort": true, 00:37:40.219 "seek_hole": false, 00:37:40.219 "seek_data": false, 00:37:40.219 "copy": false, 00:37:40.219 "nvme_iov_md": false 00:37:40.219 }, 00:37:40.219 "driver_specific": { 00:37:40.219 "nvme": [ 00:37:40.219 { 00:37:40.219 "pci_address": "0000:d8:00.0", 00:37:40.219 "trid": { 00:37:40.219 "trtype": "PCIe", 00:37:40.219 "traddr": "0000:d8:00.0" 00:37:40.219 }, 00:37:40.219 "ctrlr_data": { 00:37:40.219 "cntlid": 0, 00:37:40.219 "vendor_id": "0x8086", 00:37:40.219 "model_number": "INTEL SSDPE2KX020T8", 00:37:40.219 "serial_number": "BTLJ125505KA2P0BGN", 00:37:40.219 "firmware_revision": "VDV10170", 00:37:40.219 "oacs": { 00:37:40.219 "security": 0, 00:37:40.219 "format": 1, 00:37:40.219 "firmware": 1, 00:37:40.219 "ns_manage": 1 00:37:40.219 }, 00:37:40.219 "multi_ctrlr": false, 00:37:40.219 "ana_reporting": false 00:37:40.219 }, 00:37:40.219 "vs": { 00:37:40.219 "nvme_version": "1.2" 00:37:40.219 }, 00:37:40.219 "ns_data": { 00:37:40.219 "id": 1, 00:37:40.219 "can_share": false 00:37:40.219 } 00:37:40.219 } 00:37:40.219 ], 00:37:40.219 "mp_policy": "active_passive" 00:37:40.219 } 00:37:40.219 } 00:37:40.219 ] 00:37:40.219 11:19:47 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:37:40.219 11:19:47 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:37:41.613 510fa54e-1de7-4271-aff7-4368db5ce412 00:37:41.613 11:19:48 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:37:41.898 13bafa94-6f95-404d-8e18-8e44b75c1edf 00:37:41.898 11:19:48 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:37:41.898 11:19:48 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:37:41.898 11:19:48 compress_isal -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:41.898 11:19:48 compress_isal -- common/autotest_common.sh@901 -- # local i 00:37:41.898 11:19:48 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:41.898 11:19:48 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:41.898 11:19:48 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:42.158 11:19:49 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:37:42.417 [ 00:37:42.417 { 00:37:42.417 "name": "13bafa94-6f95-404d-8e18-8e44b75c1edf", 00:37:42.417 "aliases": [ 00:37:42.417 "lvs0/lv0" 00:37:42.417 ], 00:37:42.417 "product_name": "Logical Volume", 00:37:42.417 "block_size": 512, 00:37:42.417 "num_blocks": 204800, 00:37:42.417 "uuid": "13bafa94-6f95-404d-8e18-8e44b75c1edf", 00:37:42.417 "assigned_rate_limits": { 00:37:42.417 "rw_ios_per_sec": 0, 00:37:42.417 "rw_mbytes_per_sec": 0, 00:37:42.417 "r_mbytes_per_sec": 0, 00:37:42.417 "w_mbytes_per_sec": 0 00:37:42.417 }, 00:37:42.417 "claimed": false, 00:37:42.417 "zoned": false, 00:37:42.417 "supported_io_types": { 00:37:42.417 "read": true, 00:37:42.417 "write": true, 00:37:42.417 "unmap": true, 00:37:42.417 "flush": false, 00:37:42.417 "reset": true, 00:37:42.417 "nvme_admin": false, 00:37:42.417 "nvme_io": false, 00:37:42.417 "nvme_io_md": false, 00:37:42.417 "write_zeroes": true, 00:37:42.417 "zcopy": false, 00:37:42.417 "get_zone_info": false, 00:37:42.417 "zone_management": false, 00:37:42.417 "zone_append": false, 00:37:42.417 "compare": false, 00:37:42.417 "compare_and_write": false, 00:37:42.417 "abort": false, 00:37:42.417 "seek_hole": true, 00:37:42.417 "seek_data": true, 00:37:42.417 "copy": false, 00:37:42.417 "nvme_iov_md": false 00:37:42.417 }, 00:37:42.417 "driver_specific": { 00:37:42.417 "lvol": { 00:37:42.417 
"lvol_store_uuid": "510fa54e-1de7-4271-aff7-4368db5ce412", 00:37:42.417 "base_bdev": "Nvme0n1", 00:37:42.417 "thin_provision": true, 00:37:42.417 "num_allocated_clusters": 0, 00:37:42.417 "snapshot": false, 00:37:42.417 "clone": false, 00:37:42.417 "esnap_clone": false 00:37:42.417 } 00:37:42.417 } 00:37:42.417 } 00:37:42.417 ] 00:37:42.417 11:19:49 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:37:42.417 11:19:49 compress_isal -- compress/compress.sh@41 -- # '[' -z 512 ']' 00:37:42.417 11:19:49 compress_isal -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 512 00:37:42.676 [2024-07-25 11:19:49.585539] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:37:42.676 COMP_lvs0/lv0 00:37:42.676 11:19:49 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:37:42.676 11:19:49 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:37:42.676 11:19:49 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:42.676 11:19:49 compress_isal -- common/autotest_common.sh@901 -- # local i 00:37:42.676 11:19:49 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:42.676 11:19:49 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:42.676 11:19:49 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:42.936 11:19:49 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:37:42.936 [ 00:37:42.936 { 00:37:42.936 "name": "COMP_lvs0/lv0", 00:37:42.936 "aliases": [ 00:37:42.936 "a891bbb3-0dac-564f-ac1f-4c9b807272d5" 00:37:42.936 ], 00:37:42.936 "product_name": "compress", 00:37:42.936 "block_size": 512, 00:37:42.936 
"num_blocks": 200704, 00:37:42.936 "uuid": "a891bbb3-0dac-564f-ac1f-4c9b807272d5", 00:37:42.936 "assigned_rate_limits": { 00:37:42.936 "rw_ios_per_sec": 0, 00:37:42.936 "rw_mbytes_per_sec": 0, 00:37:42.936 "r_mbytes_per_sec": 0, 00:37:42.936 "w_mbytes_per_sec": 0 00:37:42.936 }, 00:37:42.936 "claimed": false, 00:37:42.936 "zoned": false, 00:37:42.936 "supported_io_types": { 00:37:42.936 "read": true, 00:37:42.936 "write": true, 00:37:42.936 "unmap": false, 00:37:42.936 "flush": false, 00:37:42.936 "reset": false, 00:37:42.936 "nvme_admin": false, 00:37:42.936 "nvme_io": false, 00:37:42.936 "nvme_io_md": false, 00:37:42.936 "write_zeroes": true, 00:37:42.936 "zcopy": false, 00:37:42.936 "get_zone_info": false, 00:37:42.936 "zone_management": false, 00:37:42.936 "zone_append": false, 00:37:42.936 "compare": false, 00:37:42.936 "compare_and_write": false, 00:37:42.936 "abort": false, 00:37:42.936 "seek_hole": false, 00:37:42.936 "seek_data": false, 00:37:42.936 "copy": false, 00:37:42.936 "nvme_iov_md": false 00:37:42.936 }, 00:37:42.936 "driver_specific": { 00:37:42.936 "compress": { 00:37:42.936 "name": "COMP_lvs0/lv0", 00:37:42.936 "base_bdev_name": "13bafa94-6f95-404d-8e18-8e44b75c1edf", 00:37:42.936 "pm_path": "/tmp/pmem/312ae85f-3846-4b3c-8a8a-af2609eb27d4" 00:37:42.936 } 00:37:42.936 } 00:37:42.936 } 00:37:42.936 ] 00:37:42.936 11:19:50 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:37:42.936 11:19:50 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:37:43.195 Running I/O for 3 seconds... 
00:37:46.481
00:37:46.481 Latency(us)
00:37:46.481 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:46.481 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096)
00:37:46.481 Verification LBA range: start 0x0 length 0x3100
00:37:46.481 COMP_lvs0/lv0 : 3.01 3227.65 12.61 0.00 0.00 9853.29 63.08 15623.78
00:37:46.481 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096)
00:37:46.481 Verification LBA range: start 0x3100 length 0x3100
00:37:46.481 COMP_lvs0/lv0 : 3.01 3257.79 12.73 0.00 0.00 9773.16 61.03 16357.79
00:37:46.481 ===================================================================================================================
00:37:46.481 Total : 6485.45 25.33 0.00 0.00 9813.04 61.03 16357.79
00:37:46.481 0
00:37:46.481 11:19:53 compress_isal -- compress/compress.sh@76 -- # destroy_vols
00:37:46.481 11:19:53 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0
00:37:46.740 11:19:53 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0
00:37:46.740 11:19:53 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT
00:37:46.740 11:19:53 compress_isal -- compress/compress.sh@78 -- # killprocess 3804760
00:37:46.740 11:19:53 compress_isal -- common/autotest_common.sh@950 -- # '[' -z 3804760 ']'
00:37:46.740 11:19:53 compress_isal -- common/autotest_common.sh@954 -- # kill -0 3804760
00:37:46.740 11:19:53 compress_isal -- common/autotest_common.sh@955 -- # uname
00:37:46.740 11:19:53 compress_isal -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:37:46.740 11:19:53 compress_isal -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3804760
00:37:46.740 11:19:53 compress_isal -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:37:46.740 11:19:53 compress_isal -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:37:46.740 11:19:53 compress_isal -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3804760'
00:37:46.740 killing process with pid 3804760
00:37:46.740 11:19:53 compress_isal -- common/autotest_common.sh@969 -- # kill 3804760
00:37:46.740 Received shutdown signal, test time was about 3.000000 seconds
00:37:46.740
00:37:46.740 Latency(us)
00:37:46.740 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:46.740 ===================================================================================================================
00:37:46.740 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:37:46.740 11:19:53 compress_isal -- common/autotest_common.sh@974 -- # wait 3804760
00:37:50.930 11:19:57 compress_isal -- compress/compress.sh@88 -- # run_bdevperf 32 4096 3 4096
00:37:50.930 11:19:57 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]]
00:37:50.930 11:19:57 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=3807162
00:37:50.930 11:19:57 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT
00:37:50.930 11:19:57 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6
00:37:50.930 11:19:57 compress_isal -- compress/compress.sh@73 -- # waitforlisten 3807162
00:37:50.930 11:19:57 compress_isal -- common/autotest_common.sh@831 -- # '[' -z 3807162 ']'
00:37:50.930 11:19:57 compress_isal -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:37:50.930 11:19:57 compress_isal -- common/autotest_common.sh@836 -- # local max_retries=100
00:37:50.930 11:19:57 compress_isal -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:37:50.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:37:50.930 11:19:57 compress_isal -- common/autotest_common.sh@840 -- # xtrace_disable
00:37:50.930 11:19:57 compress_isal -- common/autotest_common.sh@10 -- # set +x
00:37:50.930 [2024-07-25 11:19:57.771498] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:37:50.930 [2024-07-25 11:19:57.771627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3807162 ]
00:37:50.930 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:50.930 EAL: Requested device 0000:3d:01.0 cannot be used
00:37:50.930 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:50.930 EAL: Requested device 0000:3d:01.1 cannot be used
00:37:50.930 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:50.930 EAL: Requested device 0000:3d:01.2 cannot be used
00:37:50.930 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:50.930 EAL: Requested device 0000:3d:01.3 cannot be used
00:37:50.930 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:50.930 EAL: Requested device 0000:3d:01.4 cannot be used
00:37:50.930 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:50.930 EAL: Requested device 0000:3d:01.5 cannot be used
00:37:50.930 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:50.930 EAL: Requested device 0000:3d:01.6 cannot be used
00:37:50.930 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:50.930 EAL: Requested device 0000:3d:01.7 cannot be used
00:37:50.930 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:50.930 EAL: Requested device 0000:3d:02.0 cannot be used
00:37:50.930 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:50.930 EAL: Requested device 0000:3d:02.1 cannot be used
00:37:50.930 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:50.930 EAL: Requested device 0000:3d:02.2 cannot be used
00:37:50.930 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:50.930 EAL: Requested device 0000:3d:02.3 cannot be used
00:37:50.930 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:50.930 EAL: Requested device 0000:3d:02.4 cannot be used
00:37:50.930 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:50.930 EAL: Requested device 0000:3d:02.5 cannot be used
00:37:50.930 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:50.930 EAL: Requested device 0000:3d:02.6 cannot be used
00:37:50.930 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:50.930 EAL: Requested device 0000:3d:02.7 cannot be used
00:37:50.930 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:50.930 EAL: Requested device 0000:3f:01.0 cannot be used
00:37:50.930 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:50.930 EAL: Requested device 0000:3f:01.1 cannot be used
00:37:50.930 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:50.930 EAL: Requested device 0000:3f:01.2 cannot be used
00:37:50.930 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:50.930 EAL: Requested device 0000:3f:01.3 cannot be used
00:37:50.930 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:50.930 EAL: Requested device 0000:3f:01.4 cannot be used
00:37:50.930 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:50.930 EAL: Requested device 0000:3f:01.5 cannot be used
00:37:50.930 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:50.930 EAL: Requested device 0000:3f:01.6 cannot be used
00:37:50.930 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:50.930 EAL: Requested device 0000:3f:01.7 cannot be used
00:37:50.930 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:50.930 EAL: Requested device 0000:3f:02.0 cannot be used
00:37:50.930 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:50.930 EAL: Requested device 0000:3f:02.1 cannot be used
00:37:50.931 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:50.931 EAL: Requested device 0000:3f:02.2 cannot be used
00:37:50.931 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:50.931 EAL: Requested device 0000:3f:02.3 cannot be used
00:37:50.931 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:50.931 EAL: Requested device 0000:3f:02.4 cannot be used
00:37:50.931 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:50.931 EAL: Requested device 0000:3f:02.5 cannot be used
00:37:50.931 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:50.931 EAL: Requested device 0000:3f:02.6 cannot be used
00:37:50.931 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:37:50.931 EAL: Requested device 0000:3f:02.7 cannot be used
00:37:50.931 [2024-07-25 11:19:57.986655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:37:51.189 [2024-07-25 11:19:58.266408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:37:51.189 [2024-07-25 11:19:58.266410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:37:51.757 11:19:58 compress_isal -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:37:51.757 11:19:58 compress_isal -- common/autotest_common.sh@864 -- # return 0
00:37:51.757 11:19:58 compress_isal -- compress/compress.sh@74 -- # create_vols 4096
00:37:51.757 11:19:58 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh
00:37:51.757 11:19:58 compress_isal -- compress/compress.sh@34 -- #
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:37:55.039 11:20:02 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:37:55.039 11:20:02 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:37:55.039 11:20:02 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:55.039 11:20:02 compress_isal -- common/autotest_common.sh@901 -- # local i 00:37:55.039 11:20:02 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:55.039 11:20:02 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:55.039 11:20:02 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:55.298 11:20:02 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:37:55.556 [ 00:37:55.556 { 00:37:55.556 "name": "Nvme0n1", 00:37:55.556 "aliases": [ 00:37:55.556 "70b4e035-18f7-40b6-853a-59c90c612dc8" 00:37:55.556 ], 00:37:55.556 "product_name": "NVMe disk", 00:37:55.556 "block_size": 512, 00:37:55.556 "num_blocks": 3907029168, 00:37:55.556 "uuid": "70b4e035-18f7-40b6-853a-59c90c612dc8", 00:37:55.556 "assigned_rate_limits": { 00:37:55.556 "rw_ios_per_sec": 0, 00:37:55.556 "rw_mbytes_per_sec": 0, 00:37:55.556 "r_mbytes_per_sec": 0, 00:37:55.556 "w_mbytes_per_sec": 0 00:37:55.556 }, 00:37:55.556 "claimed": false, 00:37:55.556 "zoned": false, 00:37:55.556 "supported_io_types": { 00:37:55.557 "read": true, 00:37:55.557 "write": true, 00:37:55.557 "unmap": true, 00:37:55.557 "flush": true, 00:37:55.557 "reset": true, 00:37:55.557 "nvme_admin": true, 00:37:55.557 "nvme_io": true, 00:37:55.557 "nvme_io_md": false, 00:37:55.557 "write_zeroes": true, 00:37:55.557 "zcopy": false, 00:37:55.557 "get_zone_info": false, 00:37:55.557 "zone_management": false, 00:37:55.557 "zone_append": false, 
00:37:55.557 "compare": false, 00:37:55.557 "compare_and_write": false, 00:37:55.557 "abort": true, 00:37:55.557 "seek_hole": false, 00:37:55.557 "seek_data": false, 00:37:55.557 "copy": false, 00:37:55.557 "nvme_iov_md": false 00:37:55.557 }, 00:37:55.557 "driver_specific": { 00:37:55.557 "nvme": [ 00:37:55.557 { 00:37:55.557 "pci_address": "0000:d8:00.0", 00:37:55.557 "trid": { 00:37:55.557 "trtype": "PCIe", 00:37:55.557 "traddr": "0000:d8:00.0" 00:37:55.557 }, 00:37:55.557 "ctrlr_data": { 00:37:55.557 "cntlid": 0, 00:37:55.557 "vendor_id": "0x8086", 00:37:55.557 "model_number": "INTEL SSDPE2KX020T8", 00:37:55.557 "serial_number": "BTLJ125505KA2P0BGN", 00:37:55.557 "firmware_revision": "VDV10170", 00:37:55.557 "oacs": { 00:37:55.557 "security": 0, 00:37:55.557 "format": 1, 00:37:55.557 "firmware": 1, 00:37:55.557 "ns_manage": 1 00:37:55.557 }, 00:37:55.557 "multi_ctrlr": false, 00:37:55.557 "ana_reporting": false 00:37:55.557 }, 00:37:55.557 "vs": { 00:37:55.557 "nvme_version": "1.2" 00:37:55.557 }, 00:37:55.557 "ns_data": { 00:37:55.557 "id": 1, 00:37:55.557 "can_share": false 00:37:55.557 } 00:37:55.557 } 00:37:55.557 ], 00:37:55.557 "mp_policy": "active_passive" 00:37:55.557 } 00:37:55.557 } 00:37:55.557 ] 00:37:55.557 11:20:02 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:37:55.557 11:20:02 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:37:56.932 c0c702de-b516-4990-bd14-cce922e3e271 00:37:56.932 11:20:03 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:37:56.932 f97b308e-228c-4c5a-8f9c-438046eb91d6 00:37:56.932 11:20:03 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:37:56.932 11:20:03 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:37:56.932 11:20:03 compress_isal -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:56.932 11:20:03 compress_isal -- common/autotest_common.sh@901 -- # local i 00:37:56.932 11:20:03 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:56.932 11:20:03 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:56.932 11:20:03 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:57.190 11:20:04 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:37:57.449 [ 00:37:57.449 { 00:37:57.450 "name": "f97b308e-228c-4c5a-8f9c-438046eb91d6", 00:37:57.450 "aliases": [ 00:37:57.450 "lvs0/lv0" 00:37:57.450 ], 00:37:57.450 "product_name": "Logical Volume", 00:37:57.450 "block_size": 512, 00:37:57.450 "num_blocks": 204800, 00:37:57.450 "uuid": "f97b308e-228c-4c5a-8f9c-438046eb91d6", 00:37:57.450 "assigned_rate_limits": { 00:37:57.450 "rw_ios_per_sec": 0, 00:37:57.450 "rw_mbytes_per_sec": 0, 00:37:57.450 "r_mbytes_per_sec": 0, 00:37:57.450 "w_mbytes_per_sec": 0 00:37:57.450 }, 00:37:57.450 "claimed": false, 00:37:57.450 "zoned": false, 00:37:57.450 "supported_io_types": { 00:37:57.450 "read": true, 00:37:57.450 "write": true, 00:37:57.450 "unmap": true, 00:37:57.450 "flush": false, 00:37:57.450 "reset": true, 00:37:57.450 "nvme_admin": false, 00:37:57.450 "nvme_io": false, 00:37:57.450 "nvme_io_md": false, 00:37:57.450 "write_zeroes": true, 00:37:57.450 "zcopy": false, 00:37:57.450 "get_zone_info": false, 00:37:57.450 "zone_management": false, 00:37:57.450 "zone_append": false, 00:37:57.450 "compare": false, 00:37:57.450 "compare_and_write": false, 00:37:57.450 "abort": false, 00:37:57.450 "seek_hole": true, 00:37:57.450 "seek_data": true, 00:37:57.450 "copy": false, 00:37:57.450 "nvme_iov_md": false 00:37:57.450 }, 00:37:57.450 "driver_specific": { 00:37:57.450 "lvol": { 00:37:57.450 
"lvol_store_uuid": "c0c702de-b516-4990-bd14-cce922e3e271", 00:37:57.450 "base_bdev": "Nvme0n1", 00:37:57.450 "thin_provision": true, 00:37:57.450 "num_allocated_clusters": 0, 00:37:57.450 "snapshot": false, 00:37:57.450 "clone": false, 00:37:57.450 "esnap_clone": false 00:37:57.450 } 00:37:57.450 } 00:37:57.450 } 00:37:57.450 ] 00:37:57.450 11:20:04 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:37:57.450 11:20:04 compress_isal -- compress/compress.sh@41 -- # '[' -z 4096 ']' 00:37:57.450 11:20:04 compress_isal -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 4096 00:37:57.709 [2024-07-25 11:20:04.654483] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:37:57.709 COMP_lvs0/lv0 00:37:57.709 11:20:04 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:37:57.709 11:20:04 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:37:57.709 11:20:04 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:37:57.709 11:20:04 compress_isal -- common/autotest_common.sh@901 -- # local i 00:37:57.709 11:20:04 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:37:57.709 11:20:04 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:37:57.709 11:20:04 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:57.967 11:20:04 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:37:58.227 [ 00:37:58.227 { 00:37:58.227 "name": "COMP_lvs0/lv0", 00:37:58.227 "aliases": [ 00:37:58.227 "dafd1abc-4a62-5814-908b-fec587fde427" 00:37:58.227 ], 00:37:58.227 "product_name": "compress", 00:37:58.227 "block_size": 4096, 00:37:58.227 
"num_blocks": 25088, 00:37:58.227 "uuid": "dafd1abc-4a62-5814-908b-fec587fde427", 00:37:58.227 "assigned_rate_limits": { 00:37:58.227 "rw_ios_per_sec": 0, 00:37:58.227 "rw_mbytes_per_sec": 0, 00:37:58.227 "r_mbytes_per_sec": 0, 00:37:58.227 "w_mbytes_per_sec": 0 00:37:58.227 }, 00:37:58.227 "claimed": false, 00:37:58.227 "zoned": false, 00:37:58.227 "supported_io_types": { 00:37:58.227 "read": true, 00:37:58.227 "write": true, 00:37:58.227 "unmap": false, 00:37:58.227 "flush": false, 00:37:58.227 "reset": false, 00:37:58.227 "nvme_admin": false, 00:37:58.227 "nvme_io": false, 00:37:58.227 "nvme_io_md": false, 00:37:58.227 "write_zeroes": true, 00:37:58.227 "zcopy": false, 00:37:58.227 "get_zone_info": false, 00:37:58.227 "zone_management": false, 00:37:58.227 "zone_append": false, 00:37:58.227 "compare": false, 00:37:58.227 "compare_and_write": false, 00:37:58.227 "abort": false, 00:37:58.227 "seek_hole": false, 00:37:58.227 "seek_data": false, 00:37:58.227 "copy": false, 00:37:58.227 "nvme_iov_md": false 00:37:58.227 }, 00:37:58.227 "driver_specific": { 00:37:58.227 "compress": { 00:37:58.227 "name": "COMP_lvs0/lv0", 00:37:58.227 "base_bdev_name": "f97b308e-228c-4c5a-8f9c-438046eb91d6", 00:37:58.227 "pm_path": "/tmp/pmem/52201f78-ec6f-415d-83ee-f95fc6ac91d0" 00:37:58.227 } 00:37:58.227 } 00:37:58.227 } 00:37:58.227 ] 00:37:58.227 11:20:05 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:37:58.227 11:20:05 compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:37:58.227 Running I/O for 3 seconds... 
00:38:01.571
00:38:01.571 Latency(us)
00:38:01.571 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:01.571 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096)
00:38:01.571 Verification LBA range: start 0x0 length 0x3100
00:38:01.571 COMP_lvs0/lv0 : 3.01 3232.32 12.63 0.00 0.00 9837.96 63.90 16148.07
00:38:01.571 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096)
00:38:01.571 Verification LBA range: start 0x3100 length 0x3100
00:38:01.571 COMP_lvs0/lv0 : 3.01 3268.53 12.77 0.00 0.00 9735.37 63.08 15728.64
00:38:01.571 ===================================================================================================================
00:38:01.571 Total : 6500.85 25.39 0.00 0.00 9786.39 63.08 16148.07
00:38:01.571 0
00:38:01.571 11:20:08 compress_isal -- compress/compress.sh@76 -- # destroy_vols
00:38:01.571 11:20:08 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0
00:38:01.830 11:20:08 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0
00:38:01.830 11:20:08 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT
00:38:01.830 11:20:08 compress_isal -- compress/compress.sh@78 -- # killprocess 3807162
00:38:01.830 11:20:08 compress_isal -- common/autotest_common.sh@950 -- # '[' -z 3807162 ']'
00:38:01.830 11:20:08 compress_isal -- common/autotest_common.sh@954 -- # kill -0 3807162
00:38:01.830 11:20:08 compress_isal -- common/autotest_common.sh@955 -- # uname
00:38:01.830 11:20:08 compress_isal -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:38:01.830 11:20:08 compress_isal -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3807162
00:38:01.830 11:20:08 compress_isal -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:38:01.830 11:20:08 compress_isal -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:38:01.830 11:20:08 compress_isal -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3807162'
00:38:01.830 killing process with pid 3807162
00:38:01.830 11:20:08 compress_isal -- common/autotest_common.sh@969 -- # kill 3807162
00:38:01.830 Received shutdown signal, test time was about 3.000000 seconds
00:38:01.830
00:38:01.830 Latency(us)
00:38:01.830 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:01.830 ===================================================================================================================
00:38:01.830 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:38:01.831 11:20:08 compress_isal -- common/autotest_common.sh@974 -- # wait 3807162
00:38:06.026 11:20:12 compress_isal -- compress/compress.sh@89 -- # run_bdevio
00:38:06.026 11:20:12 compress_isal -- compress/compress.sh@50 -- # [[ isal == \c\o\m\p\d\e\v ]]
00:38:06.026 11:20:12 compress_isal -- compress/compress.sh@55 -- # bdevio_pid=3809567
00:38:06.026 11:20:12 compress_isal -- compress/compress.sh@56 -- # trap 'killprocess $bdevio_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT
00:38:06.026 11:20:12 compress_isal -- compress/compress.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w
00:38:06.026 11:20:12 compress_isal -- compress/compress.sh@57 -- # waitforlisten 3809567
00:38:06.026 11:20:12 compress_isal -- common/autotest_common.sh@831 -- # '[' -z 3809567 ']'
00:38:06.026 11:20:12 compress_isal -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:38:06.026 11:20:12 compress_isal -- common/autotest_common.sh@836 -- # local max_retries=100
00:38:06.026 11:20:12 compress_isal -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:38:06.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:38:06.026 11:20:12 compress_isal -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:06.026 11:20:12 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:38:06.026 [2024-07-25 11:20:13.015992] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:38:06.026 [2024-07-25 11:20:13.016087] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3809567 ] 00:38:06.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:06.026 EAL: Requested device 0000:3d:01.0 cannot be used 00:38:06.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:06.026 EAL: Requested device 0000:3d:01.1 cannot be used 00:38:06.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:06.026 EAL: Requested device 0000:3d:01.2 cannot be used 00:38:06.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:06.026 EAL: Requested device 0000:3d:01.3 cannot be used 00:38:06.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:06.026 EAL: Requested device 0000:3d:01.4 cannot be used 00:38:06.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:06.026 EAL: Requested device 0000:3d:01.5 cannot be used 00:38:06.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:06.026 EAL: Requested device 0000:3d:01.6 cannot be used 00:38:06.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:06.026 EAL: Requested device 0000:3d:01.7 cannot be used 00:38:06.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:06.026 EAL: Requested device 0000:3d:02.0 cannot be used 00:38:06.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:06.026 EAL: Requested device 0000:3d:02.1 cannot be used 
00:38:06.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:06.026 EAL: Requested device 0000:3d:02.2 cannot be used 00:38:06.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:06.026 EAL: Requested device 0000:3d:02.3 cannot be used 00:38:06.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:06.026 EAL: Requested device 0000:3d:02.4 cannot be used 00:38:06.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:06.026 EAL: Requested device 0000:3d:02.5 cannot be used 00:38:06.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:06.026 EAL: Requested device 0000:3d:02.6 cannot be used 00:38:06.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:06.026 EAL: Requested device 0000:3d:02.7 cannot be used 00:38:06.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:06.026 EAL: Requested device 0000:3f:01.0 cannot be used 00:38:06.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:06.026 EAL: Requested device 0000:3f:01.1 cannot be used 00:38:06.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:06.026 EAL: Requested device 0000:3f:01.2 cannot be used 00:38:06.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:06.026 EAL: Requested device 0000:3f:01.3 cannot be used 00:38:06.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:06.026 EAL: Requested device 0000:3f:01.4 cannot be used 00:38:06.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:06.026 EAL: Requested device 0000:3f:01.5 cannot be used 00:38:06.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:06.026 EAL: Requested device 0000:3f:01.6 cannot be used 00:38:06.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:06.026 EAL: Requested device 0000:3f:01.7 cannot be used 00:38:06.026 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:06.026 EAL: Requested device 0000:3f:02.0 cannot be used 00:38:06.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:06.026 EAL: Requested device 0000:3f:02.1 cannot be used 00:38:06.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:06.026 EAL: Requested device 0000:3f:02.2 cannot be used 00:38:06.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:06.026 EAL: Requested device 0000:3f:02.3 cannot be used 00:38:06.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:06.026 EAL: Requested device 0000:3f:02.4 cannot be used 00:38:06.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:06.026 EAL: Requested device 0000:3f:02.5 cannot be used 00:38:06.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:06.026 EAL: Requested device 0000:3f:02.6 cannot be used 00:38:06.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:06.026 EAL: Requested device 0000:3f:02.7 cannot be used 00:38:06.286 [2024-07-25 11:20:13.213160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:06.547 [2024-07-25 11:20:13.495173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:06.547 [2024-07-25 11:20:13.495262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:38:06.547 [2024-07-25 11:20:13.495270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:38:07.116 11:20:14 compress_isal -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:07.116 11:20:14 compress_isal -- common/autotest_common.sh@864 -- # return 0 00:38:07.116 11:20:14 compress_isal -- compress/compress.sh@58 -- # create_vols 00:38:07.116 11:20:14 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:38:07.116 11:20:14 compress_isal -- compress/compress.sh@34 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:38:10.406 11:20:17 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:38:10.406 11:20:17 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:38:10.406 11:20:17 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:38:10.406 11:20:17 compress_isal -- common/autotest_common.sh@901 -- # local i 00:38:10.406 11:20:17 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:38:10.406 11:20:17 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:38:10.406 11:20:17 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:10.406 11:20:17 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:38:10.406 [ 00:38:10.406 { 00:38:10.406 "name": "Nvme0n1", 00:38:10.406 "aliases": [ 00:38:10.406 "4db649e7-8c21-4027-bac7-0e8c4a25a016" 00:38:10.406 ], 00:38:10.406 "product_name": "NVMe disk", 00:38:10.406 "block_size": 512, 00:38:10.406 "num_blocks": 3907029168, 00:38:10.406 "uuid": "4db649e7-8c21-4027-bac7-0e8c4a25a016", 00:38:10.406 "assigned_rate_limits": { 00:38:10.406 "rw_ios_per_sec": 0, 00:38:10.406 "rw_mbytes_per_sec": 0, 00:38:10.406 "r_mbytes_per_sec": 0, 00:38:10.406 "w_mbytes_per_sec": 0 00:38:10.406 }, 00:38:10.406 "claimed": false, 00:38:10.406 "zoned": false, 00:38:10.406 "supported_io_types": { 00:38:10.406 "read": true, 00:38:10.406 "write": true, 00:38:10.406 "unmap": true, 00:38:10.406 "flush": true, 00:38:10.406 "reset": true, 00:38:10.406 "nvme_admin": true, 00:38:10.406 "nvme_io": true, 00:38:10.406 "nvme_io_md": false, 00:38:10.406 "write_zeroes": true, 00:38:10.406 "zcopy": false, 00:38:10.406 "get_zone_info": false, 00:38:10.406 "zone_management": false, 00:38:10.406 "zone_append": false, 
00:38:10.406 "compare": false, 00:38:10.406 "compare_and_write": false, 00:38:10.406 "abort": true, 00:38:10.406 "seek_hole": false, 00:38:10.406 "seek_data": false, 00:38:10.406 "copy": false, 00:38:10.406 "nvme_iov_md": false 00:38:10.406 }, 00:38:10.406 "driver_specific": { 00:38:10.406 "nvme": [ 00:38:10.406 { 00:38:10.406 "pci_address": "0000:d8:00.0", 00:38:10.406 "trid": { 00:38:10.406 "trtype": "PCIe", 00:38:10.406 "traddr": "0000:d8:00.0" 00:38:10.406 }, 00:38:10.406 "ctrlr_data": { 00:38:10.406 "cntlid": 0, 00:38:10.406 "vendor_id": "0x8086", 00:38:10.406 "model_number": "INTEL SSDPE2KX020T8", 00:38:10.406 "serial_number": "BTLJ125505KA2P0BGN", 00:38:10.406 "firmware_revision": "VDV10170", 00:38:10.406 "oacs": { 00:38:10.406 "security": 0, 00:38:10.406 "format": 1, 00:38:10.406 "firmware": 1, 00:38:10.406 "ns_manage": 1 00:38:10.406 }, 00:38:10.406 "multi_ctrlr": false, 00:38:10.406 "ana_reporting": false 00:38:10.406 }, 00:38:10.406 "vs": { 00:38:10.406 "nvme_version": "1.2" 00:38:10.406 }, 00:38:10.406 "ns_data": { 00:38:10.406 "id": 1, 00:38:10.406 "can_share": false 00:38:10.406 } 00:38:10.406 } 00:38:10.406 ], 00:38:10.406 "mp_policy": "active_passive" 00:38:10.406 } 00:38:10.406 } 00:38:10.406 ] 00:38:10.406 11:20:17 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:38:10.406 11:20:17 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:38:11.783 0cbc11c2-688e-4db1-b0a4-1b50a78320ca 00:38:11.783 11:20:18 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:38:11.783 c6434f8c-2b5f-4a1f-92d5-7dbc3bef7d5d 00:38:11.783 11:20:18 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:38:11.783 11:20:18 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:38:11.783 11:20:18 compress_isal -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:38:11.783 11:20:18 compress_isal -- common/autotest_common.sh@901 -- # local i 00:38:11.783 11:20:18 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:38:11.783 11:20:18 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:38:11.783 11:20:18 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:12.043 11:20:19 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:38:12.302 [ 00:38:12.302 { 00:38:12.302 "name": "c6434f8c-2b5f-4a1f-92d5-7dbc3bef7d5d", 00:38:12.302 "aliases": [ 00:38:12.302 "lvs0/lv0" 00:38:12.302 ], 00:38:12.302 "product_name": "Logical Volume", 00:38:12.302 "block_size": 512, 00:38:12.302 "num_blocks": 204800, 00:38:12.302 "uuid": "c6434f8c-2b5f-4a1f-92d5-7dbc3bef7d5d", 00:38:12.302 "assigned_rate_limits": { 00:38:12.302 "rw_ios_per_sec": 0, 00:38:12.302 "rw_mbytes_per_sec": 0, 00:38:12.302 "r_mbytes_per_sec": 0, 00:38:12.302 "w_mbytes_per_sec": 0 00:38:12.302 }, 00:38:12.302 "claimed": false, 00:38:12.302 "zoned": false, 00:38:12.302 "supported_io_types": { 00:38:12.302 "read": true, 00:38:12.302 "write": true, 00:38:12.302 "unmap": true, 00:38:12.302 "flush": false, 00:38:12.302 "reset": true, 00:38:12.302 "nvme_admin": false, 00:38:12.302 "nvme_io": false, 00:38:12.302 "nvme_io_md": false, 00:38:12.302 "write_zeroes": true, 00:38:12.302 "zcopy": false, 00:38:12.302 "get_zone_info": false, 00:38:12.302 "zone_management": false, 00:38:12.302 "zone_append": false, 00:38:12.302 "compare": false, 00:38:12.302 "compare_and_write": false, 00:38:12.302 "abort": false, 00:38:12.302 "seek_hole": true, 00:38:12.302 "seek_data": true, 00:38:12.302 "copy": false, 00:38:12.302 "nvme_iov_md": false 00:38:12.302 }, 00:38:12.302 "driver_specific": { 00:38:12.302 "lvol": { 00:38:12.302 
"lvol_store_uuid": "0cbc11c2-688e-4db1-b0a4-1b50a78320ca", 00:38:12.302 "base_bdev": "Nvme0n1", 00:38:12.302 "thin_provision": true, 00:38:12.302 "num_allocated_clusters": 0, 00:38:12.302 "snapshot": false, 00:38:12.302 "clone": false, 00:38:12.302 "esnap_clone": false 00:38:12.302 } 00:38:12.302 } 00:38:12.302 } 00:38:12.302 ] 00:38:12.302 11:20:19 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:38:12.302 11:20:19 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:38:12.302 11:20:19 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:38:12.302 [2024-07-25 11:20:19.335077] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:38:12.302 COMP_lvs0/lv0 00:38:12.302 11:20:19 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:38:12.302 11:20:19 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:38:12.302 11:20:19 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:38:12.302 11:20:19 compress_isal -- common/autotest_common.sh@901 -- # local i 00:38:12.302 11:20:19 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:38:12.302 11:20:19 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:38:12.302 11:20:19 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:12.562 11:20:19 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:38:12.562 [ 00:38:12.562 { 00:38:12.562 "name": "COMP_lvs0/lv0", 00:38:12.562 "aliases": [ 00:38:12.562 "8f3dad4d-ff0b-5b4e-8552-a4954bf9e31b" 00:38:12.562 ], 00:38:12.562 "product_name": "compress", 00:38:12.562 "block_size": 512, 00:38:12.562 
"num_blocks": 200704, 00:38:12.562 "uuid": "8f3dad4d-ff0b-5b4e-8552-a4954bf9e31b", 00:38:12.562 "assigned_rate_limits": { 00:38:12.562 "rw_ios_per_sec": 0, 00:38:12.562 "rw_mbytes_per_sec": 0, 00:38:12.562 "r_mbytes_per_sec": 0, 00:38:12.562 "w_mbytes_per_sec": 0 00:38:12.562 }, 00:38:12.562 "claimed": false, 00:38:12.562 "zoned": false, 00:38:12.562 "supported_io_types": { 00:38:12.562 "read": true, 00:38:12.562 "write": true, 00:38:12.562 "unmap": false, 00:38:12.562 "flush": false, 00:38:12.562 "reset": false, 00:38:12.562 "nvme_admin": false, 00:38:12.562 "nvme_io": false, 00:38:12.562 "nvme_io_md": false, 00:38:12.562 "write_zeroes": true, 00:38:12.562 "zcopy": false, 00:38:12.562 "get_zone_info": false, 00:38:12.562 "zone_management": false, 00:38:12.562 "zone_append": false, 00:38:12.562 "compare": false, 00:38:12.562 "compare_and_write": false, 00:38:12.562 "abort": false, 00:38:12.562 "seek_hole": false, 00:38:12.562 "seek_data": false, 00:38:12.562 "copy": false, 00:38:12.562 "nvme_iov_md": false 00:38:12.562 }, 00:38:12.562 "driver_specific": { 00:38:12.562 "compress": { 00:38:12.562 "name": "COMP_lvs0/lv0", 00:38:12.562 "base_bdev_name": "c6434f8c-2b5f-4a1f-92d5-7dbc3bef7d5d", 00:38:12.562 "pm_path": "/tmp/pmem/20e09643-5ad0-424c-b8b0-ec053397330f" 00:38:12.562 } 00:38:12.562 } 00:38:12.562 } 00:38:12.562 ] 00:38:12.562 11:20:19 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:38:12.562 11:20:19 compress_isal -- compress/compress.sh@59 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:38:12.821 I/O targets: 00:38:12.821 COMP_lvs0/lv0: 200704 blocks of 512 bytes (98 MiB) 00:38:12.821 00:38:12.821 00:38:12.821 CUnit - A unit testing framework for C - Version 2.1-3 00:38:12.821 http://cunit.sourceforge.net/ 00:38:12.821 00:38:12.821 00:38:12.821 Suite: bdevio tests on: COMP_lvs0/lv0 00:38:12.821 Test: blockdev write read block ...passed 00:38:12.821 Test: blockdev write zeroes read block 
...passed 00:38:12.821 Test: blockdev write zeroes read no split ...passed 00:38:12.821 Test: blockdev write zeroes read split ...passed 00:38:12.821 Test: blockdev write zeroes read split partial ...passed 00:38:12.821 Test: blockdev reset ...[2024-07-25 11:20:19.927220] vbdev_compress.c: 252:vbdev_compress_submit_request: *ERROR*: Unknown I/O type 5 00:38:12.821 passed 00:38:12.821 Test: blockdev write read 8 blocks ...passed 00:38:12.821 Test: blockdev write read size > 128k ...passed 00:38:12.821 Test: blockdev write read invalid size ...passed 00:38:12.821 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:38:12.821 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:38:12.821 Test: blockdev write read max offset ...passed 00:38:13.080 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:38:13.080 Test: blockdev writev readv 8 blocks ...passed 00:38:13.080 Test: blockdev writev readv 30 x 1block ...passed 00:38:13.080 Test: blockdev writev readv block ...passed 00:38:13.080 Test: blockdev writev readv size > 128k ...passed 00:38:13.080 Test: blockdev writev readv size > 128k in two iovs ...passed 00:38:13.080 Test: blockdev comparev and writev ...passed 00:38:13.080 Test: blockdev nvme passthru rw ...passed 00:38:13.080 Test: blockdev nvme passthru vendor specific ...passed 00:38:13.080 Test: blockdev nvme admin passthru ...passed 00:38:13.080 Test: blockdev copy ...passed 00:38:13.080 00:38:13.080 Run Summary: Type Total Ran Passed Failed Inactive 00:38:13.080 suites 1 1 n/a 0 0 00:38:13.080 tests 23 23 23 0 0 00:38:13.080 asserts 130 130 130 0 n/a 00:38:13.080 00:38:13.080 Elapsed time = 0.458 seconds 00:38:13.080 0 00:38:13.080 11:20:19 compress_isal -- compress/compress.sh@60 -- # destroy_vols 00:38:13.080 11:20:19 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:38:13.340 11:20:20 
compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:38:13.598 11:20:20 compress_isal -- compress/compress.sh@61 -- # trap - SIGINT SIGTERM EXIT 00:38:13.598 11:20:20 compress_isal -- compress/compress.sh@62 -- # killprocess 3809567 00:38:13.599 11:20:20 compress_isal -- common/autotest_common.sh@950 -- # '[' -z 3809567 ']' 00:38:13.599 11:20:20 compress_isal -- common/autotest_common.sh@954 -- # kill -0 3809567 00:38:13.599 11:20:20 compress_isal -- common/autotest_common.sh@955 -- # uname 00:38:13.599 11:20:20 compress_isal -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:13.599 11:20:20 compress_isal -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3809567 00:38:13.599 11:20:20 compress_isal -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:38:13.599 11:20:20 compress_isal -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:38:13.599 11:20:20 compress_isal -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3809567' 00:38:13.599 killing process with pid 3809567 00:38:13.599 11:20:20 compress_isal -- common/autotest_common.sh@969 -- # kill 3809567 00:38:13.599 11:20:20 compress_isal -- common/autotest_common.sh@974 -- # wait 3809567 00:38:17.792 11:20:24 compress_isal -- compress/compress.sh@91 -- # '[' 1 -eq 1 ']' 00:38:17.792 11:20:24 compress_isal -- compress/compress.sh@92 -- # run_bdevperf 64 16384 30 00:38:17.792 11:20:24 compress_isal -- compress/compress.sh@66 -- # [[ isal == \c\o\m\p\d\e\v ]] 00:38:17.792 11:20:24 compress_isal -- compress/compress.sh@71 -- # bdevperf_pid=3811437 00:38:17.792 11:20:24 compress_isal -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:38:17.792 11:20:24 compress_isal -- compress/compress.sh@69 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 64 -o 16384 -w verify -t 
30 -C -m 0x6 00:38:17.792 11:20:24 compress_isal -- compress/compress.sh@73 -- # waitforlisten 3811437 00:38:17.792 11:20:24 compress_isal -- common/autotest_common.sh@831 -- # '[' -z 3811437 ']' 00:38:17.792 11:20:24 compress_isal -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:17.792 11:20:24 compress_isal -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:17.792 11:20:24 compress_isal -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:17.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:17.792 11:20:24 compress_isal -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:17.792 11:20:24 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:38:17.792 [2024-07-25 11:20:24.454591] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:38:17.792 [2024-07-25 11:20:24.454684] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3811437 ] 00:38:17.793 [2024-07-25 11:20:24.638801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:18.052 [2024-07-25 11:20:24.918493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:38:18.052
[2024-07-25 11:20:24.918493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:38:18.311 11:20:25 compress_isal -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:38:18.311 11:20:25 compress_isal -- common/autotest_common.sh@864 -- # return 0 00:38:18.311 11:20:25 compress_isal -- compress/compress.sh@74 -- # create_vols 00:38:18.311 11:20:25 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:38:18.311 11:20:25 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:38:21.600 11:20:28 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:38:21.600 11:20:28 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:38:21.600 11:20:28 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:38:21.600 11:20:28 compress_isal -- common/autotest_common.sh@901 -- # local i 00:38:21.600 11:20:28 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:38:21.600 11:20:28 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:38:21.600 11:20:28 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:21.600 11:20:28 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:38:21.928 [ 00:38:21.928 { 00:38:21.928 "name": "Nvme0n1", 00:38:21.928 "aliases": [ 00:38:21.928 "3bb06c1e-7529-4a4e-941f-8828295b4f7f" 00:38:21.928 ], 00:38:21.928 "product_name": "NVMe disk", 00:38:21.928 "block_size": 512, 00:38:21.928 "num_blocks": 3907029168, 00:38:21.928 "uuid": "3bb06c1e-7529-4a4e-941f-8828295b4f7f", 00:38:21.928 "assigned_rate_limits": { 00:38:21.928 "rw_ios_per_sec": 0, 00:38:21.928 "rw_mbytes_per_sec": 0, 00:38:21.928 "r_mbytes_per_sec": 0, 00:38:21.928 
"w_mbytes_per_sec": 0 00:38:21.928 }, 00:38:21.928 "claimed": false, 00:38:21.928 "zoned": false, 00:38:21.928 "supported_io_types": { 00:38:21.928 "read": true, 00:38:21.928 "write": true, 00:38:21.928 "unmap": true, 00:38:21.928 "flush": true, 00:38:21.928 "reset": true, 00:38:21.928 "nvme_admin": true, 00:38:21.928 "nvme_io": true, 00:38:21.928 "nvme_io_md": false, 00:38:21.928 "write_zeroes": true, 00:38:21.928 "zcopy": false, 00:38:21.928 "get_zone_info": false, 00:38:21.928 "zone_management": false, 00:38:21.928 "zone_append": false, 00:38:21.928 "compare": false, 00:38:21.928 "compare_and_write": false, 00:38:21.928 "abort": true, 00:38:21.928 "seek_hole": false, 00:38:21.928 "seek_data": false, 00:38:21.928 "copy": false, 00:38:21.928 "nvme_iov_md": false 00:38:21.928 }, 00:38:21.928 "driver_specific": { 00:38:21.928 "nvme": [ 00:38:21.928 { 00:38:21.928 "pci_address": "0000:d8:00.0", 00:38:21.928 "trid": { 00:38:21.928 "trtype": "PCIe", 00:38:21.928 "traddr": "0000:d8:00.0" 00:38:21.928 }, 00:38:21.928 "ctrlr_data": { 00:38:21.928 "cntlid": 0, 00:38:21.928 "vendor_id": "0x8086", 00:38:21.928 "model_number": "INTEL SSDPE2KX020T8", 00:38:21.928 "serial_number": "BTLJ125505KA2P0BGN", 00:38:21.928 "firmware_revision": "VDV10170", 00:38:21.928 "oacs": { 00:38:21.928 "security": 0, 00:38:21.928 "format": 1, 00:38:21.928 "firmware": 1, 00:38:21.928 "ns_manage": 1 00:38:21.928 }, 00:38:21.928 "multi_ctrlr": false, 00:38:21.928 "ana_reporting": false 00:38:21.928 }, 00:38:21.928 "vs": { 00:38:21.928 "nvme_version": "1.2" 00:38:21.928 }, 00:38:21.928 "ns_data": { 00:38:21.928 "id": 1, 00:38:21.928 "can_share": false 00:38:21.928 } 00:38:21.928 } 00:38:21.928 ], 00:38:21.928 "mp_policy": "active_passive" 00:38:21.928 } 00:38:21.928 } 00:38:21.928 ] 00:38:21.928 11:20:28 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:38:21.928 11:20:28 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:38:23.307 6fc0319b-9fec-48d4-b7ee-f358895a70fe 00:38:23.307 11:20:30 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:38:23.307 524c8c06-d05f-4e34-851b-bfc8a967deb9 00:38:23.307 11:20:30 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:38:23.307 11:20:30 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:38:23.307 11:20:30 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:38:23.307 11:20:30 compress_isal -- common/autotest_common.sh@901 -- # local i 00:38:23.307 11:20:30 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:38:23.307 11:20:30 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:38:23.307 11:20:30 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:23.565 11:20:30 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:38:23.565 [ 00:38:23.565 { 00:38:23.565 "name": "524c8c06-d05f-4e34-851b-bfc8a967deb9", 00:38:23.565 "aliases": [ 00:38:23.565 "lvs0/lv0" 00:38:23.565 ], 00:38:23.565 "product_name": "Logical Volume", 00:38:23.565 "block_size": 512, 00:38:23.565 "num_blocks": 204800, 00:38:23.565 "uuid": "524c8c06-d05f-4e34-851b-bfc8a967deb9", 00:38:23.565 "assigned_rate_limits": { 00:38:23.565 "rw_ios_per_sec": 0, 00:38:23.565 "rw_mbytes_per_sec": 0, 00:38:23.565 "r_mbytes_per_sec": 0, 00:38:23.565 "w_mbytes_per_sec": 0 00:38:23.565 }, 00:38:23.565 "claimed": false, 00:38:23.565 "zoned": false, 00:38:23.565 "supported_io_types": { 00:38:23.565 "read": true, 00:38:23.565 "write": true, 00:38:23.565 "unmap": true, 00:38:23.565 "flush": false, 00:38:23.565 "reset": true, 00:38:23.566 "nvme_admin": false, 
00:38:23.566 "nvme_io": false, 00:38:23.566 "nvme_io_md": false, 00:38:23.566 "write_zeroes": true, 00:38:23.566 "zcopy": false, 00:38:23.566 "get_zone_info": false, 00:38:23.566 "zone_management": false, 00:38:23.566 "zone_append": false, 00:38:23.566 "compare": false, 00:38:23.566 "compare_and_write": false, 00:38:23.566 "abort": false, 00:38:23.566 "seek_hole": true, 00:38:23.566 "seek_data": true, 00:38:23.566 "copy": false, 00:38:23.566 "nvme_iov_md": false 00:38:23.566 }, 00:38:23.566 "driver_specific": { 00:38:23.566 "lvol": { 00:38:23.566 "lvol_store_uuid": "6fc0319b-9fec-48d4-b7ee-f358895a70fe", 00:38:23.566 "base_bdev": "Nvme0n1", 00:38:23.566 "thin_provision": true, 00:38:23.566 "num_allocated_clusters": 0, 00:38:23.566 "snapshot": false, 00:38:23.566 "clone": false, 00:38:23.566 "esnap_clone": false 00:38:23.566 } 00:38:23.566 } 00:38:23.566 } 00:38:23.566 ] 00:38:23.566 11:20:30 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:38:23.566 11:20:30 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']' 00:38:23.566 11:20:30 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:38:23.824 [2024-07-25 11:20:30.799744] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:38:23.824 COMP_lvs0/lv0 00:38:23.824 11:20:30 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:38:23.824 11:20:30 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0 00:38:23.824 11:20:30 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:38:23.824 11:20:30 compress_isal -- common/autotest_common.sh@901 -- # local i 00:38:23.824 11:20:30 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:38:23.824 11:20:30 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:38:23.824 11:20:30 compress_isal -- 
common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:24.083 11:20:30 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:38:24.083 [ 00:38:24.083 { 00:38:24.083 "name": "COMP_lvs0/lv0", 00:38:24.083 "aliases": [ 00:38:24.083 "c15cf7b0-f30a-51a6-9ab0-4a46b596e86c" 00:38:24.083 ], 00:38:24.083 "product_name": "compress", 00:38:24.083 "block_size": 512, 00:38:24.083 "num_blocks": 200704, 00:38:24.083 "uuid": "c15cf7b0-f30a-51a6-9ab0-4a46b596e86c", 00:38:24.083 "assigned_rate_limits": { 00:38:24.083 "rw_ios_per_sec": 0, 00:38:24.083 "rw_mbytes_per_sec": 0, 00:38:24.083 "r_mbytes_per_sec": 0, 00:38:24.083 "w_mbytes_per_sec": 0 00:38:24.083 }, 00:38:24.083 "claimed": false, 00:38:24.083 "zoned": false, 00:38:24.083 "supported_io_types": { 00:38:24.083 "read": true, 00:38:24.083 "write": true, 00:38:24.083 "unmap": false, 00:38:24.083 "flush": false, 00:38:24.083 "reset": false, 00:38:24.083 "nvme_admin": false, 00:38:24.083 "nvme_io": false, 00:38:24.083 "nvme_io_md": false, 00:38:24.083 "write_zeroes": true, 00:38:24.083 "zcopy": false, 00:38:24.083 "get_zone_info": false, 00:38:24.083 "zone_management": false, 00:38:24.083 "zone_append": false, 00:38:24.083 "compare": false, 00:38:24.083 "compare_and_write": false, 00:38:24.083 "abort": false, 00:38:24.083 "seek_hole": false, 00:38:24.083 "seek_data": false, 00:38:24.083 "copy": false, 00:38:24.083 "nvme_iov_md": false 00:38:24.083 }, 00:38:24.083 "driver_specific": { 00:38:24.083 "compress": { 00:38:24.083 "name": "COMP_lvs0/lv0", 00:38:24.083 "base_bdev_name": "524c8c06-d05f-4e34-851b-bfc8a967deb9", 00:38:24.083 "pm_path": "/tmp/pmem/0119728f-df8b-4197-a086-4ceffa28e505" 00:38:24.083 } 00:38:24.083 } 00:38:24.083 } 00:38:24.083 ] 00:38:24.083 11:20:31 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:38:24.083 11:20:31 
compress_isal -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:38:24.342 Running I/O for 30 seconds... 00:38:56.428 00:38:56.428 Latency(us) 00:38:56.428 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:56.428 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 64, IO size: 16384) 00:38:56.428 Verification LBA range: start 0x0 length 0xc40 00:38:56.428 COMP_lvs0/lv0 : 30.01 1400.71 21.89 0.00 0.00 45467.88 265.42 38377.88 00:38:56.428 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 64, IO size: 16384) 00:38:56.428 Verification LBA range: start 0xc40 length 0xc40 00:38:56.428 COMP_lvs0/lv0 : 30.00 4632.90 72.39 0.00 0.00 13704.76 417.79 28311.55 00:38:56.428 =================================================================================================================== 00:38:56.428 Total : 6033.62 94.28 0.00 0.00 21079.29 265.42 38377.88 00:38:56.428 0 00:38:56.428 11:21:01 compress_isal -- compress/compress.sh@76 -- # destroy_vols 00:38:56.428 11:21:01 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:38:56.428 11:21:01 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:38:56.428 11:21:01 compress_isal -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:38:56.428 11:21:01 compress_isal -- compress/compress.sh@78 -- # killprocess 3811437 00:38:56.428 11:21:01 compress_isal -- common/autotest_common.sh@950 -- # '[' -z 3811437 ']' 00:38:56.428 11:21:01 compress_isal -- common/autotest_common.sh@954 -- # kill -0 3811437 00:38:56.428 11:21:01 compress_isal -- common/autotest_common.sh@955 -- # uname 00:38:56.428 11:21:01 compress_isal -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:38:56.428 11:21:01 compress_isal -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3811437 00:38:56.428 11:21:01 compress_isal -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:38:56.428 11:21:01 compress_isal -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:38:56.428 11:21:01 compress_isal -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3811437' 00:38:56.428 killing process with pid 3811437 00:38:56.428 11:21:01 compress_isal -- common/autotest_common.sh@969 -- # kill 3811437 00:38:56.428 Received shutdown signal, test time was about 30.000000 seconds 00:38:56.428 00:38:56.428 Latency(us) 00:38:56.428 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:56.428 =================================================================================================================== 00:38:56.428 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:56.428 11:21:01 compress_isal -- common/autotest_common.sh@974 -- # wait 3811437 00:38:58.969 11:21:05 compress_isal -- compress/compress.sh@95 -- # export TEST_TRANSPORT=tcp 00:38:58.969 11:21:05 compress_isal -- compress/compress.sh@95 -- # TEST_TRANSPORT=tcp 00:38:58.969 11:21:05 compress_isal -- compress/compress.sh@96 -- # NET_TYPE=virt 00:38:58.969 11:21:05 compress_isal -- compress/compress.sh@96 -- # nvmftestinit 00:38:58.969 11:21:05 compress_isal -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:38:58.969 11:21:05 compress_isal -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:58.969 11:21:05 compress_isal -- nvmf/common.sh@448 -- # prepare_net_devs 00:38:58.969 11:21:05 compress_isal -- nvmf/common.sh@410 -- # local -g is_hw=no 00:38:58.969 11:21:05 compress_isal -- nvmf/common.sh@412 -- # remove_spdk_ns 00:38:58.969 11:21:05 compress_isal -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:58.969 11:21:05 compress_isal -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:58.969 11:21:05 compress_isal -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:58.969 11:21:05 compress_isal -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:38:58.969 11:21:05 compress_isal -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:38:58.969 11:21:05 compress_isal -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:38:58.969 11:21:05 compress_isal -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:38:58.969 11:21:05 compress_isal -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:38:58.969 11:21:05 compress_isal -- nvmf/common.sh@432 -- # nvmf_veth_init 00:38:58.969 11:21:05 compress_isal -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:58.969 11:21:05 compress_isal -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:58.969 11:21:05 compress_isal -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:38:58.969 11:21:05 compress_isal -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:38:58.969 11:21:05 compress_isal -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:38:58.969 11:21:05 compress_isal -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:38:58.969 11:21:05 compress_isal -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:38:58.969 11:21:05 compress_isal -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:58.969 11:21:05 compress_isal -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:38:58.969 11:21:05 compress_isal -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:38:58.969 11:21:05 compress_isal -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:38:58.969 11:21:05 compress_isal -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:38:58.969 11:21:05 compress_isal -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:38:58.969 11:21:05 compress_isal -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:38:58.970 Cannot find device "nvmf_tgt_br" 00:38:58.970 11:21:05 compress_isal -- 
nvmf/common.sh@155 -- # true 00:38:58.970 11:21:05 compress_isal -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:38:58.970 Cannot find device "nvmf_tgt_br2" 00:38:58.970 11:21:05 compress_isal -- nvmf/common.sh@156 -- # true 00:38:58.970 11:21:05 compress_isal -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:38:58.970 11:21:05 compress_isal -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:38:58.970 Cannot find device "nvmf_tgt_br" 00:38:58.970 11:21:05 compress_isal -- nvmf/common.sh@158 -- # true 00:38:58.970 11:21:05 compress_isal -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:38:58.970 Cannot find device "nvmf_tgt_br2" 00:38:58.970 11:21:05 compress_isal -- nvmf/common.sh@159 -- # true 00:38:58.970 11:21:05 compress_isal -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:38:58.970 11:21:05 compress_isal -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:38:58.970 11:21:05 compress_isal -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:38:58.970 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:58.970 11:21:05 compress_isal -- nvmf/common.sh@162 -- # true 00:38:58.970 11:21:05 compress_isal -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:38:58.970 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:38:58.970 11:21:05 compress_isal -- nvmf/common.sh@163 -- # true 00:38:58.970 11:21:05 compress_isal -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:38:58.970 11:21:05 compress_isal -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:38:58.970 11:21:05 compress_isal -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:38:58.970 11:21:05 compress_isal -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:38:58.970 11:21:05 compress_isal -- 
nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:38:58.970 11:21:05 compress_isal -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:38:58.970 11:21:05 compress_isal -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:38:58.970 11:21:05 compress_isal -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:38:58.970 11:21:06 compress_isal -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:38:58.970 11:21:06 compress_isal -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:38:58.970 11:21:06 compress_isal -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:38:58.970 11:21:06 compress_isal -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:38:58.970 11:21:06 compress_isal -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:38:58.970 11:21:06 compress_isal -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:38:58.970 11:21:06 compress_isal -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:38:58.970 11:21:06 compress_isal -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:38:58.970 11:21:06 compress_isal -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:38:59.229 11:21:06 compress_isal -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:38:59.229 11:21:06 compress_isal -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:38:59.229 11:21:06 compress_isal -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:38:59.229 11:21:06 compress_isal -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:38:59.229 11:21:06 compress_isal -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:38:59.229 11:21:06 compress_isal -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT 00:38:59.229 11:21:06 compress_isal -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:38:59.229 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:59.229 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:38:59.229 00:38:59.229 --- 10.0.0.2 ping statistics --- 00:38:59.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:59.229 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:38:59.229 11:21:06 compress_isal -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:38:59.229 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:38:59.229 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:38:59.229 00:38:59.229 --- 10.0.0.3 ping statistics --- 00:38:59.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:59.229 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:38:59.229 11:21:06 compress_isal -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:38:59.229 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:59.229 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:38:59.229 00:38:59.229 --- 10.0.0.1 ping statistics --- 00:38:59.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:59.229 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:38:59.229 11:21:06 compress_isal -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:59.229 11:21:06 compress_isal -- nvmf/common.sh@433 -- # return 0 00:38:59.229 11:21:06 compress_isal -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:38:59.229 11:21:06 compress_isal -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:59.229 11:21:06 compress_isal -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:38:59.229 11:21:06 compress_isal -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:38:59.229 11:21:06 compress_isal -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:59.229 11:21:06 compress_isal -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:38:59.229 11:21:06 compress_isal 
-- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:38:59.229 11:21:06 compress_isal -- compress/compress.sh@97 -- # nvmfappstart -m 0x7 00:38:59.229 11:21:06 compress_isal -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:38:59.229 11:21:06 compress_isal -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:59.229 11:21:06 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:38:59.229 11:21:06 compress_isal -- nvmf/common.sh@481 -- # nvmfpid=3819057 00:38:59.229 11:21:06 compress_isal -- nvmf/common.sh@482 -- # waitforlisten 3819057 00:38:59.229 11:21:06 compress_isal -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:38:59.229 11:21:06 compress_isal -- common/autotest_common.sh@831 -- # '[' -z 3819057 ']' 00:38:59.229 11:21:06 compress_isal -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:59.230 11:21:06 compress_isal -- common/autotest_common.sh@836 -- # local max_retries=100 00:38:59.230 11:21:06 compress_isal -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:59.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:59.230 11:21:06 compress_isal -- common/autotest_common.sh@840 -- # xtrace_disable 00:38:59.230 11:21:06 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:38:59.488 [2024-07-25 11:21:06.420006] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:38:59.488 [2024-07-25 11:21:06.420122] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:59.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.488 EAL: Requested device 0000:3d:01.0 cannot be used 00:38:59.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.488 EAL: Requested device 0000:3d:01.1 cannot be used 00:38:59.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.488 EAL: Requested device 0000:3d:01.2 cannot be used 00:38:59.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.488 EAL: Requested device 0000:3d:01.3 cannot be used 00:38:59.488 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.489 EAL: Requested device 0000:3d:01.4 cannot be used 00:38:59.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.489 EAL: Requested device 0000:3d:01.5 cannot be used 00:38:59.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.489 EAL: Requested device 0000:3d:01.6 cannot be used 00:38:59.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.489 EAL: Requested device 0000:3d:01.7 cannot be used 00:38:59.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.489 EAL: Requested device 0000:3d:02.0 cannot be used 00:38:59.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.489 EAL: Requested device 0000:3d:02.1 cannot be used 00:38:59.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.489 EAL: Requested device 0000:3d:02.2 cannot be used 00:38:59.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.489 EAL: Requested device 0000:3d:02.3 cannot be used 00:38:59.489 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.489 EAL: Requested device 0000:3d:02.4 cannot be used 00:38:59.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.489 EAL: Requested device 0000:3d:02.5 cannot be used 00:38:59.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.489 EAL: Requested device 0000:3d:02.6 cannot be used 00:38:59.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.489 EAL: Requested device 0000:3d:02.7 cannot be used 00:38:59.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.489 EAL: Requested device 0000:3f:01.0 cannot be used 00:38:59.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.489 EAL: Requested device 0000:3f:01.1 cannot be used 00:38:59.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.489 EAL: Requested device 0000:3f:01.2 cannot be used 00:38:59.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.489 EAL: Requested device 0000:3f:01.3 cannot be used 00:38:59.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.489 EAL: Requested device 0000:3f:01.4 cannot be used 00:38:59.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.489 EAL: Requested device 0000:3f:01.5 cannot be used 00:38:59.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.489 EAL: Requested device 0000:3f:01.6 cannot be used 00:38:59.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.489 EAL: Requested device 0000:3f:01.7 cannot be used 00:38:59.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.489 EAL: Requested device 0000:3f:02.0 cannot be used 00:38:59.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.489 EAL: Requested device 0000:3f:02.1 cannot be used 00:38:59.489 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.489 EAL: Requested device 0000:3f:02.2 cannot be used 00:38:59.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.489 EAL: Requested device 0000:3f:02.3 cannot be used 00:38:59.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.489 EAL: Requested device 0000:3f:02.4 cannot be used 00:38:59.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.489 EAL: Requested device 0000:3f:02.5 cannot be used 00:38:59.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.489 EAL: Requested device 0000:3f:02.6 cannot be used 00:38:59.489 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:38:59.489 EAL: Requested device 0000:3f:02.7 cannot be used 00:38:59.747 [2024-07-25 11:21:06.657451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:00.006 [2024-07-25 11:21:06.929030] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:00.006 [2024-07-25 11:21:06.929081] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:00.006 [2024-07-25 11:21:06.929101] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:00.006 [2024-07-25 11:21:06.929117] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:00.006 [2024-07-25 11:21:06.929133] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:39:00.006 [2024-07-25 11:21:06.929234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:00.006 [2024-07-25 11:21:06.929304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:00.006 [2024-07-25 11:21:06.929309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:39:00.573 11:21:07 compress_isal -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:00.573 11:21:07 compress_isal -- common/autotest_common.sh@864 -- # return 0 00:39:00.573 11:21:07 compress_isal -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:39:00.573 11:21:07 compress_isal -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:00.573 11:21:07 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:39:00.573 11:21:07 compress_isal -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:00.573 11:21:07 compress_isal -- compress/compress.sh@98 -- # trap 'nvmftestfini; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:39:00.573 11:21:07 compress_isal -- compress/compress.sh@101 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -u 8192 00:39:00.832 [2024-07-25 11:21:07.735648] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:00.832 11:21:07 compress_isal -- compress/compress.sh@102 -- # create_vols 00:39:00.832 11:21:07 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:39:00.832 11:21:07 compress_isal -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:39:04.117 11:21:10 compress_isal -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:39:04.117 11:21:10 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=Nvme0n1 00:39:04.117 11:21:10 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:39:04.117 11:21:10 compress_isal -- 
common/autotest_common.sh@901 -- # local i 00:39:04.117 11:21:10 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:39:04.117 11:21:10 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:39:04.117 11:21:10 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:04.117 11:21:11 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:39:04.376 [ 00:39:04.376 { 00:39:04.376 "name": "Nvme0n1", 00:39:04.376 "aliases": [ 00:39:04.376 "39775917-e0ef-4ade-b3d2-ed211857803c" 00:39:04.376 ], 00:39:04.376 "product_name": "NVMe disk", 00:39:04.376 "block_size": 512, 00:39:04.376 "num_blocks": 3907029168, 00:39:04.376 "uuid": "39775917-e0ef-4ade-b3d2-ed211857803c", 00:39:04.376 "assigned_rate_limits": { 00:39:04.376 "rw_ios_per_sec": 0, 00:39:04.376 "rw_mbytes_per_sec": 0, 00:39:04.376 "r_mbytes_per_sec": 0, 00:39:04.376 "w_mbytes_per_sec": 0 00:39:04.376 }, 00:39:04.376 "claimed": false, 00:39:04.376 "zoned": false, 00:39:04.376 "supported_io_types": { 00:39:04.376 "read": true, 00:39:04.376 "write": true, 00:39:04.376 "unmap": true, 00:39:04.376 "flush": true, 00:39:04.376 "reset": true, 00:39:04.376 "nvme_admin": true, 00:39:04.376 "nvme_io": true, 00:39:04.376 "nvme_io_md": false, 00:39:04.376 "write_zeroes": true, 00:39:04.376 "zcopy": false, 00:39:04.376 "get_zone_info": false, 00:39:04.376 "zone_management": false, 00:39:04.376 "zone_append": false, 00:39:04.376 "compare": false, 00:39:04.376 "compare_and_write": false, 00:39:04.376 "abort": true, 00:39:04.376 "seek_hole": false, 00:39:04.376 "seek_data": false, 00:39:04.376 "copy": false, 00:39:04.376 "nvme_iov_md": false 00:39:04.376 }, 00:39:04.376 "driver_specific": { 00:39:04.376 "nvme": [ 00:39:04.376 { 00:39:04.376 "pci_address": "0000:d8:00.0", 00:39:04.376 "trid": { 00:39:04.376 "trtype": "PCIe", 
00:39:04.376 "traddr": "0000:d8:00.0" 00:39:04.376 }, 00:39:04.376 "ctrlr_data": { 00:39:04.376 "cntlid": 0, 00:39:04.376 "vendor_id": "0x8086", 00:39:04.376 "model_number": "INTEL SSDPE2KX020T8", 00:39:04.376 "serial_number": "BTLJ125505KA2P0BGN", 00:39:04.376 "firmware_revision": "VDV10170", 00:39:04.376 "oacs": { 00:39:04.376 "security": 0, 00:39:04.376 "format": 1, 00:39:04.376 "firmware": 1, 00:39:04.376 "ns_manage": 1 00:39:04.376 }, 00:39:04.376 "multi_ctrlr": false, 00:39:04.376 "ana_reporting": false 00:39:04.376 }, 00:39:04.376 "vs": { 00:39:04.376 "nvme_version": "1.2" 00:39:04.376 }, 00:39:04.376 "ns_data": { 00:39:04.376 "id": 1, 00:39:04.376 "can_share": false 00:39:04.376 } 00:39:04.376 } 00:39:04.376 ], 00:39:04.376 "mp_policy": "active_passive" 00:39:04.376 } 00:39:04.376 } 00:39:04.376 ] 00:39:04.376 11:21:11 compress_isal -- common/autotest_common.sh@907 -- # return 0 00:39:04.376 11:21:11 compress_isal -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:39:05.792 88be47aa-e16d-4446-89ce-0047395c497c 00:39:05.792 11:21:12 compress_isal -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:39:05.792 4501be51-d142-424b-8a84-00748791f950 00:39:05.792 11:21:12 compress_isal -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:39:05.792 11:21:12 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=lvs0/lv0 00:39:05.792 11:21:12 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:39:05.792 11:21:12 compress_isal -- common/autotest_common.sh@901 -- # local i 00:39:05.792 11:21:12 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:39:05.792 11:21:12 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:39:05.792 11:21:12 compress_isal -- common/autotest_common.sh@904 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:39:06.051 11:21:13 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000
00:39:06.311 [
00:39:06.311 {
00:39:06.311 "name": "4501be51-d142-424b-8a84-00748791f950",
00:39:06.311 "aliases": [
00:39:06.311 "lvs0/lv0"
00:39:06.311 ],
00:39:06.311 "product_name": "Logical Volume",
00:39:06.311 "block_size": 512,
00:39:06.311 "num_blocks": 204800,
00:39:06.311 "uuid": "4501be51-d142-424b-8a84-00748791f950",
00:39:06.311 "assigned_rate_limits": {
00:39:06.311 "rw_ios_per_sec": 0,
00:39:06.311 "rw_mbytes_per_sec": 0,
00:39:06.311 "r_mbytes_per_sec": 0,
00:39:06.311 "w_mbytes_per_sec": 0
00:39:06.311 },
00:39:06.311 "claimed": false,
00:39:06.311 "zoned": false,
00:39:06.311 "supported_io_types": {
00:39:06.311 "read": true,
00:39:06.311 "write": true,
00:39:06.311 "unmap": true,
00:39:06.311 "flush": false,
00:39:06.311 "reset": true,
00:39:06.311 "nvme_admin": false,
00:39:06.311 "nvme_io": false,
00:39:06.311 "nvme_io_md": false,
00:39:06.311 "write_zeroes": true,
00:39:06.311 "zcopy": false,
00:39:06.311 "get_zone_info": false,
00:39:06.311 "zone_management": false,
00:39:06.311 "zone_append": false,
00:39:06.311 "compare": false,
00:39:06.311 "compare_and_write": false,
00:39:06.311 "abort": false,
00:39:06.311 "seek_hole": true,
00:39:06.311 "seek_data": true,
00:39:06.311 "copy": false,
00:39:06.311 "nvme_iov_md": false
00:39:06.311 },
00:39:06.311 "driver_specific": {
00:39:06.311 "lvol": {
00:39:06.311 "lvol_store_uuid": "88be47aa-e16d-4446-89ce-0047395c497c",
00:39:06.311 "base_bdev": "Nvme0n1",
00:39:06.311 "thin_provision": true,
00:39:06.311 "num_allocated_clusters": 0,
00:39:06.311 "snapshot": false,
00:39:06.311 "clone": false,
00:39:06.311 "esnap_clone": false
00:39:06.311 }
00:39:06.311 }
00:39:06.311 }
00:39:06.311 ]
00:39:06.311 11:21:13 compress_isal -- common/autotest_common.sh@907 -- # return 0
00:39:06.311 11:21:13 compress_isal -- compress/compress.sh@41 -- # '[' -z '' ']'
00:39:06.311 11:21:13 compress_isal -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem
00:39:06.568 [2024-07-25 11:21:13.470478] vbdev_compress.c:1008:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0
00:39:06.568 COMP_lvs0/lv0
00:39:06.568 11:21:13 compress_isal -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0
00:39:06.568 11:21:13 compress_isal -- common/autotest_common.sh@899 -- # local bdev_name=COMP_lvs0/lv0
00:39:06.568 11:21:13 compress_isal -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:39:06.568 11:21:13 compress_isal -- common/autotest_common.sh@901 -- # local i
00:39:06.568 11:21:13 compress_isal -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:39:06.568 11:21:13 compress_isal -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:39:06.568 11:21:13 compress_isal -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:39:06.568 11:21:13 compress_isal -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000
00:39:06.826 [
00:39:06.826 {
00:39:06.826 "name": "COMP_lvs0/lv0",
00:39:06.826 "aliases": [
00:39:06.826 "cdf7b5b5-9d87-56c9-8092-8535256f9d13"
00:39:06.826 ],
00:39:06.826 "product_name": "compress",
00:39:06.826 "block_size": 512,
00:39:06.826 "num_blocks": 200704,
00:39:06.826 "uuid": "cdf7b5b5-9d87-56c9-8092-8535256f9d13",
00:39:06.826 "assigned_rate_limits": {
00:39:06.826 "rw_ios_per_sec": 0,
00:39:06.826 "rw_mbytes_per_sec": 0,
00:39:06.826 "r_mbytes_per_sec": 0,
00:39:06.826 "w_mbytes_per_sec": 0
00:39:06.826 },
00:39:06.826 "claimed": false,
00:39:06.826 "zoned": false,
00:39:06.826 "supported_io_types": {
00:39:06.826 "read": true,
00:39:06.826 "write": true,
00:39:06.826 "unmap": false,
00:39:06.826 "flush": false,
00:39:06.827 "reset": false,
00:39:06.827 "nvme_admin": false,
00:39:06.827 "nvme_io": false,
00:39:06.827 "nvme_io_md": false,
00:39:06.827 "write_zeroes": true,
00:39:06.827 "zcopy": false,
00:39:06.827 "get_zone_info": false,
00:39:06.827 "zone_management": false,
00:39:06.827 "zone_append": false,
00:39:06.827 "compare": false,
00:39:06.827 "compare_and_write": false,
00:39:06.827 "abort": false,
00:39:06.827 "seek_hole": false,
00:39:06.827 "seek_data": false,
00:39:06.827 "copy": false,
00:39:06.827 "nvme_iov_md": false
00:39:06.827 },
00:39:06.827 "driver_specific": {
00:39:06.827 "compress": {
00:39:06.827 "name": "COMP_lvs0/lv0",
00:39:06.827 "base_bdev_name": "4501be51-d142-424b-8a84-00748791f950",
00:39:06.827 "pm_path": "/tmp/pmem/8a77e2ae-a09d-4384-b66e-51fd4a029626"
00:39:06.827 }
00:39:06.827 }
00:39:06.827 }
00:39:06.827 ]
00:39:06.827 11:21:13 compress_isal -- common/autotest_common.sh@907 -- # return 0
00:39:06.827 11:21:13 compress_isal -- compress/compress.sh@103 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:39:07.085 11:21:13 compress_isal -- compress/compress.sh@104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 COMP_lvs0/lv0
00:39:07.085 11:21:14 compress_isal -- compress/compress.sh@105 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:39:07.343 [2024-07-25 11:21:14.370492] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:39:07.343 11:21:14 compress_isal -- compress/compress.sh@109 -- # perf_pid=3820391
00:39:07.343 11:21:14 compress_isal -- compress/compress.sh@112 -- # trap 'killprocess $perf_pid; compress_err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:39:07.343 11:21:14 compress_isal -- compress/compress.sh@108 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 64 -s 512 -w randrw -t 30 -c 0x18 -M 50
00:39:07.343 11:21:14 compress_isal -- compress/compress.sh@113 -- # wait 3820391
00:39:07.601 [2024-07-25 11:21:14.635789] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:39:39.700 Initializing NVMe Controllers
00:39:39.700 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:39:39.700 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:39:39.700 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:39:39.700 Initialization complete. Launching workers.
00:39:39.700 ========================================================
00:39:39.700 Latency(us)
00:39:39.700 Device Information                                                       :    IOPS   MiB/s   Average       min       max
00:39:39.700 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3:  4505.93   17.60  14205.63   1790.89  34370.41
00:39:39.700 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4:  2818.80   11.01  22709.40   3577.81  42517.28
00:39:39.700 ========================================================
00:39:39.700 Total                                                                    :  7324.73   28.61  17478.16   1790.89  42517.28
00:39:39.700
00:39:39.700 11:21:44 compress_isal -- compress/compress.sh@114 -- # destroy_vols
00:39:39.700 11:21:44 compress_isal -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0
00:39:39.700 11:21:45 compress_isal -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0
00:39:39.700 11:21:45 compress_isal -- compress/compress.sh@116 -- # trap - SIGINT SIGTERM EXIT
00:39:39.700 11:21:45 compress_isal -- compress/compress.sh@117 -- # nvmftestfini
00:39:39.700 11:21:45 compress_isal -- nvmf/common.sh@488 -- # nvmfcleanup
00:39:39.700 11:21:45 compress_isal -- nvmf/common.sh@117 -- # sync
00:39:39.700 11:21:45 compress_isal -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:39:39.700 11:21:45 compress_isal -- nvmf/common.sh@120 -- # set +e
00:39:39.700 11:21:45 compress_isal -- nvmf/common.sh@121 -- # for i in {1..20}
00:39:39.700 11:21:45 compress_isal -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:39:39.700 rmmod nvme_tcp
00:39:39.700 rmmod nvme_fabrics
00:39:39.700 rmmod nvme_keyring
00:39:39.700 11:21:45 compress_isal -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:39:39.700 11:21:45 compress_isal -- nvmf/common.sh@124 -- # set -e
00:39:39.700 11:21:45 compress_isal -- nvmf/common.sh@125 -- # return 0
00:39:39.700 11:21:45 compress_isal -- nvmf/common.sh@489 -- # '['
-n 3819057 ']' 00:39:39.700 11:21:45 compress_isal -- nvmf/common.sh@490 -- # killprocess 3819057 00:39:39.700 11:21:45 compress_isal -- common/autotest_common.sh@950 -- # '[' -z 3819057 ']' 00:39:39.700 11:21:45 compress_isal -- common/autotest_common.sh@954 -- # kill -0 3819057 00:39:39.700 11:21:45 compress_isal -- common/autotest_common.sh@955 -- # uname 00:39:39.700 11:21:45 compress_isal -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:39.700 11:21:45 compress_isal -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3819057 00:39:39.700 11:21:45 compress_isal -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:39.700 11:21:45 compress_isal -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:39.700 11:21:45 compress_isal -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3819057' 00:39:39.700 killing process with pid 3819057 00:39:39.700 11:21:45 compress_isal -- common/autotest_common.sh@969 -- # kill 3819057 00:39:39.700 11:21:45 compress_isal -- common/autotest_common.sh@974 -- # wait 3819057 00:39:42.228 11:21:49 compress_isal -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:39:42.228 11:21:49 compress_isal -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:39:42.228 11:21:49 compress_isal -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:39:42.228 11:21:49 compress_isal -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:39:42.228 11:21:49 compress_isal -- nvmf/common.sh@278 -- # remove_spdk_ns 00:39:42.228 11:21:49 compress_isal -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:42.228 11:21:49 compress_isal -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:42.228 11:21:49 compress_isal -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:42.228 11:21:49 compress_isal -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:39:42.228 11:21:49 compress_isal -- compress/compress.sh@120 -- # rm -rf 
/tmp/pmem 00:39:42.228 00:39:42.228 real 2m21.829s 00:39:42.228 user 6m18.554s 00:39:42.228 sys 0m20.087s 00:39:42.228 11:21:49 compress_isal -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:42.228 11:21:49 compress_isal -- common/autotest_common.sh@10 -- # set +x 00:39:42.228 ************************************ 00:39:42.228 END TEST compress_isal 00:39:42.228 ************************************ 00:39:42.228 11:21:49 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:39:42.228 11:21:49 -- spdk/autotest.sh@360 -- # '[' 1 -eq 1 ']' 00:39:42.228 11:21:49 -- spdk/autotest.sh@361 -- # run_test blockdev_crypto_aesni /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_aesni 00:39:42.228 11:21:49 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:39:42.228 11:21:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:42.228 11:21:49 -- common/autotest_common.sh@10 -- # set +x 00:39:42.228 ************************************ 00:39:42.228 START TEST blockdev_crypto_aesni 00:39:42.228 ************************************ 00:39:42.228 11:21:49 blockdev_crypto_aesni -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_aesni 00:39:42.487 * Looking for test storage... 
00:39:42.487 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:39:42.487 11:21:49 blockdev_crypto_aesni -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:39:42.487 11:21:49 blockdev_crypto_aesni -- bdev/nbd_common.sh@6 -- # set -e 00:39:42.487 11:21:49 blockdev_crypto_aesni -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:39:42.487 11:21:49 blockdev_crypto_aesni -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:39:42.487 11:21:49 blockdev_crypto_aesni -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:39:42.487 11:21:49 blockdev_crypto_aesni -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:39:42.487 11:21:49 blockdev_crypto_aesni -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:39:42.487 11:21:49 blockdev_crypto_aesni -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:39:42.487 11:21:49 blockdev_crypto_aesni -- bdev/blockdev.sh@20 -- # : 00:39:42.487 11:21:49 blockdev_crypto_aesni -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:39:42.487 11:21:49 blockdev_crypto_aesni -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:39:42.487 11:21:49 blockdev_crypto_aesni -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:39:42.487 11:21:49 blockdev_crypto_aesni -- bdev/blockdev.sh@673 -- # uname -s 00:39:42.487 11:21:49 blockdev_crypto_aesni -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:39:42.487 11:21:49 blockdev_crypto_aesni -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:39:42.487 11:21:49 blockdev_crypto_aesni -- bdev/blockdev.sh@681 -- # test_type=crypto_aesni 00:39:42.487 11:21:49 blockdev_crypto_aesni -- bdev/blockdev.sh@682 -- # crypto_device= 00:39:42.487 11:21:49 blockdev_crypto_aesni -- bdev/blockdev.sh@683 -- # dek= 00:39:42.487 11:21:49 
blockdev_crypto_aesni -- bdev/blockdev.sh@684 -- # env_ctx= 00:39:42.487 11:21:49 blockdev_crypto_aesni -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:39:42.487 11:21:49 blockdev_crypto_aesni -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:39:42.487 11:21:49 blockdev_crypto_aesni -- bdev/blockdev.sh@689 -- # [[ crypto_aesni == bdev ]] 00:39:42.487 11:21:49 blockdev_crypto_aesni -- bdev/blockdev.sh@689 -- # [[ crypto_aesni == crypto_* ]] 00:39:42.487 11:21:49 blockdev_crypto_aesni -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc 00:39:42.487 11:21:49 blockdev_crypto_aesni -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:39:42.487 11:21:49 blockdev_crypto_aesni -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=3826101 00:39:42.487 11:21:49 blockdev_crypto_aesni -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:39:42.487 11:21:49 blockdev_crypto_aesni -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:39:42.487 11:21:49 blockdev_crypto_aesni -- bdev/blockdev.sh@49 -- # waitforlisten 3826101 00:39:42.487 11:21:49 blockdev_crypto_aesni -- common/autotest_common.sh@831 -- # '[' -z 3826101 ']' 00:39:42.487 11:21:49 blockdev_crypto_aesni -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:42.487 11:21:49 blockdev_crypto_aesni -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:42.487 11:21:49 blockdev_crypto_aesni -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:42.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:42.487 11:21:49 blockdev_crypto_aesni -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:42.487 11:21:49 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:39:42.487 [2024-07-25 11:21:49.568430] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:39:42.487 [2024-07-25 11:21:49.568551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3826101 ] 00:39:42.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:42.746 EAL: Requested device 0000:3d:01.0 cannot be used 00:39:42.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:42.746 EAL: Requested device 0000:3d:01.1 cannot be used 00:39:42.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:42.746 EAL: Requested device 0000:3d:01.2 cannot be used 00:39:42.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:42.746 EAL: Requested device 0000:3d:01.3 cannot be used 00:39:42.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:42.746 EAL: Requested device 0000:3d:01.4 cannot be used 00:39:42.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:42.746 EAL: Requested device 0000:3d:01.5 cannot be used 00:39:42.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:42.746 EAL: Requested device 0000:3d:01.6 cannot be used 00:39:42.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:42.746 EAL: Requested device 0000:3d:01.7 cannot be used 00:39:42.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:42.746 EAL: Requested device 0000:3d:02.0 cannot be used 00:39:42.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:42.746 EAL: Requested device 0000:3d:02.1 
cannot be used 00:39:42.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:42.746 EAL: Requested device 0000:3d:02.2 cannot be used 00:39:42.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:42.746 EAL: Requested device 0000:3d:02.3 cannot be used 00:39:42.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:42.746 EAL: Requested device 0000:3d:02.4 cannot be used 00:39:42.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:42.746 EAL: Requested device 0000:3d:02.5 cannot be used 00:39:42.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:42.746 EAL: Requested device 0000:3d:02.6 cannot be used 00:39:42.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:42.746 EAL: Requested device 0000:3d:02.7 cannot be used 00:39:42.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:42.746 EAL: Requested device 0000:3f:01.0 cannot be used 00:39:42.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:42.746 EAL: Requested device 0000:3f:01.1 cannot be used 00:39:42.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:42.746 EAL: Requested device 0000:3f:01.2 cannot be used 00:39:42.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:42.746 EAL: Requested device 0000:3f:01.3 cannot be used 00:39:42.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:42.746 EAL: Requested device 0000:3f:01.4 cannot be used 00:39:42.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:42.746 EAL: Requested device 0000:3f:01.5 cannot be used 00:39:42.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:42.746 EAL: Requested device 0000:3f:01.6 cannot be used 00:39:42.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:42.746 EAL: Requested device 0000:3f:01.7 cannot be used 
00:39:42.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:42.746 EAL: Requested device 0000:3f:02.0 cannot be used 00:39:42.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:42.746 EAL: Requested device 0000:3f:02.1 cannot be used 00:39:42.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:42.746 EAL: Requested device 0000:3f:02.2 cannot be used 00:39:42.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:42.746 EAL: Requested device 0000:3f:02.3 cannot be used 00:39:42.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:42.746 EAL: Requested device 0000:3f:02.4 cannot be used 00:39:42.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:42.746 EAL: Requested device 0000:3f:02.5 cannot be used 00:39:42.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:42.746 EAL: Requested device 0000:3f:02.6 cannot be used 00:39:42.746 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:42.746 EAL: Requested device 0000:3f:02.7 cannot be used 00:39:42.746 [2024-07-25 11:21:49.792618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:43.004 [2024-07-25 11:21:50.063372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:43.570 11:21:50 blockdev_crypto_aesni -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:39:43.570 11:21:50 blockdev_crypto_aesni -- common/autotest_common.sh@864 -- # return 0 00:39:43.570 11:21:50 blockdev_crypto_aesni -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:39:43.570 11:21:50 blockdev_crypto_aesni -- bdev/blockdev.sh@704 -- # setup_crypto_aesni_conf 00:39:43.570 11:21:50 blockdev_crypto_aesni -- bdev/blockdev.sh@145 -- # rpc_cmd 00:39:43.571 11:21:50 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:43.571 11:21:50 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:39:43.571 
[2024-07-25 11:21:50.417062] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:39:43.571 [2024-07-25 11:21:50.425120] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:39:43.571 [2024-07-25 11:21:50.433134] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:39:43.829 [2024-07-25 11:21:50.777044] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:39:47.114 true 00:39:47.114 true 00:39:47.115 true 00:39:47.115 true 00:39:47.373 Malloc0 00:39:47.373 Malloc1 00:39:47.373 Malloc2 00:39:47.373 Malloc3 00:39:47.373 [2024-07-25 11:21:54.468666] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:39:47.373 crypto_ram 00:39:47.373 [2024-07-25 11:21:54.476816] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:39:47.373 crypto_ram2 00:39:47.373 [2024-07-25 11:21:54.484970] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:39:47.373 crypto_ram3 00:39:47.631 [2024-07-25 11:21:54.493010] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:39:47.631 crypto_ram4 00:39:47.631 11:21:54 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:47.631 11:21:54 blockdev_crypto_aesni -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:39:47.631 11:21:54 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:47.631 11:21:54 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:39:47.631 11:21:54 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:47.631 11:21:54 blockdev_crypto_aesni -- bdev/blockdev.sh@739 -- # cat 00:39:47.631 11:21:54 blockdev_crypto_aesni -- bdev/blockdev.sh@739 -- # rpc_cmd 
save_subsystem_config -n accel 00:39:47.631 11:21:54 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:47.631 11:21:54 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:39:47.631 11:21:54 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:47.631 11:21:54 blockdev_crypto_aesni -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:39:47.631 11:21:54 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:47.631 11:21:54 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:39:47.631 11:21:54 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:47.631 11:21:54 blockdev_crypto_aesni -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:39:47.631 11:21:54 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:47.631 11:21:54 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:39:47.631 11:21:54 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:47.631 11:21:54 blockdev_crypto_aesni -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:39:47.631 11:21:54 blockdev_crypto_aesni -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:39:47.631 11:21:54 blockdev_crypto_aesni -- common/autotest_common.sh@561 -- # xtrace_disable 00:39:47.631 11:21:54 blockdev_crypto_aesni -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:39:47.631 11:21:54 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:39:47.631 11:21:54 blockdev_crypto_aesni -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:39:47.631 11:21:54 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:39:47.631 11:21:54 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # jq -r .name 00:39:47.632 11:21:54 blockdev_crypto_aesni -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' 
"1a86306c-1913-5564-aee0-26296b3cc3ec"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "1a86306c-1913-5564-aee0-26296b3cc3ec",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "3ad6a422-7d39-5087-a13d-d3f9ac44ef33"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "3ad6a422-7d39-5087-a13d-d3f9ac44ef33",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "91cdcba5-e4b5-5a07-979a-a12277a78aac"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "91cdcba5-e4b5-5a07-979a-a12277a78aac",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "9b8ad898-2261-5f46-a96f-0e7f78a88594"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "9b8ad898-2261-5f46-a96f-0e7f78a88594",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:39:47.632 11:21:54 blockdev_crypto_aesni -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:39:47.632 11:21:54 blockdev_crypto_aesni -- bdev/blockdev.sh@751 -- # hello_world_bdev=crypto_ram 00:39:47.632 11:21:54 blockdev_crypto_aesni -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:39:47.632 11:21:54 blockdev_crypto_aesni -- bdev/blockdev.sh@753 -- # killprocess 3826101 00:39:47.632 11:21:54 blockdev_crypto_aesni -- common/autotest_common.sh@950 -- # '[' -z 3826101 ']' 00:39:47.632 11:21:54 blockdev_crypto_aesni -- common/autotest_common.sh@954 -- # kill -0 3826101 00:39:47.632 11:21:54 blockdev_crypto_aesni -- common/autotest_common.sh@955 -- # uname 00:39:47.632 11:21:54 blockdev_crypto_aesni -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:39:47.632 11:21:54 blockdev_crypto_aesni -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3826101 00:39:47.890 11:21:54 blockdev_crypto_aesni -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:39:47.890 11:21:54 blockdev_crypto_aesni -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:39:47.890 11:21:54 blockdev_crypto_aesni -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3826101' 00:39:47.890 killing process with pid 3826101 00:39:47.890 11:21:54 blockdev_crypto_aesni -- common/autotest_common.sh@969 -- # kill 3826101 00:39:47.890 11:21:54 blockdev_crypto_aesni -- common/autotest_common.sh@974 -- # wait 3826101 00:39:52.126 11:21:58 blockdev_crypto_aesni -- 
bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:39:52.126 11:21:58 blockdev_crypto_aesni -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:39:52.126 11:21:58 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:39:52.126 11:21:58 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:52.126 11:21:58 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:39:52.126 ************************************ 00:39:52.126 START TEST bdev_hello_world 00:39:52.126 ************************************ 00:39:52.126 11:21:58 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:39:52.126 [2024-07-25 11:21:59.060514] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:39:52.126 [2024-07-25 11:21:59.060623] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3827607 ] 00:39:52.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:52.126 EAL: Requested device 0000:3d:01.0 cannot be used 00:39:52.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:52.126 EAL: Requested device 0000:3d:01.1 cannot be used 00:39:52.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:52.126 EAL: Requested device 0000:3d:01.2 cannot be used 00:39:52.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:52.126 EAL: Requested device 0000:3d:01.3 cannot be used 00:39:52.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:52.126 EAL: Requested device 0000:3d:01.4 cannot be used 00:39:52.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:52.126 EAL: Requested device 0000:3d:01.5 cannot be used 00:39:52.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:52.126 EAL: Requested device 0000:3d:01.6 cannot be used 00:39:52.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:52.126 EAL: Requested device 0000:3d:01.7 cannot be used 00:39:52.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:52.126 EAL: Requested device 0000:3d:02.0 cannot be used 00:39:52.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:52.126 EAL: Requested device 0000:3d:02.1 cannot be used 00:39:52.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:52.126 EAL: Requested device 0000:3d:02.2 cannot be used 00:39:52.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:52.126 EAL: Requested device 0000:3d:02.3 cannot be used 
00:39:52.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:52.126 EAL: Requested device 0000:3d:02.4 cannot be used 00:39:52.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:52.126 EAL: Requested device 0000:3d:02.5 cannot be used 00:39:52.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:52.126 EAL: Requested device 0000:3d:02.6 cannot be used 00:39:52.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:52.126 EAL: Requested device 0000:3d:02.7 cannot be used 00:39:52.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:52.126 EAL: Requested device 0000:3f:01.0 cannot be used 00:39:52.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:52.126 EAL: Requested device 0000:3f:01.1 cannot be used 00:39:52.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:52.126 EAL: Requested device 0000:3f:01.2 cannot be used 00:39:52.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:52.126 EAL: Requested device 0000:3f:01.3 cannot be used 00:39:52.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:52.126 EAL: Requested device 0000:3f:01.4 cannot be used 00:39:52.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:52.126 EAL: Requested device 0000:3f:01.5 cannot be used 00:39:52.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:52.126 EAL: Requested device 0000:3f:01.6 cannot be used 00:39:52.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:52.126 EAL: Requested device 0000:3f:01.7 cannot be used 00:39:52.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:52.126 EAL: Requested device 0000:3f:02.0 cannot be used 00:39:52.126 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:52.126 EAL: Requested device 0000:3f:02.1 cannot be used 00:39:52.126 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:52.126 EAL: Requested device 0000:3f:02.2 cannot be used 00:39:52.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:52.127 EAL: Requested device 0000:3f:02.3 cannot be used 00:39:52.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:52.127 EAL: Requested device 0000:3f:02.4 cannot be used 00:39:52.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:52.127 EAL: Requested device 0000:3f:02.5 cannot be used 00:39:52.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:52.127 EAL: Requested device 0000:3f:02.6 cannot be used 00:39:52.127 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:52.127 EAL: Requested device 0000:3f:02.7 cannot be used 00:39:52.385 [2024-07-25 11:21:59.285536] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:52.643 [2024-07-25 11:21:59.563113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:52.643 [2024-07-25 11:21:59.584896] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:39:52.643 [2024-07-25 11:21:59.592923] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:39:52.643 [2024-07-25 11:21:59.600928] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:39:52.901 [2024-07-25 11:21:59.959794] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:39:56.181 [2024-07-25 11:22:02.833849] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:39:56.181 [2024-07-25 11:22:02.833932] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:39:56.181 [2024-07-25 11:22:02.833957] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred 
pending base bdev arrival 00:39:56.181 [2024-07-25 11:22:02.841862] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:39:56.181 [2024-07-25 11:22:02.841905] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:39:56.181 [2024-07-25 11:22:02.841922] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:39:56.181 [2024-07-25 11:22:02.849894] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:39:56.181 [2024-07-25 11:22:02.849936] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:39:56.181 [2024-07-25 11:22:02.849952] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:39:56.181 [2024-07-25 11:22:02.857898] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:39:56.182 [2024-07-25 11:22:02.857934] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:39:56.182 [2024-07-25 11:22:02.857949] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:39:56.182 [2024-07-25 11:22:03.110374] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:39:56.182 [2024-07-25 11:22:03.110424] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:39:56.182 [2024-07-25 11:22:03.110450] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:39:56.182 [2024-07-25 11:22:03.112726] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:39:56.182 [2024-07-25 11:22:03.112837] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:39:56.182 [2024-07-25 11:22:03.112861] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:39:56.182 [2024-07-25 11:22:03.112937] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello 
World! 00:39:56.182 00:39:56.182 [2024-07-25 11:22:03.112966] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:39:58.720 00:39:58.720 real 0m6.706s 00:39:58.720 user 0m6.129s 00:39:58.720 sys 0m0.525s 00:39:58.720 11:22:05 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:39:58.720 11:22:05 blockdev_crypto_aesni.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:39:58.720 ************************************ 00:39:58.720 END TEST bdev_hello_world 00:39:58.720 ************************************ 00:39:58.720 11:22:05 blockdev_crypto_aesni -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:39:58.720 11:22:05 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:39:58.720 11:22:05 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:39:58.720 11:22:05 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:39:58.720 ************************************ 00:39:58.720 START TEST bdev_bounds 00:39:58.720 ************************************ 00:39:58.720 11:22:05 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:39:58.720 11:22:05 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=3828763 00:39:58.720 11:22:05 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:39:58.720 11:22:05 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@288 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:39:58.720 11:22:05 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 3828763' 00:39:58.720 Process bdevio pid: 3828763 00:39:58.720 11:22:05 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 3828763 00:39:58.720 11:22:05 
blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 3828763 ']' 00:39:58.720 11:22:05 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:58.720 11:22:05 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:39:58.720 11:22:05 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:58.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:58.720 11:22:05 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:39:58.720 11:22:05 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:39:58.979 [2024-07-25 11:22:05.856208] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:39:58.979 [2024-07-25 11:22:05.856320] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3828763 ] 00:39:58.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:58.979 EAL: Requested device 0000:3d:01.0 cannot be used 00:39:58.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:58.979 EAL: Requested device 0000:3d:01.1 cannot be used 00:39:58.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:58.979 EAL: Requested device 0000:3d:01.2 cannot be used 00:39:58.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:58.979 EAL: Requested device 0000:3d:01.3 cannot be used 00:39:58.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:58.979 EAL: Requested device 0000:3d:01.4 cannot be used 00:39:58.979 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:39:58.979 EAL: Requested device 0000:3d:01.5 cannot be used 00:39:58.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:58.979 EAL: Requested device 0000:3d:01.6 cannot be used 00:39:58.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:58.979 EAL: Requested device 0000:3d:01.7 cannot be used 00:39:58.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:58.979 EAL: Requested device 0000:3d:02.0 cannot be used 00:39:58.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:58.979 EAL: Requested device 0000:3d:02.1 cannot be used 00:39:58.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:58.979 EAL: Requested device 0000:3d:02.2 cannot be used 00:39:58.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:58.979 EAL: Requested device 0000:3d:02.3 cannot be used 00:39:58.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:58.979 EAL: Requested device 0000:3d:02.4 cannot be used 00:39:58.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:58.979 EAL: Requested device 0000:3d:02.5 cannot be used 00:39:58.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:58.979 EAL: Requested device 0000:3d:02.6 cannot be used 00:39:58.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:58.979 EAL: Requested device 0000:3d:02.7 cannot be used 00:39:58.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:58.979 EAL: Requested device 0000:3f:01.0 cannot be used 00:39:58.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:58.979 EAL: Requested device 0000:3f:01.1 cannot be used 00:39:58.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:58.979 EAL: Requested device 0000:3f:01.2 cannot be used 00:39:58.979 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:39:58.979 EAL: Requested device 0000:3f:01.3 cannot be used 00:39:58.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:58.979 EAL: Requested device 0000:3f:01.4 cannot be used 00:39:58.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:58.979 EAL: Requested device 0000:3f:01.5 cannot be used 00:39:58.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:58.979 EAL: Requested device 0000:3f:01.6 cannot be used 00:39:58.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:58.979 EAL: Requested device 0000:3f:01.7 cannot be used 00:39:58.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:58.979 EAL: Requested device 0000:3f:02.0 cannot be used 00:39:58.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:58.979 EAL: Requested device 0000:3f:02.1 cannot be used 00:39:58.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:58.979 EAL: Requested device 0000:3f:02.2 cannot be used 00:39:58.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:58.979 EAL: Requested device 0000:3f:02.3 cannot be used 00:39:58.979 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:58.979 EAL: Requested device 0000:3f:02.4 cannot be used 00:39:58.980 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:58.980 EAL: Requested device 0000:3f:02.5 cannot be used 00:39:58.980 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:58.980 EAL: Requested device 0000:3f:02.6 cannot be used 00:39:58.980 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:39:58.980 EAL: Requested device 0000:3f:02.7 cannot be used 00:39:58.980 [2024-07-25 11:22:06.086518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:59.546 [2024-07-25 11:22:06.372477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:39:59.546 
[2024-07-25 11:22:06.372546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:39:59.546 [2024-07-25 11:22:06.372548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:39:59.546 [2024-07-25 11:22:06.394448] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:39:59.546 [2024-07-25 11:22:06.402439] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:39:59.546 [2024-07-25 11:22:06.410468] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:39:59.806 [2024-07-25 11:22:06.784455] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:40:03.089 [2024-07-25 11:22:09.660681] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:40:03.089 [2024-07-25 11:22:09.660762] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:40:03.089 [2024-07-25 11:22:09.660781] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:40:03.089 [2024-07-25 11:22:09.668700] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:40:03.089 [2024-07-25 11:22:09.668742] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:40:03.089 [2024-07-25 11:22:09.668758] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:40:03.089 [2024-07-25 11:22:09.676744] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:40:03.089 [2024-07-25 11:22:09.676800] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:40:03.089 [2024-07-25 11:22:09.676816] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:40:03.089 
[2024-07-25 11:22:09.684735] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:40:03.089 [2024-07-25 11:22:09.684768] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:40:03.089 [2024-07-25 11:22:09.684783] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:40:03.089 11:22:10 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:03.089 11:22:10 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:40:03.089 11:22:10 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:40:03.347 I/O targets: 00:40:03.347 crypto_ram: 65536 blocks of 512 bytes (32 MiB) 00:40:03.347 crypto_ram2: 65536 blocks of 512 bytes (32 MiB) 00:40:03.348 crypto_ram3: 8192 blocks of 4096 bytes (32 MiB) 00:40:03.348 crypto_ram4: 8192 blocks of 4096 bytes (32 MiB) 00:40:03.348 00:40:03.348 00:40:03.348 CUnit - A unit testing framework for C - Version 2.1-3 00:40:03.348 http://cunit.sourceforge.net/ 00:40:03.348 00:40:03.348 00:40:03.348 Suite: bdevio tests on: crypto_ram4 00:40:03.348 Test: blockdev write read block ...passed 00:40:03.348 Test: blockdev write zeroes read block ...passed 00:40:03.348 Test: blockdev write zeroes read no split ...passed 00:40:03.348 Test: blockdev write zeroes read split ...passed 00:40:03.348 Test: blockdev write zeroes read split partial ...passed 00:40:03.348 Test: blockdev reset ...passed 00:40:03.348 Test: blockdev write read 8 blocks ...passed 00:40:03.348 Test: blockdev write read size > 128k ...passed 00:40:03.348 Test: blockdev write read invalid size ...passed 00:40:03.348 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:40:03.348 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:40:03.348 Test: blockdev write 
read max offset ...passed 00:40:03.348 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:40:03.348 Test: blockdev writev readv 8 blocks ...passed 00:40:03.348 Test: blockdev writev readv 30 x 1block ...passed 00:40:03.348 Test: blockdev writev readv block ...passed 00:40:03.348 Test: blockdev writev readv size > 128k ...passed 00:40:03.348 Test: blockdev writev readv size > 128k in two iovs ...passed 00:40:03.348 Test: blockdev comparev and writev ...passed 00:40:03.348 Test: blockdev nvme passthru rw ...passed 00:40:03.348 Test: blockdev nvme passthru vendor specific ...passed 00:40:03.348 Test: blockdev nvme admin passthru ...passed 00:40:03.348 Test: blockdev copy ...passed 00:40:03.348 Suite: bdevio tests on: crypto_ram3 00:40:03.348 Test: blockdev write read block ...passed 00:40:03.348 Test: blockdev write zeroes read block ...passed 00:40:03.348 Test: blockdev write zeroes read no split ...passed 00:40:03.348 Test: blockdev write zeroes read split ...passed 00:40:03.605 Test: blockdev write zeroes read split partial ...passed 00:40:03.605 Test: blockdev reset ...passed 00:40:03.605 Test: blockdev write read 8 blocks ...passed 00:40:03.605 Test: blockdev write read size > 128k ...passed 00:40:03.605 Test: blockdev write read invalid size ...passed 00:40:03.605 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:40:03.605 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:40:03.605 Test: blockdev write read max offset ...passed 00:40:03.605 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:40:03.605 Test: blockdev writev readv 8 blocks ...passed 00:40:03.605 Test: blockdev writev readv 30 x 1block ...passed 00:40:03.605 Test: blockdev writev readv block ...passed 00:40:03.605 Test: blockdev writev readv size > 128k ...passed 00:40:03.605 Test: blockdev writev readv size > 128k in two iovs ...passed 00:40:03.605 Test: blockdev comparev and writev ...passed 
00:40:03.605 Test: blockdev nvme passthru rw ...passed 00:40:03.605 Test: blockdev nvme passthru vendor specific ...passed 00:40:03.605 Test: blockdev nvme admin passthru ...passed 00:40:03.605 Test: blockdev copy ...passed 00:40:03.605 Suite: bdevio tests on: crypto_ram2 00:40:03.605 Test: blockdev write read block ...passed 00:40:03.606 Test: blockdev write zeroes read block ...passed 00:40:03.606 Test: blockdev write zeroes read no split ...passed 00:40:03.606 Test: blockdev write zeroes read split ...passed 00:40:03.606 Test: blockdev write zeroes read split partial ...passed 00:40:03.606 Test: blockdev reset ...passed 00:40:03.606 Test: blockdev write read 8 blocks ...passed 00:40:03.606 Test: blockdev write read size > 128k ...passed 00:40:03.606 Test: blockdev write read invalid size ...passed 00:40:03.606 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:40:03.606 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:40:03.606 Test: blockdev write read max offset ...passed 00:40:03.606 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:40:03.606 Test: blockdev writev readv 8 blocks ...passed 00:40:03.606 Test: blockdev writev readv 30 x 1block ...passed 00:40:03.606 Test: blockdev writev readv block ...passed 00:40:03.606 Test: blockdev writev readv size > 128k ...passed 00:40:03.606 Test: blockdev writev readv size > 128k in two iovs ...passed 00:40:03.606 Test: blockdev comparev and writev ...passed 00:40:03.606 Test: blockdev nvme passthru rw ...passed 00:40:03.606 Test: blockdev nvme passthru vendor specific ...passed 00:40:03.606 Test: blockdev nvme admin passthru ...passed 00:40:03.606 Test: blockdev copy ...passed 00:40:03.606 Suite: bdevio tests on: crypto_ram 00:40:03.606 Test: blockdev write read block ...passed 00:40:03.606 Test: blockdev write zeroes read block ...passed 00:40:03.606 Test: blockdev write zeroes read no split ...passed 00:40:03.864 Test: blockdev write zeroes 
read split ...passed 00:40:03.864 Test: blockdev write zeroes read split partial ...passed 00:40:03.864 Test: blockdev reset ...passed 00:40:03.864 Test: blockdev write read 8 blocks ...passed 00:40:03.864 Test: blockdev write read size > 128k ...passed 00:40:03.864 Test: blockdev write read invalid size ...passed 00:40:03.864 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:40:03.864 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:40:03.864 Test: blockdev write read max offset ...passed 00:40:03.864 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:40:03.864 Test: blockdev writev readv 8 blocks ...passed 00:40:03.864 Test: blockdev writev readv 30 x 1block ...passed 00:40:03.864 Test: blockdev writev readv block ...passed 00:40:03.864 Test: blockdev writev readv size > 128k ...passed 00:40:03.864 Test: blockdev writev readv size > 128k in two iovs ...passed 00:40:03.864 Test: blockdev comparev and writev ...passed 00:40:03.864 Test: blockdev nvme passthru rw ...passed 00:40:03.864 Test: blockdev nvme passthru vendor specific ...passed 00:40:03.864 Test: blockdev nvme admin passthru ...passed 00:40:03.864 Test: blockdev copy ...passed 00:40:03.864 00:40:03.864 Run Summary: Type Total Ran Passed Failed Inactive 00:40:03.864 suites 4 4 n/a 0 0 00:40:03.864 tests 92 92 92 0 0 00:40:03.864 asserts 520 520 520 0 n/a 00:40:03.864 00:40:03.864 Elapsed time = 1.488 seconds 00:40:03.864 0 00:40:03.864 11:22:10 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 3828763 00:40:03.864 11:22:10 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 3828763 ']' 00:40:03.864 11:22:10 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 3828763 00:40:03.864 11:22:10 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:40:03.864 11:22:10 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@955 
-- # '[' Linux = Linux ']' 00:40:03.864 11:22:10 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3828763 00:40:03.864 11:22:10 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:03.864 11:22:10 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:03.864 11:22:10 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3828763' 00:40:03.864 killing process with pid 3828763 00:40:03.864 11:22:10 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@969 -- # kill 3828763 00:40:03.864 11:22:10 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@974 -- # wait 3828763 00:40:06.397 11:22:13 blockdev_crypto_aesni.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:40:06.397 00:40:06.397 real 0m7.753s 00:40:06.397 user 0m20.974s 00:40:06.397 sys 0m0.798s 00:40:06.397 11:22:13 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:06.397 11:22:13 blockdev_crypto_aesni.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:40:06.397 ************************************ 00:40:06.397 END TEST bdev_bounds 00:40:06.397 ************************************ 00:40:06.654 11:22:13 blockdev_crypto_aesni -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '' 00:40:06.654 11:22:13 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:40:06.654 11:22:13 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:06.654 11:22:13 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:40:06.654 ************************************ 00:40:06.654 START TEST bdev_nbd 00:40:06.654 ************************************ 00:40:06.654 
11:22:13 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '' 00:40:06.654 11:22:13 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:40:06.654 11:22:13 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:40:06.654 11:22:13 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:06.654 11:22:13 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:40:06.654 11:22:13 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:40:06.654 11:22:13 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:40:06.654 11:22:13 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=4 00:40:06.654 11:22:13 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:40:06.654 11:22:13 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:40:06.654 11:22:13 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:40:06.654 11:22:13 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=4 00:40:06.654 11:22:13 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:40:06.654 11:22:13 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:40:06.654 11:22:13 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 
'crypto_ram4') 00:40:06.654 11:22:13 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:40:06.654 11:22:13 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=3829968 00:40:06.654 11:22:13 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:40:06.654 11:22:13 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:40:06.654 11:22:13 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 3829968 /var/tmp/spdk-nbd.sock 00:40:06.654 11:22:13 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 3829968 ']' 00:40:06.654 11:22:13 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:40:06.654 11:22:13 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:40:06.654 11:22:13 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:40:06.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:40:06.654 11:22:13 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:40:06.654 11:22:13 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:40:06.654 [2024-07-25 11:22:13.687343] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:40:06.655 [2024-07-25 11:22:13.687448] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:06.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:06.913 EAL: Requested device 0000:3d:01.0 cannot be used 00:40:06.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:06.913 EAL: Requested device 0000:3d:01.1 cannot be used 00:40:06.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:06.913 EAL: Requested device 0000:3d:01.2 cannot be used 00:40:06.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:06.913 EAL: Requested device 0000:3d:01.3 cannot be used 00:40:06.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:06.913 EAL: Requested device 0000:3d:01.4 cannot be used 00:40:06.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:06.913 EAL: Requested device 0000:3d:01.5 cannot be used 00:40:06.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:06.913 EAL: Requested device 0000:3d:01.6 cannot be used 00:40:06.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:06.913 EAL: Requested device 0000:3d:01.7 cannot be used 00:40:06.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:06.913 EAL: Requested device 0000:3d:02.0 cannot be used 00:40:06.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:06.913 EAL: Requested device 0000:3d:02.1 cannot be used 00:40:06.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:06.913 EAL: Requested device 0000:3d:02.2 cannot be used 00:40:06.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:06.913 EAL: Requested device 0000:3d:02.3 cannot be used 00:40:06.913 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:06.913 EAL: Requested device 0000:3d:02.4 cannot be used 00:40:06.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:06.913 EAL: Requested device 0000:3d:02.5 cannot be used 00:40:06.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:06.913 EAL: Requested device 0000:3d:02.6 cannot be used 00:40:06.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:06.913 EAL: Requested device 0000:3d:02.7 cannot be used 00:40:06.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:06.913 EAL: Requested device 0000:3f:01.0 cannot be used 00:40:06.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:06.913 EAL: Requested device 0000:3f:01.1 cannot be used 00:40:06.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:06.913 EAL: Requested device 0000:3f:01.2 cannot be used 00:40:06.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:06.913 EAL: Requested device 0000:3f:01.3 cannot be used 00:40:06.913 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:06.914 EAL: Requested device 0000:3f:01.4 cannot be used 00:40:06.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:06.914 EAL: Requested device 0000:3f:01.5 cannot be used 00:40:06.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:06.914 EAL: Requested device 0000:3f:01.6 cannot be used 00:40:06.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:06.914 EAL: Requested device 0000:3f:01.7 cannot be used 00:40:06.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:06.914 EAL: Requested device 0000:3f:02.0 cannot be used 00:40:06.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:06.914 EAL: Requested device 0000:3f:02.1 cannot be used 00:40:06.914 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:06.914 EAL: Requested device 0000:3f:02.2 cannot be used 00:40:06.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:06.914 EAL: Requested device 0000:3f:02.3 cannot be used 00:40:06.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:06.914 EAL: Requested device 0000:3f:02.4 cannot be used 00:40:06.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:06.914 EAL: Requested device 0000:3f:02.5 cannot be used 00:40:06.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:06.914 EAL: Requested device 0000:3f:02.6 cannot be used 00:40:06.914 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:06.914 EAL: Requested device 0000:3f:02.7 cannot be used 00:40:06.914 [2024-07-25 11:22:13.889087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:07.172 [2024-07-25 11:22:14.161623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:07.172 [2024-07-25 11:22:14.183404] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:40:07.172 [2024-07-25 11:22:14.191435] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:40:07.172 [2024-07-25 11:22:14.199460] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:40:07.800 [2024-07-25 11:22:14.591115] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:40:11.084 [2024-07-25 11:22:17.496829] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:40:11.084 [2024-07-25 11:22:17.496911] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:40:11.084 [2024-07-25 11:22:17.496935] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred 
pending base bdev arrival 00:40:11.084 [2024-07-25 11:22:17.504844] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:40:11.084 [2024-07-25 11:22:17.504887] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:40:11.084 [2024-07-25 11:22:17.504905] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:40:11.084 [2024-07-25 11:22:17.512879] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:40:11.084 [2024-07-25 11:22:17.512922] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:40:11.084 [2024-07-25 11:22:17.512938] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:40:11.084 [2024-07-25 11:22:17.520858] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:40:11.084 [2024-07-25 11:22:17.520915] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:40:11.084 [2024-07-25 11:22:17.520931] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:40:11.084 11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:40:11.084 11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:40:11.084 11:22:18 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' 00:40:11.084 11:22:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:11.084 11:22:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:40:11.084 11:22:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 
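Once the crypto bdevs exist, the harness attaches them to NBD devices and later lists the mapping with `nbd_get_disks`, filtering the returned JSON array through `jq -r '.[] | .nbd_device'`. That parsing step can be rehearsed standalone (requires `jq`; the sample JSON mirrors the device/bdev mapping printed further down in this trace):

```shell
# Reproduce the harness's device-name extraction from nbd_get_disks
# output. The JSON here is copied from the mapping shown in this log.
nbd_disks_json='[
  {"nbd_device": "/dev/nbd0", "bdev_name": "crypto_ram"},
  {"nbd_device": "/dev/nbd1", "bdev_name": "crypto_ram2"},
  {"nbd_device": "/dev/nbd2", "bdev_name": "crypto_ram3"},
  {"nbd_device": "/dev/nbd3", "bdev_name": "crypto_ram4"}
]'
# Same filter the harness uses: keep only the device paths.
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd)
echo "$count"
```

The `grep -c /dev/nbd` count is what the harness compares against the expected device total (4 here, 0 after all disks are stopped).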
00:40:11.084 11:22:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' 00:40:11.084 11:22:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:11.084 11:22:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:40:11.085 11:22:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:40:11.085 11:22:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:40:11.085 11:22:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:40:11.085 11:22:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:40:11.085 11:22:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:40:11.085 11:22:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:40:11.342 11:22:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:40:11.342 11:22:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:40:11.342 11:22:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:40:11.342 11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:40:11.342 11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:40:11.342 11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:40:11.342 11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:40:11.342 11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:40:11.342 
11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:40:11.342 11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:40:11.342 11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:40:11.342 11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:11.342 1+0 records in 00:40:11.342 1+0 records out 00:40:11.342 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319533 s, 12.8 MB/s 00:40:11.342 11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:11.342 11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:40:11.342 11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:11.342 11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:40:11.342 11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:40:11.342 11:22:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:40:11.342 11:22:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:40:11.342 11:22:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 00:40:11.600 11:22:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:40:11.600 11:22:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:40:11.600 11:22:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:40:11.600 11:22:18 
blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:40:11.600 11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:40:11.600 11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:40:11.600 11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:40:11.600 11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:40:11.601 11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:40:11.601 11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:40:11.601 11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:40:11.601 11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:11.601 1+0 records in 00:40:11.601 1+0 records out 00:40:11.601 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301287 s, 13.6 MB/s 00:40:11.601 11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:11.601 11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:40:11.601 11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:11.601 11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:40:11.601 11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:40:11.601 11:22:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:40:11.601 11:22:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 
00:40:11.601 11:22:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:40:11.859 11:22:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:40:11.859 11:22:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:40:11.859 11:22:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:40:11.859 11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:40:11.859 11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:40:11.859 11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:40:11.859 11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:40:11.859 11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:40:11.859 11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:40:11.859 11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:40:11.859 11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:40:11.859 11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:11.859 1+0 records in 00:40:11.859 1+0 records out 00:40:11.859 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336474 s, 12.2 MB/s 00:40:11.859 11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:11.859 11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:40:11.859 11:22:18 
blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:11.859 11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:40:11.859 11:22:18 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:40:11.859 11:22:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:40:11.859 11:22:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:40:11.859 11:22:18 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram4 00:40:12.118 11:22:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:40:12.118 11:22:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:40:12.118 11:22:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:40:12.118 11:22:19 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:40:12.118 11:22:19 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:40:12.118 11:22:19 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:40:12.118 11:22:19 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:40:12.118 11:22:19 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:40:12.118 11:22:19 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:40:12.118 11:22:19 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:40:12.118 11:22:19 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:40:12.118 11:22:19 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 
of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:12.118 1+0 records in 00:40:12.118 1+0 records out 00:40:12.118 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324992 s, 12.6 MB/s 00:40:12.118 11:22:19 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:12.118 11:22:19 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:40:12.118 11:22:19 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:12.118 11:22:19 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:40:12.118 11:22:19 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:40:12.118 11:22:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:40:12.118 11:22:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:40:12.118 11:22:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:40:12.376 11:22:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:40:12.376 { 00:40:12.376 "nbd_device": "/dev/nbd0", 00:40:12.376 "bdev_name": "crypto_ram" 00:40:12.376 }, 00:40:12.376 { 00:40:12.376 "nbd_device": "/dev/nbd1", 00:40:12.376 "bdev_name": "crypto_ram2" 00:40:12.376 }, 00:40:12.376 { 00:40:12.376 "nbd_device": "/dev/nbd2", 00:40:12.376 "bdev_name": "crypto_ram3" 00:40:12.376 }, 00:40:12.376 { 00:40:12.376 "nbd_device": "/dev/nbd3", 00:40:12.376 "bdev_name": "crypto_ram4" 00:40:12.376 } 00:40:12.376 ]' 00:40:12.376 11:22:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:40:12.376 11:22:19 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:40:12.376 { 00:40:12.376 "nbd_device": "/dev/nbd0", 00:40:12.376 "bdev_name": "crypto_ram" 00:40:12.376 }, 00:40:12.376 { 00:40:12.376 "nbd_device": "/dev/nbd1", 00:40:12.376 "bdev_name": "crypto_ram2" 00:40:12.376 }, 00:40:12.376 { 00:40:12.376 "nbd_device": "/dev/nbd2", 00:40:12.376 "bdev_name": "crypto_ram3" 00:40:12.376 }, 00:40:12.376 { 00:40:12.376 "nbd_device": "/dev/nbd3", 00:40:12.376 "bdev_name": "crypto_ram4" 00:40:12.376 } 00:40:12.376 ]' 00:40:12.376 11:22:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:40:12.377 11:22:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3' 00:40:12.377 11:22:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:12.377 11:22:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3') 00:40:12.377 11:22:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:40:12.377 11:22:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:40:12.377 11:22:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:12.377 11:22:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:40:12.635 11:22:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:40:12.635 11:22:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:40:12.635 11:22:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:40:12.635 11:22:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:12.635 11:22:19 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:12.635 11:22:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:12.635 11:22:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:12.635 11:22:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:12.635 11:22:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:12.635 11:22:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:40:12.893 11:22:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:40:12.893 11:22:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:40:12.893 11:22:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:40:12.893 11:22:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:12.893 11:22:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:12.893 11:22:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:40:12.893 11:22:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:12.893 11:22:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:12.893 11:22:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:12.893 11:22:19 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:40:13.151 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:40:13.151 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:40:13.151 
11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:40:13.151 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:13.151 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:13.151 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:40:13.151 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:13.151 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:13.151 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:13.151 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:40:13.409 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:40:13.409 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:40:13.409 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:40:13.409 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:13.409 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:13.409 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:40:13.409 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:13.409 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:13.409 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:40:13.409 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:13.409 11:22:20 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:40:13.668 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:40:13.668 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:40:13.668 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:40:13.668 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:40:13.668 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:40:13.668 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:40:13.668 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:40:13.668 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:40:13.668 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:40:13.668 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:40:13.668 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:40:13.668 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:40:13.668 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:40:13.668 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:13.668 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:40:13.668 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:40:13.668 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' 
'/dev/nbd11') 00:40:13.668 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:40:13.668 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram2 crypto_ram3 crypto_ram4' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:40:13.668 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:13.668 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram2' 'crypto_ram3' 'crypto_ram4') 00:40:13.668 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:40:13.668 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:40:13.668 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:40:13.668 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:40:13.668 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:40:13.668 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:40:13.668 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:40:13.927 /dev/nbd0 00:40:13.927 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:40:13.927 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:40:13.927 11:22:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:40:13.927 11:22:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:40:13.927 11:22:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:40:13.927 11:22:20 
blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:40:13.927 11:22:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:40:13.927 11:22:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:40:13.927 11:22:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:40:13.927 11:22:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:40:13.927 11:22:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:13.927 1+0 records in 00:40:13.927 1+0 records out 00:40:13.927 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345203 s, 11.9 MB/s 00:40:13.927 11:22:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:13.927 11:22:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:40:13.927 11:22:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:13.927 11:22:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:40:13.927 11:22:20 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:40:13.927 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:13.927 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:40:13.927 11:22:20 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 /dev/nbd1 00:40:14.185 /dev/nbd1 00:40:14.185 11:22:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # 
basename /dev/nbd1 00:40:14.185 11:22:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:40:14.185 11:22:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:40:14.185 11:22:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:40:14.185 11:22:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:40:14.185 11:22:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:40:14.185 11:22:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:40:14.185 11:22:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:40:14.185 11:22:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:40:14.185 11:22:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:40:14.186 11:22:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:14.186 1+0 records in 00:40:14.186 1+0 records out 00:40:14.186 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337173 s, 12.1 MB/s 00:40:14.186 11:22:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:14.186 11:22:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:40:14.186 11:22:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:14.186 11:22:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:40:14.186 11:22:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:40:14.186 11:22:21 blockdev_crypto_aesni.bdev_nbd 
-- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:14.186 11:22:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:40:14.186 11:22:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd10 00:40:14.445 /dev/nbd10 00:40:14.445 11:22:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:40:14.445 11:22:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:40:14.445 11:22:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:40:14.445 11:22:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:40:14.445 11:22:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:40:14.445 11:22:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:40:14.445 11:22:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:40:14.445 11:22:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:40:14.445 11:22:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:40:14.445 11:22:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:40:14.445 11:22:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:14.445 1+0 records in 00:40:14.445 1+0 records out 00:40:14.445 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391183 s, 10.5 MB/s 00:40:14.445 11:22:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:14.445 11:22:21 blockdev_crypto_aesni.bdev_nbd -- 
common/autotest_common.sh@886 -- # size=4096 00:40:14.445 11:22:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:14.445 11:22:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:40:14.445 11:22:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:40:14.445 11:22:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:14.445 11:22:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:40:14.445 11:22:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram4 /dev/nbd11 00:40:14.704 /dev/nbd11 00:40:14.704 11:22:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:40:14.704 11:22:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:40:14.704 11:22:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:40:14.704 11:22:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:40:14.704 11:22:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:40:14.704 11:22:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:40:14.704 11:22:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:40:14.704 11:22:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:40:14.704 11:22:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:40:14.704 11:22:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:40:14.704 11:22:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@885 -- # dd 
if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:40:14.704 1+0 records in 00:40:14.704 1+0 records out 00:40:14.704 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340565 s, 12.0 MB/s 00:40:14.704 11:22:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:14.704 11:22:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:40:14.704 11:22:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:40:14.704 11:22:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:40:14.704 11:22:21 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:40:14.704 11:22:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:40:14.704 11:22:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:40:14.704 11:22:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:40:14.704 11:22:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:14.704 11:22:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:40:14.704 11:22:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:40:14.704 { 00:40:14.704 "nbd_device": "/dev/nbd0", 00:40:14.704 "bdev_name": "crypto_ram" 00:40:14.704 }, 00:40:14.704 { 00:40:14.704 "nbd_device": "/dev/nbd1", 00:40:14.704 "bdev_name": "crypto_ram2" 00:40:14.704 }, 00:40:14.704 { 00:40:14.704 "nbd_device": "/dev/nbd10", 00:40:14.704 "bdev_name": "crypto_ram3" 00:40:14.704 }, 00:40:14.704 { 00:40:14.704 "nbd_device": "/dev/nbd11", 
00:40:14.704 "bdev_name": "crypto_ram4" 00:40:14.704 } 00:40:14.704 ]' 00:40:14.704 11:22:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:40:14.704 { 00:40:14.704 "nbd_device": "/dev/nbd0", 00:40:14.704 "bdev_name": "crypto_ram" 00:40:14.704 }, 00:40:14.704 { 00:40:14.704 "nbd_device": "/dev/nbd1", 00:40:14.704 "bdev_name": "crypto_ram2" 00:40:14.704 }, 00:40:14.704 { 00:40:14.704 "nbd_device": "/dev/nbd10", 00:40:14.704 "bdev_name": "crypto_ram3" 00:40:14.704 }, 00:40:14.704 { 00:40:14.704 "nbd_device": "/dev/nbd11", 00:40:14.704 "bdev_name": "crypto_ram4" 00:40:14.704 } 00:40:14.704 ]' 00:40:14.704 11:22:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:40:14.962 11:22:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:40:14.962 /dev/nbd1 00:40:14.962 /dev/nbd10 00:40:14.962 /dev/nbd11' 00:40:14.962 11:22:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:40:14.962 /dev/nbd1 00:40:14.962 /dev/nbd10 00:40:14.962 /dev/nbd11' 00:40:14.962 11:22:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:40:14.962 11:22:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=4 00:40:14.962 11:22:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 4 00:40:14.962 11:22:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=4 00:40:14.963 11:22:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 4 -ne 4 ']' 00:40:14.963 11:22:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' write 00:40:14.963 11:22:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:40:14.963 11:22:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:40:14.963 11:22:21 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:40:14.963 11:22:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:40:14.963 11:22:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:40:14.963 11:22:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:40:14.963 256+0 records in 00:40:14.963 256+0 records out 00:40:14.963 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0111722 s, 93.9 MB/s 00:40:14.963 11:22:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:40:14.963 11:22:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:40:14.963 256+0 records in 00:40:14.963 256+0 records out 00:40:14.963 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0576861 s, 18.2 MB/s 00:40:14.963 11:22:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:40:14.963 11:22:21 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:40:14.963 256+0 records in 00:40:14.963 256+0 records out 00:40:14.963 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0645536 s, 16.2 MB/s 00:40:14.963 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:40:14.963 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:40:14.963 256+0 records in 00:40:14.963 256+0 records out 00:40:14.963 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.0417472 s, 25.1 MB/s 00:40:14.963 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:40:14.963 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:40:15.221 256+0 records in 00:40:15.221 256+0 records out 00:40:15.221 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0383783 s, 27.3 MB/s 00:40:15.221 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' verify 00:40:15.221 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:40:15.221 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:40:15.221 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:40:15.221 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:40:15.221 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:40:15.221 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:40:15.221 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:40:15.221 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:40:15.221 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:40:15.221 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:40:15.221 11:22:22 blockdev_crypto_aesni.bdev_nbd -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:40:15.221 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:40:15.221 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:40:15.221 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:40:15.221 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:40:15.221 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:40:15.221 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:15.221 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:40:15.221 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:40:15.221 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:40:15.222 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:15.222 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:40:15.480 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:40:15.480 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:40:15.480 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:40:15.480 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 
)) 00:40:15.480 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:15.480 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:15.480 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:15.480 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:15.480 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:15.480 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:40:15.738 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:40:15.738 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:40:15.738 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:40:15.738 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:15.738 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:15.738 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:40:15.738 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:15.738 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:15.738 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:15.738 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:40:15.997 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:40:15.997 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd10 00:40:15.997 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:40:15.997 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:15.997 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:15.997 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:40:15.997 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:15.997 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:15.997 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:15.997 11:22:22 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:40:16.256 11:22:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:40:16.256 11:22:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:40:16.256 11:22:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:40:16.256 11:22:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:16.256 11:22:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:16.256 11:22:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:40:16.256 11:22:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:16.256 11:22:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:16.256 11:22:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:40:16.256 11:22:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:16.256 11:22:23 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:40:16.514 11:22:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:40:16.514 11:22:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:40:16.514 11:22:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:40:16.514 11:22:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:40:16.514 11:22:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:40:16.514 11:22:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:40:16.514 11:22:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:40:16.514 11:22:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:40:16.514 11:22:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:40:16.514 11:22:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:40:16.514 11:22:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:40:16.514 11:22:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:40:16.514 11:22:23 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:40:16.514 11:22:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:16.514 11:22:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:40:16.514 11:22:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:40:16.514 11:22:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:40:16.514 11:22:23 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:40:16.772 malloc_lvol_verify 00:40:16.772 11:22:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:40:17.029 ee74ec77-83f6-4672-b89e-da609c198c51 00:40:17.029 11:22:23 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:40:17.029 f716f745-8b49-4261-a233-bf1644ce1a67 00:40:17.287 11:22:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:40:17.287 /dev/nbd0 00:40:17.287 11:22:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:40:17.287 mke2fs 1.46.5 (30-Dec-2021) 00:40:17.545 Discarding device blocks: 0/4096 done 00:40:17.545 Creating filesystem with 4096 1k blocks and 1024 inodes 00:40:17.545 00:40:17.545 Allocating group tables: 0/1 done 00:40:17.545 Writing inode tables: 0/1 done 00:40:17.545 Creating journal (1024 blocks): done 00:40:17.545 Writing superblocks and filesystem accounting information: 0/1 done 00:40:17.545 00:40:17.545 11:22:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:40:17.545 11:22:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:40:17.545 11:22:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:40:17.545 11:22:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:40:17.545 11:22:24 
blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:40:17.545 11:22:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:40:17.545 11:22:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:40:17.545 11:22:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:40:17.545 11:22:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:40:17.803 11:22:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:40:17.803 11:22:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:40:17.803 11:22:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:40:17.803 11:22:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:40:17.803 11:22:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:40:17.803 11:22:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:40:17.803 11:22:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:40:17.803 11:22:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:40:17.803 11:22:24 blockdev_crypto_aesni.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:40:17.803 11:22:24 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 3829968 00:40:17.803 11:22:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 3829968 ']' 00:40:17.803 11:22:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 3829968 00:40:17.803 11:22:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:40:17.803 11:22:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:40:17.803 11:22:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3829968 00:40:17.803 11:22:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:40:17.803 11:22:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:40:17.803 11:22:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3829968' 00:40:17.803 killing process with pid 3829968 00:40:17.803 11:22:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@969 -- # kill 3829968 00:40:17.803 11:22:24 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@974 -- # wait 3829968 00:40:20.332 11:22:27 blockdev_crypto_aesni.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:40:20.332 00:40:20.332 real 0m13.791s 00:40:20.332 user 0m16.671s 00:40:20.332 sys 0m4.077s 00:40:20.332 11:22:27 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:40:20.332 11:22:27 blockdev_crypto_aesni.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:40:20.332 ************************************ 00:40:20.332 END TEST bdev_nbd 00:40:20.332 ************************************ 00:40:20.332 11:22:27 blockdev_crypto_aesni -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:40:20.332 11:22:27 blockdev_crypto_aesni -- bdev/blockdev.sh@763 -- # '[' crypto_aesni = nvme ']' 00:40:20.332 11:22:27 blockdev_crypto_aesni -- bdev/blockdev.sh@763 -- # '[' crypto_aesni = gpt ']' 00:40:20.332 11:22:27 blockdev_crypto_aesni -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:40:20.332 11:22:27 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:40:20.332 11:22:27 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:20.332 11:22:27 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:40:20.591 
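The bdev_nbd test that just finished exercised the nbd_dd_data_verify pattern: write one shared random buffer to every exported device, then byte-compare each device against it. A minimal sketch of that pattern follows; plain temp files stand in for the real /dev/nbd* devices (so oflag=direct is omitted), and all paths here are hypothetical stand-ins, not the script's actual ones.

```shell
#!/usr/bin/env bash
# Sketch of the write-then-verify flow from nbd_common.sh's
# nbd_dd_data_verify, using temp files in place of /dev/nbd* devices.
set -euo pipefail

tmp_file=$(mktemp)                   # stands in for .../test/bdev/nbdrandtest
nbd_list=("$(mktemp)" "$(mktemp)")   # stand-ins for /dev/nbd0, /dev/nbd1, ...

# write phase: generate one 1 MiB random pattern, copy it to every device
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 status=none
done

# verify phase: byte-compare the first 1M of each device with the pattern;
# cmp exits non-zero (failing the script under set -e) on any mismatch
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done
echo "verify ok"
rm -f "$tmp_file" "${nbd_list[@]}"
```

Because every device received the same pattern file, a single `cmp` per device is enough to prove the round trip through the block layer was lossless.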
************************************ 00:40:20.591 START TEST bdev_fio 00:40:20.591 ************************************ 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:40:20.591 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram]' 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram2]' 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram2 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 
00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram3]' 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram3 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram4]' 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram4 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:40:20.591 ************************************ 00:40:20.591 START TEST bdev_fio_rw_verify 00:40:20.591 ************************************ 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 
--verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:20.591 11:22:27 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:40:20.592 11:22:27 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 
00:40:20.592 11:22:27 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:20.592 11:22:27 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:40:20.592 11:22:27 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:40:20.592 11:22:27 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:40:20.592 11:22:27 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:20.592 11:22:27 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:40:21.181 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:40:21.181 job_crypto_ram2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:40:21.181 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:40:21.181 job_crypto_ram4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:40:21.181 fio-3.35 00:40:21.181 Starting 4 threads 00:40:21.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:21.181 EAL: Requested device 0000:3d:01.0 cannot be used 00:40:21.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 
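The ldd/grep/awk sequence above shows how the harness decides what to put in LD_PRELOAD before launching fio: if the spdk_bdev fio plugin was linked against ASan, the sanitizer runtime must be preloaded ahead of the plugin so it initializes first. A minimal sketch of that detection follows; `/bin/true` is a hypothetical stand-in for the real plugin path, and in the actual script LD_PRELOAD is applied only to the single fio invocation.

```shell
#!/usr/bin/env bash
# Sketch of the sanitizer-preload detection: scan the plugin's shared-library
# dependencies for libasan and, if present, preload it before the plugin.
set -u

plugin=/bin/true    # stand-in for .../spdk/build/fio/spdk_bdev

# third field of the matching ldd line is the resolved library path
asan_lib=$(ldd "$plugin" 2>/dev/null | grep libasan | awk '{print $3}' || true)

if [ -n "$asan_lib" ]; then
    # ASan has to be the first DSO loaded, so it precedes the plugin
    LD_PRELOAD="$asan_lib $plugin"
else
    LD_PRELOAD="$plugin"
fi
echo "LD_PRELOAD=$LD_PRELOAD"
```

On the CI node above the scan resolved `/usr/lib64/libasan.so.8`, which is why the log shows both libraries in LD_PRELOAD.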
00:40:21.181 EAL: Requested device 0000:3d:01.1 cannot be used 00:40:21.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:21.181 EAL: Requested device 0000:3d:01.2 cannot be used 00:40:21.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:21.181 EAL: Requested device 0000:3d:01.3 cannot be used 00:40:21.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:21.181 EAL: Requested device 0000:3d:01.4 cannot be used 00:40:21.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:21.181 EAL: Requested device 0000:3d:01.5 cannot be used 00:40:21.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:21.181 EAL: Requested device 0000:3d:01.6 cannot be used 00:40:21.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:21.181 EAL: Requested device 0000:3d:01.7 cannot be used 00:40:21.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:21.181 EAL: Requested device 0000:3d:02.0 cannot be used 00:40:21.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:21.181 EAL: Requested device 0000:3d:02.1 cannot be used 00:40:21.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:21.181 EAL: Requested device 0000:3d:02.2 cannot be used 00:40:21.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:21.181 EAL: Requested device 0000:3d:02.3 cannot be used 00:40:21.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:21.181 EAL: Requested device 0000:3d:02.4 cannot be used 00:40:21.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:21.181 EAL: Requested device 0000:3d:02.5 cannot be used 00:40:21.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:21.181 EAL: Requested device 0000:3d:02.6 cannot be used 00:40:21.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:21.181 EAL: 
Requested device 0000:3d:02.7 cannot be used 00:40:21.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:21.181 EAL: Requested device 0000:3f:01.0 cannot be used 00:40:21.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:21.181 EAL: Requested device 0000:3f:01.1 cannot be used 00:40:21.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:21.181 EAL: Requested device 0000:3f:01.2 cannot be used 00:40:21.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:21.181 EAL: Requested device 0000:3f:01.3 cannot be used 00:40:21.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:21.181 EAL: Requested device 0000:3f:01.4 cannot be used 00:40:21.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:21.181 EAL: Requested device 0000:3f:01.5 cannot be used 00:40:21.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:21.181 EAL: Requested device 0000:3f:01.6 cannot be used 00:40:21.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:21.181 EAL: Requested device 0000:3f:01.7 cannot be used 00:40:21.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:21.181 EAL: Requested device 0000:3f:02.0 cannot be used 00:40:21.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:21.181 EAL: Requested device 0000:3f:02.1 cannot be used 00:40:21.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:21.181 EAL: Requested device 0000:3f:02.2 cannot be used 00:40:21.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:21.181 EAL: Requested device 0000:3f:02.3 cannot be used 00:40:21.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:21.181 EAL: Requested device 0000:3f:02.4 cannot be used 00:40:21.181 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:21.181 EAL: Requested device 
0000:3f:02.5 cannot be used
00:40:21.181 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:40:21.181 EAL: Requested device 0000:3f:02.6 cannot be used
00:40:21.181 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:40:21.181 EAL: Requested device 0000:3f:02.7 cannot be used
00:40:36.092
00:40:36.092 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=3833338: Thu Jul 25 11:22:42 2024
00:40:36.092 read: IOPS=22.9k, BW=89.5MiB/s (93.9MB/s)(895MiB/10001msec)
00:40:36.092 slat (usec): min=18, max=537, avg=56.63, stdev=34.34
00:40:36.092 clat (usec): min=14, max=2242, avg=317.48, stdev=209.38
00:40:36.092 lat (usec): min=52, max=2649, avg=374.12, stdev=230.95
00:40:36.092 clat percentiles (usec):
00:40:36.092 | 50.000th=[ 265], 99.000th=[ 1012], 99.900th=[ 1156], 99.990th=[ 1254],
00:40:36.092 | 99.999th=[ 2008]
00:40:36.092 write: IOPS=25.3k, BW=98.9MiB/s (104MB/s)(961MiB/9718msec); 0 zone resets
00:40:36.092 slat (usec): min=26, max=405, avg=68.84, stdev=33.62
00:40:36.092 clat (usec): min=35, max=2878, avg=383.55, stdev=248.41
00:40:36.092 lat (usec): min=79, max=3284, avg=452.39, stdev=268.66
00:40:36.092 clat percentiles (usec):
00:40:36.092 | 50.000th=[ 334], 99.000th=[ 1254], 99.900th=[ 1418], 99.990th=[ 1745],
00:40:36.092 | 99.999th=[ 2704]
00:40:36.092 bw ( KiB/s): min=84896, max=123184, per=96.88%, avg=98146.95, stdev=2370.10, samples=76
00:40:36.092 iops : min=21224, max=30796, avg=24536.74, stdev=592.52, samples=76
00:40:36.092 lat (usec) : 20=0.01%, 50=0.01%, 100=6.31%, 250=32.94%, 500=41.69%
00:40:36.092 lat (usec) : 750=11.43%, 1000=5.42%
00:40:36.092 lat (msec) : 2=2.20%, 4=0.01%
00:40:36.092 cpu : usr=99.21%, sys=0.27%, ctx=66, majf=0, minf=24517
00:40:36.092 IO depths : 1=10.3%, 2=25.5%, 4=51.1%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0%
00:40:36.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:40:36.092 complete : 0=0.0%, 4=88.7%, 8=11.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:40:36.092 issued rwts: total=229210,246136,0,0 short=0,0,0,0 dropped=0,0,0,0
00:40:36.092 latency : target=0, window=0, percentile=100.00%, depth=8
00:40:36.092
00:40:36.093 Run status group 0 (all jobs):
00:40:36.093 READ: bw=89.5MiB/s (93.9MB/s), 89.5MiB/s-89.5MiB/s (93.9MB/s-93.9MB/s), io=895MiB (939MB), run=10001-10001msec
00:40:36.093 WRITE: bw=98.9MiB/s (104MB/s), 98.9MiB/s-98.9MiB/s (104MB/s-104MB/s), io=961MiB (1008MB), run=9718-9718msec
00:40:37.999 -----------------------------------------------------
00:40:37.999 Suppressions used:
00:40:37.999 count bytes template
00:40:37.999 4 47 /usr/src/fio/parse.c
00:40:37.999 2338 224448 /usr/src/fio/iolog.c
00:40:37.999 1 8 libtcmalloc_minimal.so
00:40:37.999 1 904 libcrypto.so
00:40:37.999 -----------------------------------------------------
00:40:37.999
00:40:37.999
00:40:37.999 real 0m17.214s
00:40:37.999 user 0m57.066s
00:40:37.999 sys 0m0.899s
00:40:37.999 11:22:44 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:40:37.999 11:22:44 blockdev_crypto_aesni.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x
00:40:37.999 ************************************
00:40:37.999 END TEST bdev_fio_rw_verify
00:40:37.999 ************************************
00:40:37.999 11:22:44 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f
00:40:37.999 11:22:44 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio
00:40:37.999 11:22:44 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' ''
00:40:37.999 11:22:44 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio
00:40:37.999 11:22:44 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim
00:40:37.999 11:22:44 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:40:37.999 11:22:44 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:40:37.999 11:22:44 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:40:37.999 11:22:44 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:40:37.999 11:22:44 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:40:37.999 11:22:44 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:40:37.999 11:22:44 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:40:37.999 11:22:44 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:40:37.999 11:22:44 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:40:37.999 11:22:44 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:40:37.999 11:22:44 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:40:37.999 11:22:44 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:40:38.000 11:22:44 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "1a86306c-1913-5564-aee0-26296b3cc3ec"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "1a86306c-1913-5564-aee0-26296b3cc3ec",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' 
"write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "3ad6a422-7d39-5087-a13d-d3f9ac44ef33"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "3ad6a422-7d39-5087-a13d-d3f9ac44ef33",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "91cdcba5-e4b5-5a07-979a-a12277a78aac"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": 
"91cdcba5-e4b5-5a07-979a-a12277a78aac",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "9b8ad898-2261-5f46-a96f-0e7f78a88594"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "9b8ad898-2261-5f46-a96f-0e7f78a88594",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": 
"Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:40:38.000 11:22:44 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n crypto_ram 00:40:38.000 crypto_ram2 00:40:38.000 crypto_ram3 00:40:38.000 crypto_ram4 ]] 00:40:38.000 11:22:44 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:40:38.000 11:22:44 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "1a86306c-1913-5564-aee0-26296b3cc3ec"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "1a86306c-1913-5564-aee0-26296b3cc3ec",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_aesni_cbc_1"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "3ad6a422-7d39-5087-a13d-d3f9ac44ef33"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "3ad6a422-7d39-5087-a13d-d3f9ac44ef33",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": 
false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_aesni_cbc_2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "91cdcba5-e4b5-5a07-979a-a12277a78aac"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "91cdcba5-e4b5-5a07-979a-a12277a78aac",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_aesni_cbc_3"' ' }' ' }' '}' '{' ' "name": "crypto_ram4",' ' "aliases": [' ' "9b8ad898-2261-5f46-a96f-0e7f78a88594"' ' ],' ' "product_name": 
"crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "9b8ad898-2261-5f46-a96f-0e7f78a88594",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram4",' ' "key_name": "test_dek_aesni_cbc_4"' ' }' ' }' '}' 00:40:38.000 11:22:44 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:40:38.000 11:22:44 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram]' 00:40:38.000 11:22:44 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram 00:40:38.000 11:22:44 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:40:38.000 11:22:44 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram2]' 00:40:38.000 11:22:44 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram2 00:40:38.000 11:22:44 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 
00:40:38.000 11:22:44 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram3]' 00:40:38.000 11:22:44 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram3 00:40:38.000 11:22:44 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:40:38.000 11:22:44 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram4]' 00:40:38.000 11:22:44 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram4 00:40:38.000 11:22:44 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:40:38.000 11:22:44 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:40:38.000 11:22:44 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:38.000 11:22:44 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:40:38.000 ************************************ 00:40:38.000 START TEST bdev_fio_trim 00:40:38.000 ************************************ 00:40:38.000 11:22:44 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:40:38.001 11:22:44 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- 
common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:40:38.001 11:22:44 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:40:38.001 11:22:44 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:38.001 11:22:44 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:40:38.001 11:22:44 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:40:38.001 11:22:44 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:40:38.001 11:22:44 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:40:38.001 11:22:45 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:40:38.001 11:22:45 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:40:38.001 11:22:45 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:40:38.001 11:22:45 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:40:38.001 11:22:45 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:40:38.001 11:22:45 
blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:40:38.001 11:22:45 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1347 -- # break 00:40:38.001 11:22:45 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:40:38.001 11:22:45 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:40:38.598 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:40:38.598 job_crypto_ram2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:40:38.598 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:40:38.598 job_crypto_ram4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:40:38.598 fio-3.35 00:40:38.598 Starting 4 threads 00:40:38.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:38.598 EAL: Requested device 0000:3d:01.0 cannot be used 00:40:38.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:38.598 EAL: Requested device 0000:3d:01.1 cannot be used 00:40:38.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:38.598 EAL: Requested device 0000:3d:01.2 cannot be used 00:40:38.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:38.598 EAL: 
Requested device 0000:3d:01.3 cannot be used 00:40:38.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:38.598 EAL: Requested device 0000:3d:01.4 cannot be used 00:40:38.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:38.598 EAL: Requested device 0000:3d:01.5 cannot be used 00:40:38.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:38.598 EAL: Requested device 0000:3d:01.6 cannot be used 00:40:38.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:38.598 EAL: Requested device 0000:3d:01.7 cannot be used 00:40:38.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:38.598 EAL: Requested device 0000:3d:02.0 cannot be used 00:40:38.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:38.598 EAL: Requested device 0000:3d:02.1 cannot be used 00:40:38.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:38.598 EAL: Requested device 0000:3d:02.2 cannot be used 00:40:38.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:38.598 EAL: Requested device 0000:3d:02.3 cannot be used 00:40:38.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:38.598 EAL: Requested device 0000:3d:02.4 cannot be used 00:40:38.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:38.598 EAL: Requested device 0000:3d:02.5 cannot be used 00:40:38.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:38.598 EAL: Requested device 0000:3d:02.6 cannot be used 00:40:38.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:38.598 EAL: Requested device 0000:3d:02.7 cannot be used 00:40:38.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:38.598 EAL: Requested device 0000:3f:01.0 cannot be used 00:40:38.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:38.598 EAL: Requested device 
0000:3f:01.1 cannot be used 00:40:38.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:38.598 EAL: Requested device 0000:3f:01.2 cannot be used 00:40:38.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:38.598 EAL: Requested device 0000:3f:01.3 cannot be used 00:40:38.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:38.598 EAL: Requested device 0000:3f:01.4 cannot be used 00:40:38.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:38.598 EAL: Requested device 0000:3f:01.5 cannot be used 00:40:38.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:38.598 EAL: Requested device 0000:3f:01.6 cannot be used 00:40:38.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:38.598 EAL: Requested device 0000:3f:01.7 cannot be used 00:40:38.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:38.598 EAL: Requested device 0000:3f:02.0 cannot be used 00:40:38.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:38.598 EAL: Requested device 0000:3f:02.1 cannot be used 00:40:38.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:38.598 EAL: Requested device 0000:3f:02.2 cannot be used 00:40:38.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:38.598 EAL: Requested device 0000:3f:02.3 cannot be used 00:40:38.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:38.598 EAL: Requested device 0000:3f:02.4 cannot be used 00:40:38.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:38.598 EAL: Requested device 0000:3f:02.5 cannot be used 00:40:38.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:38.598 EAL: Requested device 0000:3f:02.6 cannot be used 00:40:38.598 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:38.598 EAL: Requested device 0000:3f:02.7 cannot be 
used
00:40:53.464
00:40:53.464 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=3836124: Thu Jul 25 11:22:59 2024
00:40:53.464 write: IOPS=44.8k, BW=175MiB/s (183MB/s)(1749MiB/10001msec); 0 zone resets
00:40:53.464 slat (usec): min=12, max=294, avg=51.01, stdev=33.68
00:40:53.464 clat (usec): min=40, max=1893, avg=227.86, stdev=172.39
00:40:53.464 lat (usec): min=59, max=1988, avg=278.87, stdev=196.40
00:40:53.464 clat percentiles (usec):
00:40:53.464 | 50.000th=[ 176], 99.000th=[ 848], 99.900th=[ 955], 99.990th=[ 1045],
00:40:53.464 | 99.999th=[ 1663]
00:40:53.464 bw ( KiB/s): min=173016, max=218144, per=100.00%, avg=179525.68, stdev=3556.20, samples=76
00:40:53.464 iops : min=43254, max=54536, avg=44881.37, stdev=889.02, samples=76
00:40:53.464 trim: IOPS=44.8k, BW=175MiB/s (183MB/s)(1749MiB/10001msec); 0 zone resets
00:40:53.464 slat (usec): min=4, max=445, avg=13.28, stdev= 5.83
00:40:53.464 clat (usec): min=59, max=1154, avg=215.35, stdev=108.84
00:40:53.464 lat (usec): min=65, max=1189, avg=228.63, stdev=111.22
00:40:53.464 clat percentiles (usec):
00:40:53.464 | 50.000th=[ 192], 99.000th=[ 586], 99.900th=[ 660], 99.990th=[ 725],
00:40:53.464 | 99.999th=[ 1090]
00:40:53.464 bw ( KiB/s): min=173024, max=218138, per=100.00%, avg=179526.95, stdev=3556.87, samples=76
00:40:53.464 iops : min=43256, max=54534, avg=44881.68, stdev=889.18, samples=76
00:40:53.464 lat (usec) : 50=1.18%, 100=12.85%, 250=58.49%, 500=21.34%, 750=4.93%
00:40:53.464 lat (usec) : 1000=1.20%
00:40:53.464 lat (msec) : 2=0.01%
00:40:53.464 cpu : usr=99.49%, sys=0.07%, ctx=72, majf=0, minf=7681
00:40:53.464 IO depths : 1=8.6%, 2=26.1%, 4=52.2%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0%
00:40:53.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:40:53.464 complete : 0=0.0%, 4=88.5%, 8=11.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:40:53.464 issued rwts: total=0,447797,447798,0 short=0,0,0,0 dropped=0,0,0,0
00:40:53.464 latency : target=0, window=0, percentile=100.00%, depth=8
00:40:53.464
00:40:53.464 Run status group 0 (all jobs):
00:40:53.464 WRITE: bw=175MiB/s (183MB/s), 175MiB/s-175MiB/s (183MB/s-183MB/s), io=1749MiB (1834MB), run=10001-10001msec
00:40:53.464 TRIM: bw=175MiB/s (183MB/s), 175MiB/s-175MiB/s (183MB/s-183MB/s), io=1749MiB (1834MB), run=10001-10001msec
00:40:58.724 -----------------------------------------------------
00:40:58.724 Suppressions used:
00:40:58.724 count bytes template
00:40:58.724 4 47 /usr/src/fio/parse.c
00:40:58.724 1 8 libtcmalloc_minimal.so
00:40:58.724 1 904 libcrypto.so
00:40:58.724 -----------------------------------------------------
00:40:58.724
00:40:58.724
00:40:58.724 real 0m19.831s
00:40:58.724 user 1m0.548s
00:40:58.724 sys 0m0.826s
00:40:58.724 11:23:04 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1126 -- # xtrace_disable
00:40:58.724 11:23:04 blockdev_crypto_aesni.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x
00:40:58.724 ************************************
00:40:58.724 END TEST bdev_fio_trim
00:40:58.724 ************************************
00:40:58.724 11:23:04 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f
00:40:58.725 11:23:04 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio
00:40:58.725 11:23:04 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@369 -- # popd
00:40:58.725 /var/jenkins/workspace/crypto-phy-autotest/spdk
00:40:58.725 11:23:04 blockdev_crypto_aesni.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT
00:40:58.725
00:40:58.725 real 0m37.424s
00:40:58.725 user 1m57.823s
00:40:58.725 sys 0m1.918s
00:40:58.725 11:23:04 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable
00:40:58.725 11:23:04 blockdev_crypto_aesni.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:40:58.725 ************************************
00:40:58.725 END TEST bdev_fio
************************************ 00:40:58.725 11:23:04 blockdev_crypto_aesni -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:40:58.725 11:23:04 blockdev_crypto_aesni -- bdev/blockdev.sh@776 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:40:58.725 11:23:04 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:40:58.725 11:23:04 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable 00:40:58.725 11:23:04 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:40:58.725 ************************************ 00:40:58.725 START TEST bdev_verify 00:40:58.725 ************************************ 00:40:58.725 11:23:04 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:40:58.725 [2024-07-25 11:23:05.049085] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:40:58.725 [2024-07-25 11:23:05.049211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3838769 ] 00:40:58.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:58.725 EAL: Requested device 0000:3d:01.0 cannot be used 00:40:58.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:58.725 EAL: Requested device 0000:3d:01.1 cannot be used 00:40:58.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:58.725 EAL: Requested device 0000:3d:01.2 cannot be used 00:40:58.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:58.725 EAL: Requested device 0000:3d:01.3 cannot be used 00:40:58.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:58.725 EAL: Requested device 0000:3d:01.4 cannot be used 00:40:58.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:58.725 EAL: Requested device 0000:3d:01.5 cannot be used 00:40:58.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:58.725 EAL: Requested device 0000:3d:01.6 cannot be used 00:40:58.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:58.725 EAL: Requested device 0000:3d:01.7 cannot be used 00:40:58.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:58.725 EAL: Requested device 0000:3d:02.0 cannot be used 00:40:58.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:58.725 EAL: Requested device 0000:3d:02.1 cannot be used 00:40:58.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:58.725 EAL: Requested device 0000:3d:02.2 cannot be used 00:40:58.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:58.725 EAL: Requested device 0000:3d:02.3 cannot be used 
00:40:58.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:58.725 EAL: Requested device 0000:3d:02.4 cannot be used 00:40:58.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:58.725 EAL: Requested device 0000:3d:02.5 cannot be used 00:40:58.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:58.725 EAL: Requested device 0000:3d:02.6 cannot be used 00:40:58.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:58.725 EAL: Requested device 0000:3d:02.7 cannot be used 00:40:58.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:58.725 EAL: Requested device 0000:3f:01.0 cannot be used 00:40:58.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:58.725 EAL: Requested device 0000:3f:01.1 cannot be used 00:40:58.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:58.725 EAL: Requested device 0000:3f:01.2 cannot be used 00:40:58.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:58.725 EAL: Requested device 0000:3f:01.3 cannot be used 00:40:58.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:58.725 EAL: Requested device 0000:3f:01.4 cannot be used 00:40:58.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:58.725 EAL: Requested device 0000:3f:01.5 cannot be used 00:40:58.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:58.725 EAL: Requested device 0000:3f:01.6 cannot be used 00:40:58.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:58.725 EAL: Requested device 0000:3f:01.7 cannot be used 00:40:58.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:58.725 EAL: Requested device 0000:3f:02.0 cannot be used 00:40:58.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:58.725 EAL: Requested device 0000:3f:02.1 cannot be used 00:40:58.725 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:58.725 EAL: Requested device 0000:3f:02.2 cannot be used 00:40:58.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:58.725 EAL: Requested device 0000:3f:02.3 cannot be used 00:40:58.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:58.725 EAL: Requested device 0000:3f:02.4 cannot be used 00:40:58.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:58.725 EAL: Requested device 0000:3f:02.5 cannot be used 00:40:58.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:58.725 EAL: Requested device 0000:3f:02.6 cannot be used 00:40:58.725 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:40:58.725 EAL: Requested device 0000:3f:02.7 cannot be used 00:40:58.725 [2024-07-25 11:23:05.275760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:58.725 [2024-07-25 11:23:05.559716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:40:58.725 [2024-07-25 11:23:05.559725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:40:58.725 [2024-07-25 11:23:05.581566] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:40:58.725 [2024-07-25 11:23:05.589595] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:40:58.725 [2024-07-25 11:23:05.597601] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:40:58.983 [2024-07-25 11:23:05.962599] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:41:02.267 [2024-07-25 11:23:08.825702] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:41:02.267 [2024-07-25 11:23:08.825781] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:41:02.267 
[2024-07-25 11:23:08.825800] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:02.267 [2024-07-25 11:23:08.833703] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:41:02.267 [2024-07-25 11:23:08.833740] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:41:02.267 [2024-07-25 11:23:08.833759] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:02.267 [2024-07-25 11:23:08.841750] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:41:02.267 [2024-07-25 11:23:08.841788] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:41:02.267 [2024-07-25 11:23:08.841803] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:02.267 [2024-07-25 11:23:08.849744] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:41:02.267 [2024-07-25 11:23:08.849794] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:41:02.267 [2024-07-25 11:23:08.849810] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:02.267 Running I/O for 5 seconds... 
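An aside on the bdevperf invocation above: `-m 0x3` is a hexadecimal CPU core mask, and 0x3 (binary 11) selects cores 0 and 1, which is why the log reports "Total cores available: 2" and starts one reactor on core 0 and one on core 1. A minimal sketch of how such a mask expands to a core list (the helper name is ours, not SPDK's):

```python
def cores_from_mask(mask: int) -> list[int]:
    """Expand a CPU core bit-mask (bit i set -> core i selected) into a core list."""
    cores = []
    bit = 0
    while mask:
        if mask & 1:
            cores.append(bit)
        mask >>= 1
        bit += 1
    return cores

# -m 0x3 from the bdevperf command line: cores 0 and 1, i.e. two reactors
print(cores_from_mask(0x3))
```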
00:41:07.591
00:41:07.591 Latency(us)
00:41:07.591 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:07.591 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:41:07.591 Verification LBA range: start 0x0 length 0x1000
00:41:07.591 crypto_ram : 5.07 454.71 1.78 0.00 0.00 280890.93 11481.91 182032.79
00:41:07.591 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:41:07.591 Verification LBA range: start 0x1000 length 0x1000
00:41:07.591 crypto_ram : 5.07 454.74 1.78 0.00 0.00 280853.72 12215.91 182871.65
00:41:07.591 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:41:07.591 Verification LBA range: start 0x0 length 0x1000
00:41:07.591 crypto_ram2 : 5.07 454.37 1.77 0.00 0.00 280095.36 12897.48 166933.30
00:41:07.591 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:41:07.591 Verification LBA range: start 0x1000 length 0x1000
00:41:07.591 crypto_ram2 : 5.07 454.49 1.78 0.00 0.00 280051.12 15099.49 167772.16
00:41:07.591 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:41:07.591 Verification LBA range: start 0x0 length 0x1000
00:41:07.591 crypto_ram3 : 5.06 3542.89 13.84 0.00 0.00 35801.14 4639.95 29779.56
00:41:07.591 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:41:07.591 Verification LBA range: start 0x1000 length 0x1000
00:41:07.591 crypto_ram3 : 5.05 3546.24 13.85 0.00 0.00 35755.38 7707.03 30198.99
00:41:07.591 Job: crypto_ram4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:41:07.591 Verification LBA range: start 0x0 length 0x1000
00:41:07.591 crypto_ram4 : 5.06 3543.56 13.84 0.00 0.00 35688.67 4561.31 29150.41
00:41:07.591 Job: crypto_ram4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:41:07.591 Verification LBA range: start 0x1000 length 0x1000
00:41:07.591 crypto_ram4 : 5.06 3554.59 13.89 0.00 0.00 35569.15 809.37 29360.13
00:41:07.591 ===================================================================================================================
00:41:07.591 Total : 16005.58 62.52 0.00 0.00 63571.36 809.37 182871.65
00:41:10.124
00:41:10.124 real 0m11.977s
00:41:10.124 user 0m22.060s
00:41:10.124 sys 0m0.571s
00:41:10.124 11:23:16 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:41:10.124 11:23:16 blockdev_crypto_aesni.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:41:10.124 ************************************
00:41:10.124 END TEST bdev_verify
00:41:10.124 ************************************
00:41:10.124 11:23:16 blockdev_crypto_aesni -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:41:10.124 11:23:16 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:41:10.124 11:23:16 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable
00:41:10.124 11:23:16 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x
00:41:10.124 ************************************
00:41:10.124 START TEST bdev_verify_big_io
00:41:10.124 ************************************
00:41:10.124 11:23:16 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:41:10.124 [2024-07-25 11:23:17.073822] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
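A quick sanity check on the bdev_verify results above: with `-o 4096`, throughput in MiB/s is IOPS * 4096 / 2^20, i.e. IOPS / 256, which matches the reported columns (454.71 IOPS -> 1.78 MiB/s; 16005.58 total IOPS -> 62.52 MiB/s). A small script verifying that arithmetic (row values copied from the table; the 0.01 tolerance for bdevperf's two-decimal rounding is our assumption):

```python
IO_SIZE = 4096  # bytes, from the -o 4096 bdevperf argument

def mib_per_s(iops: float, io_size: int = IO_SIZE) -> float:
    """Convert an IOPS figure to MiB/s for a fixed IO size."""
    return iops * io_size / (1 << 20)

# (reported IOPS, reported MiB/s) pairs taken from the verify results table
rows = [(454.71, 1.78), (454.74, 1.78), (3542.89, 13.84), (16005.58, 62.52)]
for iops, reported in rows:
    assert abs(mib_per_s(iops) - reported) < 0.01
```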
00:41:10.124 [2024-07-25 11:23:17.073936] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3840630 ] 00:41:10.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:10.124 EAL: Requested device 0000:3d:01.0 cannot be used 00:41:10.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:10.124 EAL: Requested device 0000:3d:01.1 cannot be used 00:41:10.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:10.124 EAL: Requested device 0000:3d:01.2 cannot be used 00:41:10.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:10.124 EAL: Requested device 0000:3d:01.3 cannot be used 00:41:10.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:10.124 EAL: Requested device 0000:3d:01.4 cannot be used 00:41:10.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:10.124 EAL: Requested device 0000:3d:01.5 cannot be used 00:41:10.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:10.124 EAL: Requested device 0000:3d:01.6 cannot be used 00:41:10.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:10.124 EAL: Requested device 0000:3d:01.7 cannot be used 00:41:10.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:10.124 EAL: Requested device 0000:3d:02.0 cannot be used 00:41:10.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:10.124 EAL: Requested device 0000:3d:02.1 cannot be used 00:41:10.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:10.124 EAL: Requested device 0000:3d:02.2 cannot be used 00:41:10.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:10.124 EAL: Requested device 0000:3d:02.3 cannot be used 
00:41:10.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:10.124 EAL: Requested device 0000:3d:02.4 cannot be used 00:41:10.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:10.124 EAL: Requested device 0000:3d:02.5 cannot be used 00:41:10.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:10.124 EAL: Requested device 0000:3d:02.6 cannot be used 00:41:10.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:10.124 EAL: Requested device 0000:3d:02.7 cannot be used 00:41:10.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:10.124 EAL: Requested device 0000:3f:01.0 cannot be used 00:41:10.124 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:10.125 EAL: Requested device 0000:3f:01.1 cannot be used 00:41:10.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:10.125 EAL: Requested device 0000:3f:01.2 cannot be used 00:41:10.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:10.125 EAL: Requested device 0000:3f:01.3 cannot be used 00:41:10.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:10.125 EAL: Requested device 0000:3f:01.4 cannot be used 00:41:10.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:10.125 EAL: Requested device 0000:3f:01.5 cannot be used 00:41:10.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:10.125 EAL: Requested device 0000:3f:01.6 cannot be used 00:41:10.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:10.125 EAL: Requested device 0000:3f:01.7 cannot be used 00:41:10.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:10.125 EAL: Requested device 0000:3f:02.0 cannot be used 00:41:10.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:10.125 EAL: Requested device 0000:3f:02.1 cannot be used 00:41:10.125 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:10.125 EAL: Requested device 0000:3f:02.2 cannot be used 00:41:10.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:10.125 EAL: Requested device 0000:3f:02.3 cannot be used 00:41:10.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:10.125 EAL: Requested device 0000:3f:02.4 cannot be used 00:41:10.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:10.125 EAL: Requested device 0000:3f:02.5 cannot be used 00:41:10.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:10.125 EAL: Requested device 0000:3f:02.6 cannot be used 00:41:10.125 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:10.125 EAL: Requested device 0000:3f:02.7 cannot be used 00:41:10.383 [2024-07-25 11:23:17.297248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:10.642 [2024-07-25 11:23:17.579503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:10.642 [2024-07-25 11:23:17.579508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:41:10.642 [2024-07-25 11:23:17.601319] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:41:10.642 [2024-07-25 11:23:17.609343] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:41:10.642 [2024-07-25 11:23:17.617354] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:41:10.901 [2024-07-25 11:23:17.970686] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:41:14.190 [2024-07-25 11:23:20.832887] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1" 00:41:14.190 [2024-07-25 11:23:20.832968] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:41:14.190 
[2024-07-25 11:23:20.832988] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:14.190 [2024-07-25 11:23:20.840906] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2" 00:41:14.190 [2024-07-25 11:23:20.840945] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:41:14.190 [2024-07-25 11:23:20.840961] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:14.190 [2024-07-25 11:23:20.848951] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3" 00:41:14.190 [2024-07-25 11:23:20.848985] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:41:14.190 [2024-07-25 11:23:20.849000] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:14.190 [2024-07-25 11:23:20.856945] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4" 00:41:14.190 [2024-07-25 11:23:20.856999] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:41:14.190 [2024-07-25 11:23:20.857015] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:14.190 Running I/O for 5 seconds... 00:41:16.791 [2024-07-25 11:23:23.858349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:16.791 [2024-07-25 11:23:23.858775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:16.791 [2024-07-25 11:23:23.859321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:16.791 [2024-07-25 11:23:23.860652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.053 [2024-07-25 11:23:24.038354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.038744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.039220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.040721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.040785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.041189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.042624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.043030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.043080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.043682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.044124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.045191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.045243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.053 [2024-07-25 11:23:24.046223] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.047624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.048030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.048081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.049684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.050151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.050666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.050721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.051871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.053282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.054300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.054354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.055508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.053 [2024-07-25 11:23:24.055982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.057479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.057537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.058928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.062500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.062571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.064259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.064321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.066032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.066096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.067395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.067451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.069904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.053 [2024-07-25 11:23:24.069969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.070817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.070873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.072463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.072527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.072921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.072976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.076235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.076311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.076982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.077033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.078787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.078851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.053 [2024-07-25 11:23:24.079248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.079298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.081572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.081637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.083156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.083210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.084099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.084171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.084570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.084619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.086461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.086525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.087680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.053 [2024-07-25 11:23:24.087735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.088533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.088595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.088987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.089036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.053 [2024-07-25 11:23:24.091797] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.091872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.093321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.093376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.094260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.094324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.094780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.094831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.054 [2024-07-25 11:23:24.097377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.097442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.098506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.098563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.099466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.099528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.100915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.100967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.103491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.103557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.103951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.104020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.104938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.054 [2024-07-25 11:23:24.105001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.106689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.106751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.109323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.109387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.109778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.109828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.110764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.110830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.111234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.111290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.112998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.113090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.054 [2024-07-25 11:23:24.113491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.113540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.113562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.113880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.114409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.114478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.114872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.114942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.114967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.115385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.116448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.116856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.116919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.054 [2024-07-25 11:23:24.117324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.117668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.117843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.118255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.118325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.118723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.119090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.120717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.120790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.120864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.120921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.121281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.121455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.054 [2024-07-25 11:23:24.121511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.121558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.121605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.121977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.123086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.123153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.123199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.123245] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.123610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.123795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.123851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.123897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.123942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.054 [2024-07-25 11:23:24.124256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.125413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.125486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.125545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.125590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.125983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.126163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.126221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.126268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.126326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.126735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.127829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.054 [2024-07-25 11:23:24.127889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.055 [2024-07-25 11:23:24.127940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.055 [2024-07-25 11:23:24.127986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.055 [2024-07-25 11:23:24.128361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.055 [2024-07-25 11:23:24.128533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.055 [2024-07-25 11:23:24.128589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.055 [2024-07-25 11:23:24.128635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.055 [2024-07-25 11:23:24.128681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.055 [2024-07-25 11:23:24.129000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.055 [2024-07-25 11:23:24.130257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.055 [2024-07-25 11:23:24.130317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.055 [2024-07-25 11:23:24.130366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.055 [2024-07-25 11:23:24.130413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.055 [2024-07-25 11:23:24.130822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.055 [2024-07-25 11:23:24.130996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:41:17.578 [... identical "Failed to get src_mbufs!" error repeated continuously through 2024-07-25 11:23:24.487396 ...]
00:41:17.578 [2024-07-25 11:23:24.487454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.578 [2024-07-25 11:23:24.487845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.578 [2024-07-25 11:23:24.488436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.578 [2024-07-25 11:23:24.488842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.578 [2024-07-25 11:23:24.488899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.578 [2024-07-25 11:23:24.489305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.578 [2024-07-25 11:23:24.490683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.491093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.491164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.491563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.492019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.492437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.492500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.579 [2024-07-25 11:23:24.492896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.494690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.495117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.495192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.495590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.496179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.496584] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.496634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.497033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.498400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.498809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.498866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.499269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.579 [2024-07-25 11:23:24.499788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.500200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.500255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.500649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.502243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.502651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.502713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.503123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.503655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.504060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.504112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.504515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.505988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.579 [2024-07-25 11:23:24.506415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.506469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.506861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.506889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.507216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.507390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.507791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.507846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.508252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.508281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.508619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.510054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.579 [2024-07-25 11:23:24.510119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.579 [2024-07-25 11:23:24.510872] accel_dpdk_cryptodev.c: 476:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get dst_mbufs!
[... identical dst_mbufs error repeated 4 more times, 11:23:24.511409-11:23:24.511938 ...]
00:41:17.579 [2024-07-25 11:23:24.513486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
[... identical src_mbufs error repeated through 11:23:24.590771 ...]
00:41:17.581 [2024-07-25 11:23:24.594839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.595267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.595330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.597034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.597341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.597511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.598580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.598633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.599209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.600516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.601810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.601864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.602285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.581 [2024-07-25 11:23:24.602583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.602748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.604062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.604119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.605680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.607210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.608877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.608931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.610287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.610643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.610813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.612365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.612423] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.581 [2024-07-25 11:23:24.613264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.615581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.616977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.617032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.618559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.618891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.619061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.620595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.620653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.622129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.625868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.626302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.626355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.581 [2024-07-25 11:23:24.628048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.628355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.628524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.630203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.630266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.631729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.635795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.637436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.637490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.581 [2024-07-25 11:23:24.637972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.638279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.638449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.639049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.582 [2024-07-25 11:23:24.639102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.640359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.644265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.645582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.645638] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.647174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.647556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.647725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.648600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.648653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.649884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.653329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.654157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.582 [2024-07-25 11:23:24.654211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.655800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.656097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.656271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.657908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.657963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.659263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.661943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.663249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.663304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.664632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.664931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.665100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.582 [2024-07-25 11:23:24.665911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.665966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.667239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.671455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.673052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.673105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.673502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.673799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.673969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.675432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.675488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.677026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.680508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.582 [2024-07-25 11:23:24.681219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.681273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.681661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.682045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.682230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.683518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.683572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.684866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.688863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.690386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.690454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.690848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.691190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.582 [2024-07-25 11:23:24.691359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.692226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.692281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.582 [2024-07-25 11:23:24.693534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.843 [2024-07-25 11:23:24.697449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.843 [2024-07-25 11:23:24.698783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.843 [2024-07-25 11:23:24.698837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.843 [2024-07-25 11:23:24.698890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.843 [2024-07-25 11:23:24.699202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.843 [2024-07-25 11:23:24.699371] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.843 [2024-07-25 11:23:24.700786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.843 [2024-07-25 11:23:24.700840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.843 [2024-07-25 11:23:24.700893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.843 [2024-07-25 11:23:24.705347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.843 [2024-07-25 11:23:24.706838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.843 [2024-07-25 11:23:24.707008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.843 [2024-07-25 11:23:24.707175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.843 [2024-07-25 11:23:24.707693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.843 [2024-07-25 11:23:24.711586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.843 [2024-07-25 11:23:24.713228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.843 [2024-07-25 11:23:24.713624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.844 [2024-07-25 11:23:24.715248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.844 [2024-07-25 11:23:24.715699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.844 [2024-07-25 11:23:24.715761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.844 [2024-07-25 11:23:24.717205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.844 [2024-07-25 11:23:24.718469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.844 [2024-07-25 11:23:24.719747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.844 [2024-07-25 11:23:24.724909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.844 [2024-07-25 11:23:24.725835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.844 [2024-07-25 11:23:24.726771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.844 [2024-07-25 11:23:24.727720] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.844 [2024-07-25 11:23:24.728029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.844 [2024-07-25 11:23:24.728969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.844 [2024-07-25 11:23:24.730251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.844 [2024-07-25 11:23:24.731565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.844 [2024-07-25 11:23:24.733103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.844 [2024-07-25 11:23:24.737499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.844 [2024-07-25 11:23:24.739014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.844 [2024-07-25 11:23:24.739418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.844 [2024-07-25 11:23:24.740882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.844 [2024-07-25 11:23:24.741287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.844 [2024-07-25 11:23:24.743019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.844 [2024-07-25 11:23:24.744755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.844 [2024-07-25 11:23:24.746347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.844 [2024-07-25 11:23:24.748032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.844 [2024-07-25 11:23:24.750863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.844 [2024-07-25 11:23:24.752112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.844 [2024-07-25 11:23:24.753863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.844 [2024-07-25 11:23:24.754265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.844 [2024-07-25 11:23:24.754571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.844 [2024-07-25 11:23:24.755081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.844 [2024-07-25 11:23:24.756826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.844 [2024-07-25 11:23:24.758399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.844 [2024-07-25 11:23:24.759803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.844 [2024-07-25 11:23:24.762508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.844 [2024-07-25 11:23:24.764051] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.844 [2024-07-25 11:23:24.765397] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.844 [2024-07-25 11:23:24.766802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.844 [2024-07-25 11:23:24.767202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.844 [2024-07-25 11:23:24.768763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.844 [2024-07-25 11:23:24.769183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.844 [2024-07-25 11:23:24.770521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.844 [2024-07-25 11:23:24.771791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.844 [2024-07-25 11:23:24.777547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:17.844 [2024-07-25 11:23:24.779115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:17.844 [2024-07-25 11:23:24.779970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:41:18.110 [2024-07-25 11:23:25.019055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.110 [2024-07-25 11:23:25.019110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.110 [2024-07-25 11:23:25.019163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.110 [2024-07-25 11:23:25.019209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.110 [2024-07-25 11:23:25.023157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.110 [2024-07-25 11:23:25.023221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.110 [2024-07-25 11:23:25.023267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.110 [2024-07-25 11:23:25.023318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.110 [2024-07-25 11:23:25.023616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.110 [2024-07-25 11:23:25.023787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.110 [2024-07-25 11:23:25.023852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.110 [2024-07-25 11:23:25.023898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.110 [2024-07-25 11:23:25.023948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.110 [2024-07-25 11:23:25.026147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.110 [2024-07-25 11:23:25.026211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.110 [2024-07-25 11:23:25.026256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.110 [2024-07-25 11:23:25.026300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.110 [2024-07-25 11:23:25.026605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.110 [2024-07-25 11:23:25.026774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.110 [2024-07-25 11:23:25.026828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.110 [2024-07-25 11:23:25.026880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.110 [2024-07-25 11:23:25.026925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.110 [2024-07-25 11:23:25.031070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.110 [2024-07-25 11:23:25.031129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.110 [2024-07-25 11:23:25.031181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.110 [2024-07-25 11:23:25.031230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.110 [2024-07-25 11:23:25.031658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.110 [2024-07-25 11:23:25.031828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.110 [2024-07-25 11:23:25.031888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.110 [2024-07-25 11:23:25.031934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.110 [2024-07-25 11:23:25.031979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.110 [2024-07-25 11:23:25.036265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.110 [2024-07-25 11:23:25.036325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.110 [2024-07-25 11:23:25.036370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.110 [2024-07-25 11:23:25.036418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.110 [2024-07-25 11:23:25.036716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.110 [2024-07-25 11:23:25.036886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.110 [2024-07-25 11:23:25.036942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.110 [2024-07-25 11:23:25.036987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.110 [2024-07-25 11:23:25.037032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.110 [2024-07-25 11:23:25.040496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.110 [2024-07-25 11:23:25.040556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.110 [2024-07-25 11:23:25.040602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.110 [2024-07-25 11:23:25.042125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.042503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.042677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.042731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.042784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.043549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.045958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.047395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.047453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.111 [2024-07-25 11:23:25.048611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.048993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.049174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.049978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.050039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.051606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.056702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.057154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.057211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.058412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.058727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.058897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.060045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.111 [2024-07-25 11:23:25.060099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.060616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.063476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.064984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.065041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.066199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.066568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.066736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.068067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.068119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.068832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.072099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.073105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.111 [2024-07-25 11:23:25.073167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.074481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.074793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.074962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.075374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.075428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.077064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.080911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.082198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.082255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.082772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.083075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.083249] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.111 [2024-07-25 11:23:25.084050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.084103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.084961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.088093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.089351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.089409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.090858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.091309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.091480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.093212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.093269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.093658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.097056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.111 [2024-07-25 11:23:25.098360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.098416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.099749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.100054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.100229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.100637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.100696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.102383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.106201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.107909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.107963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.109323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.109741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.111 [2024-07-25 11:23:25.109908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.111599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.111660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.112052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.115943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.116542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.116600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.117643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.117956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.118121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.118867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.118920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.120053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.111 [2024-07-25 11:23:25.123799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.125448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.125511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.127213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.111 [2024-07-25 11:23:25.127520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.112 [2024-07-25 11:23:25.127687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.112 [2024-07-25 11:23:25.129046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.112 [2024-07-25 11:23:25.129100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.112 [2024-07-25 11:23:25.129502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.112 [2024-07-25 11:23:25.132527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.112 [2024-07-25 11:23:25.134093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.112 [2024-07-25 11:23:25.134158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.112 [2024-07-25 11:23:25.134891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.112 [2024-07-25 11:23:25.135260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.112 [2024-07-25 11:23:25.135425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.112 [2024-07-25 11:23:25.136744] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.112 [2024-07-25 11:23:25.136799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.112 [2024-07-25 11:23:25.138319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.112 [2024-07-25 11:23:25.143710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.112 [2024-07-25 11:23:25.145152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.112 [2024-07-25 11:23:25.145212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.112 [2024-07-25 11:23:25.146586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.112 [2024-07-25 11:23:25.146892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.112 [2024-07-25 11:23:25.147061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.112 [2024-07-25 11:23:25.148167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.112 [2024-07-25 11:23:25.148221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.112 [2024-07-25 11:23:25.149821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.112 [2024-07-25 11:23:25.153049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.112 [2024-07-25 11:23:25.153948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.112 [2024-07-25 11:23:25.154003] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.112 [2024-07-25 11:23:25.154771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.112 [2024-07-25 11:23:25.155079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.112 [2024-07-25 11:23:25.155253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.112 [2024-07-25 11:23:25.156539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.112 [2024-07-25 11:23:25.156593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.112 [2024-07-25 11:23:25.157878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.112 [2024-07-25 11:23:25.161532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.112 [2024-07-25 11:23:25.162909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.112 [2024-07-25 11:23:25.162962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.112 [2024-07-25 11:23:25.164403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.376 [... identical "Failed to get src_mbufs!" error repeated continuously with timestamps through 2024-07-25 11:23:25.455399 ...] 
00:41:18.376 [2024-07-25 11:23:25.455469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.376 [2024-07-25 11:23:25.455865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.376 [2024-07-25 11:23:25.455920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.376 [2024-07-25 11:23:25.458525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.376 [2024-07-25 11:23:25.458592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.376 [2024-07-25 11:23:25.458985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.376 [2024-07-25 11:23:25.459048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.376 [2024-07-25 11:23:25.459396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.376 [2024-07-25 11:23:25.459908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.376 [2024-07-25 11:23:25.459972] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.376 [2024-07-25 11:23:25.460374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.376 [2024-07-25 11:23:25.460434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.376 [2024-07-25 11:23:25.463004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.376 [2024-07-25 11:23:25.463071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.376 [2024-07-25 11:23:25.463476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.376 [2024-07-25 11:23:25.463536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.376 [2024-07-25 11:23:25.463905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.376 [2024-07-25 11:23:25.464421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.376 [2024-07-25 11:23:25.464496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.376 [2024-07-25 11:23:25.464890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.376 [2024-07-25 11:23:25.464959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.376 [2024-07-25 11:23:25.467935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.376 [2024-07-25 11:23:25.468002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.376 [2024-07-25 11:23:25.468406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.376 [2024-07-25 11:23:25.468462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.376 [2024-07-25 11:23:25.468864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.376 [2024-07-25 11:23:25.469385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.376 [2024-07-25 11:23:25.469454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.376 [2024-07-25 11:23:25.469850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.376 [2024-07-25 11:23:25.469921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.376 [2024-07-25 11:23:25.472551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.376 [2024-07-25 11:23:25.472617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.376 [2024-07-25 11:23:25.473025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.376 [2024-07-25 11:23:25.473080] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.376 [2024-07-25 11:23:25.473462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.376 [2024-07-25 11:23:25.473975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.376 [2024-07-25 11:23:25.474040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.376 [2024-07-25 11:23:25.474443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.376 [2024-07-25 11:23:25.474500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.376 [2024-07-25 11:23:25.479100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.376 [2024-07-25 11:23:25.479171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.376 [2024-07-25 11:23:25.480231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.377 [2024-07-25 11:23:25.480286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.377 [2024-07-25 11:23:25.480742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.377 [2024-07-25 11:23:25.481265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.377 [2024-07-25 11:23:25.481329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.377 [2024-07-25 11:23:25.482831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.377 [2024-07-25 11:23:25.482886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.377 [2024-07-25 11:23:25.486491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.377 [2024-07-25 11:23:25.486563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.377 [2024-07-25 11:23:25.486953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.377 [2024-07-25 11:23:25.487006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.377 [2024-07-25 11:23:25.487321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.377 [2024-07-25 11:23:25.488865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.377 [2024-07-25 11:23:25.488930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.377 [2024-07-25 11:23:25.489547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.377 [2024-07-25 11:23:25.489602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.494357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.494425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.495976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.496032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.496352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.497391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.497454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.498476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.638 [2024-07-25 11:23:25.498532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.502333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.502398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.503227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.503282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.503645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.504165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.504240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.504633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.504681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.509116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.509186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.509578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.638 [2024-07-25 11:23:25.509626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.509946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.511637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.511708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.513276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.513328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.518275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.518340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.518737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.518787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.519130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.520443] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.520507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.638 [2024-07-25 11:23:25.521019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.521073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.526014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.526431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.528000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.528081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.528396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.529099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.530335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.531809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.531863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.535630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.535697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.638 [2024-07-25 11:23:25.535743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.535787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.536129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.536852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.536915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.536961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.537007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.540546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.540606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.540665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.540711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.541045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.541225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.638 [2024-07-25 11:23:25.541288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.541337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.541384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.542933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.542995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.543046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.543092] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.543404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.543577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.543633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.543679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.543725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.545235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.638 [2024-07-25 11:23:25.545294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.545339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.545388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.638 [2024-07-25 11:23:25.545800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.639 [2024-07-25 11:23:25.545970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.639 [2024-07-25 11:23:25.546025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.639 [2024-07-25 11:23:25.546102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.639 [2024-07-25 11:23:25.546157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.639 [2024-07-25 11:23:25.547591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.639 [2024-07-25 11:23:25.547651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.639 [2024-07-25 11:23:25.547696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.639 [2024-07-25 11:23:25.547747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.639 [2024-07-25 11:23:25.548078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.639 [2024-07-25 11:23:25.548256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.639 [2024-07-25 11:23:25.548315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.639 [2024-07-25 11:23:25.548368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.639 [2024-07-25 11:23:25.548419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.639 [2024-07-25 11:23:25.549938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.639 [2024-07-25 11:23:25.549997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.639 [2024-07-25 11:23:25.550042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.639 [2024-07-25 11:23:25.550094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.639 [2024-07-25 11:23:25.550403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.639 [2024-07-25 11:23:25.550573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.639 [2024-07-25 11:23:25.550629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.639 [2024-07-25 11:23:25.550676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.639 [2024-07-25 11:23:25.550721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.639 [2024-07-25 11:23:25.552210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.639 [2024-07-25 11:23:25.552270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.639 [2024-07-25 11:23:25.552315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.639 [2024-07-25 11:23:25.552363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.639 [2024-07-25 11:23:25.552738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.639 [2024-07-25 11:23:25.552905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.639 [2024-07-25 11:23:25.552960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.639 [2024-07-25 11:23:25.553005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.639 [2024-07-25 11:23:25.553059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.639 [2024-07-25 11:23:25.554561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.639 [2024-07-25 11:23:25.554620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.639 [2024-07-25 11:23:25.554665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.639 [2024-07-25 11:23:25.554710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.639 [2024-07-25 11:23:25.555038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [... identical "Failed to get src_mbufs!" errors from accel_dpdk_cryptodev.c:468 repeated continuously from 2024-07-25 11:23:25.555220 through 11:23:25.768755; duplicate lines elided ...]
00:41:18.904 [2024-07-25 11:23:25.769703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.770599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.773376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.774137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.775727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.776727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.777076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.777846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.778788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.779764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.781134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.782876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.783306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.904 [2024-07-25 11:23:25.784388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.785160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.785513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.786525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.786928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.787327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.788239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.790413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.792061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.792465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.792522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.792929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.793909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.904 [2024-07-25 11:23:25.795199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.796505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.796561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.799268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.799333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.799728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.799776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.800146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.800664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.800734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.802491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.802551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.805528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.904 [2024-07-25 11:23:25.805594] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.805985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.806035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.806459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.807739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.807803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.808227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.808283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.810081] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.810153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.810548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.810602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.810971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.904 [2024-07-25 11:23:25.811491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.811555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.811947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.812011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.813828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.813896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.814307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.814368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.814733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.815255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.815318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.815709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.815757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.904 [2024-07-25 11:23:25.817611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.817675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.818069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.818124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.818498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.819010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.904 [2024-07-25 11:23:25.819074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.819473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.819523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.822157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.822224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.822618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.822679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.905 [2024-07-25 11:23:25.823068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.823585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.823649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.824046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.824106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.825939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.826009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.826435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.826492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.826865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.827393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.827455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.827849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.905 [2024-07-25 11:23:25.827905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.829940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.830006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.830415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.830471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.830829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.831349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.831412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.831814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.831863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.835723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.835793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.836195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.905 [2024-07-25 11:23:25.836252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.836560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.837072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.837135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.837534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.837582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.840580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.840652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.842029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.842085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.842452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.842963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.843024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.905 [2024-07-25 11:23:25.843726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.843780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.846028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.846094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.847053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.847109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.847564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.848074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.848134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.849754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.849806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.852841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.852915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.905 [2024-07-25 11:23:25.853315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.853365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.853747] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.854887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.854951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.855871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.855926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.857992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.858059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.858457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.858507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.858857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.860652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.905 [2024-07-25 11:23:25.860721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.862354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.862416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.864148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.864213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.864605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.864660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.865004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.866286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.866351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.867118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.867184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.868961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.905 [2024-07-25 11:23:25.869027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.869431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.905 [2024-07-25 11:23:25.869488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.906 [2024-07-25 11:23:25.869794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.906 [2024-07-25 11:23:25.871498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.906 [2024-07-25 11:23:25.871562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.906 [2024-07-25 11:23:25.872361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.906 [2024-07-25 11:23:25.872416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.906 [2024-07-25 11:23:25.874240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.906 [2024-07-25 11:23:25.874305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.906 [2024-07-25 11:23:25.875670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.906 [2024-07-25 11:23:25.875727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:18.906 [2024-07-25 11:23:25.876046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:18.906 [2024-07-25 11:23:25.876605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:41:19.169 [... identical "accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!" message repeated for every timestamp from 11:23:25.876682 through 11:23:26.038310 ...]
00:41:19.169 [2024-07-25 11:23:26.038699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.040110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.041315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.041375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.042848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.043244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.043414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.043825] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.043879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.045217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.046588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.047809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.047867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.169 [2024-07-25 11:23:26.048278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.048671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.048838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.049412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.049466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.050545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.051930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.053162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.053221] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.053612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.054036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.054214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.055777] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.169 [2024-07-25 11:23:26.055838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.057052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.058426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.060122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.060182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.061557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.061943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.062110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.062519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.062571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.063651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.065006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.066186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.169 [2024-07-25 11:23:26.066243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.066716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.067098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.067273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.067678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.067743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.069170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.070548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.071901] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.071954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.072355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.072776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.072942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.169 [2024-07-25 11:23:26.074127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.074188] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.075335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.076789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.077209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.077264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.077654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.077988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.078166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.079478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.079535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.169 [2024-07-25 11:23:26.080895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.082211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.170 [2024-07-25 11:23:26.082627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.082678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.083068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.083416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.083586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.084842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.084898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.085302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.086689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.086761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.086808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.087209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.087587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.170 [2024-07-25 11:23:26.087755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.087814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.087859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.089089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.090524] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.090940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.091342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.091737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.092049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.092237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.093864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.094386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.095707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.170 [2024-07-25 11:23:26.097562] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.098816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.100298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.100864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.101187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.102757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.103166] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.103563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.104214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.106708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.107977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.108379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.108772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.170 [2024-07-25 11:23:26.109115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.110278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.110933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.112135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.112536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.115482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.116322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.117351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.119010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.119396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.119907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.120411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.121759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.170 [2024-07-25 11:23:26.123285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.126102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.127158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.127561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.127954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.128367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.129786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.130200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.131663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.132057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.134690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.136259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.137197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.170 [2024-07-25 11:23:26.138629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.138970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.140391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.141950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.142454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.142854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.144564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.146149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.147803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.148225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.148573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.149089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.149504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.170 [2024-07-25 11:23:26.149905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.150321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.152200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.152603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.152999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.153420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.153832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.154353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.154754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.155163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.155564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.157300] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.170 [2024-07-25 11:23:26.157710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.171 [2024-07-25 11:23:26.158111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.171 [2024-07-25 11:23:26.158171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.171 [2024-07-25 11:23:26.158497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.171 [2024-07-25 11:23:26.159012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.171 [2024-07-25 11:23:26.159428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.171 [2024-07-25 11:23:26.159832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.171 [2024-07-25 11:23:26.159885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.171 [2024-07-25 11:23:26.161811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.171 [2024-07-25 11:23:26.161878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.171 [2024-07-25 11:23:26.162276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.171 [2024-07-25 11:23:26.162331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.171 [2024-07-25 11:23:26.162741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.171 [2024-07-25 11:23:26.163264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.171 [2024-07-25 11:23:26.163329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [previous message repeated through 2024-07-25 11:23:26.321450]
00:41:19.436 [2024-07-25 11:23:26.322816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.322883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.322932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.324613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.324970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.325148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.325205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.325250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.325651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.327280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.328813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.328868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.330325] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.436 [2024-07-25 11:23:26.330683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.330855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.332402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.332464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.334066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.335476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.336761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.336815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.338107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.338459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.338626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.339528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.339583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.436 [2024-07-25 11:23:26.340983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.342269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.342680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.342736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.343136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.343455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.343620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.344960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.345017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.346426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.347787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.349095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.349156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.436 [2024-07-25 11:23:26.350445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.350762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.350931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.351341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.351404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.351793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.353085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.354758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.354812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.356030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.356404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.356570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.357878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.436 [2024-07-25 11:23:26.357933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.359231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.360699] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.436 [2024-07-25 11:23:26.361981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.362037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.363297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.363653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.363823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.364710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.364767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.366185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.367475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.367887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.437 [2024-07-25 11:23:26.367946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.368347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.368672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.368838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.370178] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.370233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.371591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.372945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.374231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.374286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.375572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.375883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.376063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.437 [2024-07-25 11:23:26.376481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.376543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.376934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.378361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.380104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.380171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.381547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.381903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.382073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.382489] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.382546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.383109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.384537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.437 [2024-07-25 11:23:26.385491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.385548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.386490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.386907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.387074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.387490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.387548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.389210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.390630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.392332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.392386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.392776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.393191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.437 [2024-07-25 11:23:26.393358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.394421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.394480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.395405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.396887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.397435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.397492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.397887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.398257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.398435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.399926] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.399984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.401417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.437 [2024-07-25 11:23:26.402861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.403282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.403340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.403738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.404053] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.404227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.405344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.405401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.405888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.407395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.407801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.407852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.408683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.437 [2024-07-25 11:23:26.409017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.409194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.410419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.410477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.411823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.413446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.413854] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.413910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.415407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.415737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.415908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.416710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.416768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.437 [2024-07-25 11:23:26.417679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.419084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.420338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.437 [2024-07-25 11:23:26.420394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.438 [2024-07-25 11:23:26.421302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.438 [2024-07-25 11:23:26.421653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.438 [2024-07-25 11:23:26.421823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.438 [2024-07-25 11:23:26.423532] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.438 [2024-07-25 11:23:26.423586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.438 [2024-07-25 11:23:26.425016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.438 [2024-07-25 11:23:26.426708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.438 [2024-07-25 11:23:26.428006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.438 [2024-07-25 11:23:26.428064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.438 [2024-07-25 11:23:26.429681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.438 [2024-07-25 11:23:26.430022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.438 [2024-07-25 11:23:26.430198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.438 [2024-07-25 11:23:26.431118] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.438 [2024-07-25 11:23:26.431179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.438 [2024-07-25 11:23:26.432315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.438 [2024-07-25 11:23:26.433762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.438 [2024-07-25 11:23:26.434748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.438 [2024-07-25 11:23:26.434806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.438 [2024-07-25 11:23:26.435462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.438 [2024-07-25 11:23:26.435772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.438 [2024-07-25 11:23:26.435939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.438 [2024-07-25 11:23:26.437414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.438 [2024-07-25 11:23:26.437471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:41:19.438 [... previous error repeated continuously from 11:23:26.437 to 11:23:26.626; identical messages elided ...]
00:41:19.700 [2024-07-25 11:23:26.626520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:41:19.700 [2024-07-25 11:23:26.626587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.700 [2024-07-25 11:23:26.626984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.700 [2024-07-25 11:23:26.627034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.700 [2024-07-25 11:23:26.627427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.700 [2024-07-25 11:23:26.627941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.700 [2024-07-25 11:23:26.628011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.700 [2024-07-25 11:23:26.628417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.700 [2024-07-25 11:23:26.628474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.700 [2024-07-25 11:23:26.630322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.700 [2024-07-25 11:23:26.630413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.700 [2024-07-25 11:23:26.631944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.700 [2024-07-25 11:23:26.631999] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.700 [2024-07-25 11:23:26.632309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.700 [2024-07-25 11:23:26.632819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.700 [2024-07-25 11:23:26.632883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.700 [2024-07-25 11:23:26.633287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.700 [2024-07-25 11:23:26.633347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.700 [2024-07-25 11:23:26.636278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.700 [2024-07-25 11:23:26.636343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.700 [2024-07-25 11:23:26.636922] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.700 [2024-07-25 11:23:26.636973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.700 [2024-07-25 11:23:26.637320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.700 [2024-07-25 11:23:26.638814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.700 [2024-07-25 11:23:26.638877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.700 [2024-07-25 11:23:26.640398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.700 [2024-07-25 11:23:26.640451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.700 [2024-07-25 11:23:26.644027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.700 [2024-07-25 11:23:26.644098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.700 [2024-07-25 11:23:26.645786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.700 [2024-07-25 11:23:26.645839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.701 [2024-07-25 11:23:26.646159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.701 [2024-07-25 11:23:26.647685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.701 [2024-07-25 11:23:26.647746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.701 [2024-07-25 11:23:26.649111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.701 [2024-07-25 11:23:26.649168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.701 [2024-07-25 11:23:26.650925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.701 [2024-07-25 11:23:26.650992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.701 [2024-07-25 11:23:26.651393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.701 [2024-07-25 11:23:26.651449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:19.701 [2024-07-25 11:23:26.651760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.701 [2024-07-25 11:23:26.653144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.701 [2024-07-25 11:23:26.653207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.701 [2024-07-25 11:23:26.654487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.701 [2024-07-25 11:23:26.654538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.701 [2024-07-25 11:23:26.655987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:41:19.701 [2024-07-25 11:23:26.656072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:41:20.267 
00:41:20.267 Latency(us)
00:41:20.267 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:20.267 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:41:20.267 Verification LBA range: start 0x0 length 0x100
00:41:20.267 crypto_ram : 5.84 43.86 2.74 0.00 0.00 2834731.83 72142.03 2738041.65
00:41:20.267 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:41:20.267 Verification LBA range: start 0x100 length 0x100
00:41:20.267 crypto_ram : 5.80 44.13 2.76 0.00 0.00 2812030.16 78852.92 2630667.47
00:41:20.267 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:41:20.267 Verification LBA range: start 0x0 length 0x100
00:41:20.267 crypto_ram2 : 5.84 43.85 2.74 0.00 0.00 2741356.13 71722.60 2738041.65
00:41:20.267 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:41:20.267 Verification LBA range: start 0x100 length 0x100
00:41:20.267 crypto_ram2 : 5.80 44.12 2.76 0.00 0.00 2717637.02 78433.48 2630667.47
00:41:20.267 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:41:20.267 Verification LBA range: start 0x0 length 0x100
00:41:20.267 crypto_ram3 : 5.59 272.29 17.02 0.00 0.00 421771.42 67108.86 570425.34
00:41:20.267 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:41:20.267 Verification LBA range: start 0x100 length 0x100
00:41:20.267 crypto_ram3 : 5.57 283.49 17.72 0.00 0.00 404682.82 7235.17 570425.34
00:41:20.267 Job: crypto_ram4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:41:20.267 Verification LBA range: start 0x0 length 0x100
00:41:20.267 crypto_ram4 : 5.67 287.30 17.96 0.00 0.00 388722.89 16672.36 526804.58
00:41:20.267 Job: crypto_ram4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:41:20.267 Verification LBA range: start 0x100 length 0x100
00:41:20.267 crypto_ram4 : 5.66 298.42 18.65 0.00 0.00 373814.72 12111.05 523449.14
00:41:20.267 ===================================================================================================================
00:41:20.267 Total : 1317.45 82.34 0.00 0.00 723985.40 7235.17 2738041.65
00:41:22.799 
00:41:22.799 real 0m12.922s
00:41:22.799 user 0m23.949s
00:41:22.799 sys 0m0.589s
00:41:22.799 11:23:29 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:41:22.799 11:23:29 blockdev_crypto_aesni.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:41:22.799 ************************************
00:41:22.799 END TEST bdev_verify_big_io
00:41:22.799 ************************************
00:41:23.057 11:23:29 blockdev_crypto_aesni -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:41:23.057 11:23:29 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:41:23.057 11:23:29 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable
00:41:23.057 11:23:29 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x
00:41:23.057 ************************************
00:41:23.057 START TEST bdev_write_zeroes
00:41:23.057 ************************************
00:41:23.057 11:23:29 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:41:23.057 [2024-07-25 11:23:30.079680] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:41:23.058 [2024-07-25 11:23:30.079796] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3842741 ]
00:41:23.317 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:41:23.317 EAL: Requested devices 0000:3d:01.0-01.7, 0000:3d:02.0-02.7, 0000:3f:01.0-01.7 and 0000:3f:02.0-02.7 cannot be used
00:41:23.317 [2024-07-25 11:23:30.305851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:41:23.576 [2024-07-25 11:23:30.584213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:41:23.576 [2024-07-25 11:23:30.605979] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb
00:41:23.576 [2024-07-25 11:23:30.614005] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev
00:41:23.576 [2024-07-25 11:23:30.622012] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev
00:41:24.142 [2024-07-25 11:23:31.012773] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97
00:41:27.428 [2024-07-25 11:23:33.907170] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_1"
00:41:27.428 [2024-07-25 11:23:33.907249] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:41:27.428 [2024-07-25 11:23:33.907268] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:41:27.428 [2024-07-25 11:23:33.915185] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_2"
00:41:27.428 [2024-07-25 11:23:33.915224] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:41:27.428 [2024-07-25 11:23:33.915240] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:41:27.428 [2024-07-25 11:23:33.923225] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_3"
00:41:27.428 [2024-07-25 11:23:33.923257] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:41:27.428 [2024-07-25 11:23:33.923272] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:41:27.428 [2024-07-25 11:23:33.931217] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_aesni_cbc_4"
00:41:27.428 [2024-07-25 11:23:33.931248] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:41:27.428 [2024-07-25 11:23:33.931263] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:41:27.428 Running I/O for 1 seconds...
00:41:28.365 
00:41:28.365 Latency(us)
00:41:28.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:28.365 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:41:28.365 crypto_ram : 1.03 1859.46 7.26 0.00 0.00 68216.89 6343.88 82627.79
00:41:28.365 Job: crypto_ram2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:41:28.365 crypto_ram2 : 1.03 1872.76 7.32 0.00 0.00 67386.67 6107.96 76755.76
00:41:28.365 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:41:28.365 crypto_ram3 : 1.02 14276.56 55.77 0.00 0.00 8811.18 2660.76 11586.76
00:41:28.365 Job: crypto_ram4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:41:28.365 crypto_ram4 : 1.02 14314.25 55.92 0.00 0.00 8758.60 2634.55 9227.47
00:41:28.365 ===================================================================================================================
00:41:28.365 Total : 32323.02 126.26 0.00 0.00 15633.54 2634.55 82627.79
00:41:30.901 
00:41:30.901 real 0m7.816s
00:41:30.901 user 0m7.225s
00:41:30.901 sys 0m0.527s
00:41:30.901 11:23:37 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable
00:41:30.901 11:23:37 blockdev_crypto_aesni.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:41:30.901 ************************************
00:41:30.901 END TEST bdev_write_zeroes
00:41:30.901 ************************************
00:41:30.901 11:23:37 blockdev_crypto_aesni -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:41:30.901 11:23:37 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:41:30.901 11:23:37 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable
00:41:30.901 11:23:37 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x
00:41:30.901 ************************************
00:41:30.901 START TEST bdev_json_nonenclosed
00:41:30.901 ************************************
00:41:30.901 11:23:37 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:41:30.901 [2024-07-25 11:23:37.977531] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:41:31.160 [2024-07-25 11:23:37.977647] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3844038 ]
00:41:31.160 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:41:31.160 EAL: Requested devices 0000:3d:01.0-01.7, 0000:3d:02.0-02.7, 0000:3f:01.0-01.7 and 0000:3f:02.0-02.7 cannot be used
00:41:31.160 [2024-07-25 11:23:38.200575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:41:31.419 [2024-07-25 11:23:38.476968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:41:31.419 [2024-07-25 11:23:38.477068] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:41:31.419 [2024-07-25 11:23:38.477097] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:41:31.419 [2024-07-25 11:23:38.477113] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:41:31.987 
00:41:31.987 real 0m1.177s
00:41:31.987 user 0m0.907s
00:41:31.987 sys 0m0.264s
00:41:31.987 11:23:39 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable
00:41:31.987 11:23:39 blockdev_crypto_aesni.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:41:31.987 ************************************
00:41:31.987 END TEST bdev_json_nonenclosed
00:41:31.987 ************************************
00:41:31.987 11:23:39 blockdev_crypto_aesni -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:41:31.987 11:23:39 blockdev_crypto_aesni -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:41:31.987 11:23:39 blockdev_crypto_aesni -- common/autotest_common.sh@1107 -- # xtrace_disable
00:41:32.246 ************************************
00:41:32.246 START TEST bdev_json_nonarray
00:41:32.247 ************************************
00:41:32.247 11:23:39 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:41:32.247 [2024-07-25 11:23:39.222865] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:41:32.247 [2024-07-25 11:23:39.222981] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3844152 ]
00:41:32.247 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:41:32.247 EAL: Requested devices 0000:3d:01.0-01.7, 0000:3d:02.0-02.7, 0000:3f:01.0-01.7 and 0000:3f:02.0-02.7 cannot be used
00:41:32.505 [2024-07-25 11:23:39.446265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:41:32.764 [2024-07-25 11:23:39.727488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:41:32.764 [2024-07-25 11:23:39.727590] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:41:32.764 [2024-07-25 11:23:39.727618] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:41:32.764 [2024-07-25 11:23:39.727634] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:41:33.338 00:41:33.338 real 0m1.174s 00:41:33.338 user 0m0.899s 00:41:33.338 sys 0m0.268s 00:41:33.338 11:23:40 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:33.338 11:23:40 blockdev_crypto_aesni.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:41:33.338 ************************************ 00:41:33.338 END TEST bdev_json_nonarray 00:41:33.338 ************************************ 00:41:33.338 11:23:40 blockdev_crypto_aesni -- bdev/blockdev.sh@786 -- # [[ crypto_aesni == bdev ]] 00:41:33.338 11:23:40 blockdev_crypto_aesni -- bdev/blockdev.sh@793 -- # [[ crypto_aesni == gpt ]] 00:41:33.338 11:23:40 blockdev_crypto_aesni -- bdev/blockdev.sh@797 -- # [[ crypto_aesni == crypto_sw ]] 00:41:33.338 11:23:40 blockdev_crypto_aesni -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:41:33.338 11:23:40 blockdev_crypto_aesni -- bdev/blockdev.sh@810 -- # cleanup 00:41:33.338 11:23:40 blockdev_crypto_aesni -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:41:33.338 11:23:40 blockdev_crypto_aesni -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:41:33.338 11:23:40 blockdev_crypto_aesni -- bdev/blockdev.sh@26 -- # [[ crypto_aesni == rbd ]] 00:41:33.338 11:23:40 blockdev_crypto_aesni -- bdev/blockdev.sh@30 -- # [[ crypto_aesni == daos ]] 00:41:33.338 11:23:40 blockdev_crypto_aesni -- bdev/blockdev.sh@34 -- # [[ crypto_aesni = \g\p\t ]] 00:41:33.338 11:23:40 blockdev_crypto_aesni -- bdev/blockdev.sh@40 -- # [[ crypto_aesni == xnvme ]] 00:41:33.338 00:41:33.338 real 1m51.025s 00:41:33.338 user 3m46.212s 00:41:33.338 sys 0m11.002s 00:41:33.338 11:23:40 
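The bdev_json_nonenclosed and bdev_json_nonarray tests above both drive bdevperf into the failure path logged by json_config_prepare_ctx ("Invalid JSON configuration: 'subsystems' should be an array."). A toy Python check mirroring that validation — illustrative only, not SPDK's actual parser, and the "nonenclosed" case is modeled loosely as "top level is not a JSON object" — might look like:

```python
import json

def prepare_ctx(text):
    # Toy mirror of the two failure modes seen in the log:
    # "nonenclosed" -> top level is not a JSON object (loose interpretation),
    # "nonarray"    -> 'subsystems' is present but is not an array.
    cfg = json.loads(text)
    if not isinstance(cfg, dict):
        raise ValueError("Invalid JSON configuration: not enclosed in an object")
    if not isinstance(cfg.get("subsystems", []), list):
        raise ValueError("Invalid JSON configuration: 'subsystems' should be an array.")
    return cfg

# A well-formed config keeps 'subsystems' as an array of subsystem objects.
good = '{"subsystems": [{"subsystem": "bdev", "config": []}]}'
# nonarray.json-style input: 'subsystems' is an object, not an array.
bad = '{"subsystems": {"subsystem": "bdev"}}'
```

Both negative tests pass in the log precisely because bdevperf exits non-zero on such input (spdk_app_stop'd on non-zero).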
blockdev_crypto_aesni -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:33.338 11:23:40 blockdev_crypto_aesni -- common/autotest_common.sh@10 -- # set +x 00:41:33.338 ************************************ 00:41:33.338 END TEST blockdev_crypto_aesni 00:41:33.338 ************************************ 00:41:33.338 11:23:40 -- spdk/autotest.sh@362 -- # run_test blockdev_crypto_sw /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_sw 00:41:33.338 11:23:40 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:41:33.338 11:23:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:33.338 11:23:40 -- common/autotest_common.sh@10 -- # set +x 00:41:33.338 ************************************ 00:41:33.338 START TEST blockdev_crypto_sw 00:41:33.338 ************************************ 00:41:33.338 11:23:40 blockdev_crypto_sw -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_sw 00:41:33.597 * Looking for test storage... 
00:41:33.597 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:41:33.597 11:23:40 blockdev_crypto_sw -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:41:33.597 11:23:40 blockdev_crypto_sw -- bdev/nbd_common.sh@6 -- # set -e 00:41:33.597 11:23:40 blockdev_crypto_sw -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:41:33.597 11:23:40 blockdev_crypto_sw -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:41:33.597 11:23:40 blockdev_crypto_sw -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:41:33.597 11:23:40 blockdev_crypto_sw -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:41:33.597 11:23:40 blockdev_crypto_sw -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:41:33.597 11:23:40 blockdev_crypto_sw -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:41:33.597 11:23:40 blockdev_crypto_sw -- bdev/blockdev.sh@20 -- # : 00:41:33.597 11:23:40 blockdev_crypto_sw -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:41:33.597 11:23:40 blockdev_crypto_sw -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:41:33.597 11:23:40 blockdev_crypto_sw -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:41:33.597 11:23:40 blockdev_crypto_sw -- bdev/blockdev.sh@673 -- # uname -s 00:41:33.597 11:23:40 blockdev_crypto_sw -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:41:33.597 11:23:40 blockdev_crypto_sw -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:41:33.597 11:23:40 blockdev_crypto_sw -- bdev/blockdev.sh@681 -- # test_type=crypto_sw 00:41:33.597 11:23:40 blockdev_crypto_sw -- bdev/blockdev.sh@682 -- # crypto_device= 00:41:33.597 11:23:40 blockdev_crypto_sw -- bdev/blockdev.sh@683 -- # dek= 00:41:33.597 11:23:40 blockdev_crypto_sw -- bdev/blockdev.sh@684 -- # env_ctx= 00:41:33.597 
11:23:40 blockdev_crypto_sw -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:41:33.597 11:23:40 blockdev_crypto_sw -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:41:33.597 11:23:40 blockdev_crypto_sw -- bdev/blockdev.sh@689 -- # [[ crypto_sw == bdev ]] 00:41:33.597 11:23:40 blockdev_crypto_sw -- bdev/blockdev.sh@689 -- # [[ crypto_sw == crypto_* ]] 00:41:33.597 11:23:40 blockdev_crypto_sw -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc 00:41:33.597 11:23:40 blockdev_crypto_sw -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:41:33.597 11:23:40 blockdev_crypto_sw -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=3844426 00:41:33.597 11:23:40 blockdev_crypto_sw -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:41:33.597 11:23:40 blockdev_crypto_sw -- bdev/blockdev.sh@49 -- # waitforlisten 3844426 00:41:33.597 11:23:40 blockdev_crypto_sw -- common/autotest_common.sh@831 -- # '[' -z 3844426 ']' 00:41:33.597 11:23:40 blockdev_crypto_sw -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:33.597 11:23:40 blockdev_crypto_sw -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:33.597 11:23:40 blockdev_crypto_sw -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:33.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:33.598 11:23:40 blockdev_crypto_sw -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:33.598 11:23:40 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:41:33.598 11:23:40 blockdev_crypto_sw -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:41:33.598 [2024-07-25 11:23:40.653841] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:41:33.598 [2024-07-25 11:23:40.653962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3844426 ]
[qat_pci_device_allocate()/EAL "Requested device ... cannot be used" messages repeated for 0000:3d:01.0 through 0000:3f:02.7, identical to the block above; elided]
00:41:33.857 [2024-07-25 11:23:40.880406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:34.118 [2024-07-25 11:23:41.154019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:34.378 11:23:41 blockdev_crypto_sw -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:34.378 11:23:41 blockdev_crypto_sw -- common/autotest_common.sh@864 -- # return 0 00:41:34.378 11:23:41 blockdev_crypto_sw -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:41:34.378 11:23:41 blockdev_crypto_sw -- bdev/blockdev.sh@710 -- # setup_crypto_sw_conf 00:41:34.378 11:23:41 blockdev_crypto_sw -- bdev/blockdev.sh@192 -- # rpc_cmd 00:41:34.378 11:23:41 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:34.378 11:23:41 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:41:35.754 Malloc0 00:41:35.754 Malloc1 00:41:35.754 true 00:41:35.754 true 00:41:35.754 true 00:41:35.754 [2024-07-25 11:23:42.703903] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:41:35.754 crypto_ram 00:41:35.754 [2024-07-25 11:23:42.711917] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create:
*NOTICE*: Found key "test_dek_sw2" 00:41:35.754 crypto_ram2 00:41:35.754 [2024-07-25 11:23:42.719966] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:41:35.754 crypto_ram3 00:41:35.754 [ 00:41:35.754 { 00:41:35.754 "name": "Malloc1", 00:41:35.754 "aliases": [ 00:41:35.754 "aa8392fd-77b7-4749-9c40-325b526132a1" 00:41:35.754 ], 00:41:35.754 "product_name": "Malloc disk", 00:41:35.754 "block_size": 4096, 00:41:35.754 "num_blocks": 4096, 00:41:35.754 "uuid": "aa8392fd-77b7-4749-9c40-325b526132a1", 00:41:35.754 "assigned_rate_limits": { 00:41:35.754 "rw_ios_per_sec": 0, 00:41:35.754 "rw_mbytes_per_sec": 0, 00:41:35.754 "r_mbytes_per_sec": 0, 00:41:35.754 "w_mbytes_per_sec": 0 00:41:35.754 }, 00:41:35.754 "claimed": true, 00:41:35.754 "claim_type": "exclusive_write", 00:41:35.754 "zoned": false, 00:41:35.754 "supported_io_types": { 00:41:35.754 "read": true, 00:41:35.754 "write": true, 00:41:35.754 "unmap": true, 00:41:35.754 "flush": true, 00:41:35.754 "reset": true, 00:41:35.754 "nvme_admin": false, 00:41:35.754 "nvme_io": false, 00:41:35.754 "nvme_io_md": false, 00:41:35.754 "write_zeroes": true, 00:41:35.754 "zcopy": true, 00:41:35.754 "get_zone_info": false, 00:41:35.754 "zone_management": false, 00:41:35.754 "zone_append": false, 00:41:35.754 "compare": false, 00:41:35.754 "compare_and_write": false, 00:41:35.754 "abort": true, 00:41:35.754 "seek_hole": false, 00:41:35.754 "seek_data": false, 00:41:35.754 "copy": true, 00:41:35.754 "nvme_iov_md": false 00:41:35.754 }, 00:41:35.754 "memory_domains": [ 00:41:35.754 { 00:41:35.754 "dma_device_id": "system", 00:41:35.754 "dma_device_type": 1 00:41:35.754 }, 00:41:35.754 { 00:41:35.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:41:35.754 "dma_device_type": 2 00:41:35.754 } 00:41:35.754 ], 00:41:35.754 "driver_specific": {} 00:41:35.754 } 00:41:35.754 ] 00:41:35.754 11:23:42 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:35.754 11:23:42 
blockdev_crypto_sw -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:41:35.754 11:23:42 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:35.754 11:23:42 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:41:35.754 11:23:42 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:35.754 11:23:42 blockdev_crypto_sw -- bdev/blockdev.sh@739 -- # cat 00:41:35.754 11:23:42 blockdev_crypto_sw -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:41:35.754 11:23:42 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:35.754 11:23:42 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:41:35.754 11:23:42 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:35.754 11:23:42 blockdev_crypto_sw -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:41:35.754 11:23:42 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:35.754 11:23:42 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:41:35.754 11:23:42 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:35.754 11:23:42 blockdev_crypto_sw -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:41:35.754 11:23:42 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:35.754 11:23:42 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:41:35.755 11:23:42 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:35.755 11:23:42 blockdev_crypto_sw -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:41:35.755 11:23:42 blockdev_crypto_sw -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:41:35.755 11:23:42 blockdev_crypto_sw -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:41:35.755 11:23:42 blockdev_crypto_sw -- common/autotest_common.sh@561 -- # xtrace_disable 00:41:35.755 11:23:42 blockdev_crypto_sw -- 
common/autotest_common.sh@10 -- # set +x 00:41:36.014 11:23:42 blockdev_crypto_sw -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:41:36.014 11:23:42 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:41:36.014 11:23:42 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # jq -r .name 00:41:36.014 11:23:42 blockdev_crypto_sw -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "48547acd-0f6f-548c-a052-dba94c5699d6"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "48547acd-0f6f-548c-a052-dba94c5699d6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "ed3fe9be-55de-5ec9-8efe-4964b09e794e"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "ed3fe9be-55de-5ec9-8efe-4964b09e794e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": 
true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:41:36.014 11:23:42 blockdev_crypto_sw -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:41:36.014 11:23:42 blockdev_crypto_sw -- bdev/blockdev.sh@751 -- # hello_world_bdev=crypto_ram 00:41:36.014 11:23:42 blockdev_crypto_sw -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:41:36.014 11:23:42 blockdev_crypto_sw -- bdev/blockdev.sh@753 -- # killprocess 3844426 00:41:36.014 11:23:42 blockdev_crypto_sw -- common/autotest_common.sh@950 -- # '[' -z 3844426 ']' 00:41:36.014 11:23:42 blockdev_crypto_sw -- common/autotest_common.sh@954 -- # kill -0 3844426 00:41:36.014 11:23:42 blockdev_crypto_sw -- common/autotest_common.sh@955 -- # uname 00:41:36.014 11:23:42 blockdev_crypto_sw -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:36.014 11:23:42 blockdev_crypto_sw -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3844426 00:41:36.014 11:23:43 blockdev_crypto_sw -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:36.014 11:23:43 blockdev_crypto_sw -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:36.014 11:23:43 blockdev_crypto_sw -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3844426' 00:41:36.014 killing process with pid 3844426 00:41:36.014 11:23:43 blockdev_crypto_sw -- common/autotest_common.sh@969 -- # kill 3844426 00:41:36.014 11:23:43 
blockdev_crypto_sw -- common/autotest_common.sh@974 -- # wait 3844426 00:41:39.328 11:23:46 blockdev_crypto_sw -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:41:39.328 11:23:46 blockdev_crypto_sw -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:41:39.328 11:23:46 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:41:39.328 11:23:46 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:39.328 11:23:46 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:41:39.328 ************************************ 00:41:39.328 START TEST bdev_hello_world 00:41:39.328 ************************************ 00:41:39.328 11:23:46 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:41:39.588 [2024-07-25 11:23:46.529212] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:41:39.588 [2024-07-25 11:23:46.529322] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3845486 ]
[qat_pci_device_allocate()/EAL "Requested device ... cannot be used" messages repeated for 0000:3d:01.0 through 0000:3f:02.7, identical to the block above; elided]
00:41:39.848 [2024-07-25 11:23:46.754957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:40.107 [2024-07-25 11:23:47.030972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:40.676 [2024-07-25 11:23:47.581349] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:41:40.676 [2024-07-25 11:23:47.581420] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:41:40.676 [2024-07-25 11:23:47.581440] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:40.676 [2024-07-25 11:23:47.589359] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:41:40.676 [2024-07-25 11:23:47.589399] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:41:40.676 [2024-07-25 11:23:47.589415] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:40.676 [2024-07-25 11:23:47.597379] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:41:40.676 [2024-07-25 11:23:47.597415]
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:41:40.676 [2024-07-25 11:23:47.597432] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:40.676 [2024-07-25 11:23:47.686185] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:41:40.676 [2024-07-25 11:23:47.686222] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:41:40.676 [2024-07-25 11:23:47.686249] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:41:40.676 [2024-07-25 11:23:47.688503] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:41:40.676 [2024-07-25 11:23:47.688605] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:41:40.676 [2024-07-25 11:23:47.688627] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:41:40.676 [2024-07-25 11:23:47.688671] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
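The "vbdev creation deferred pending base bdev arrival" notices above show create_crypto_disk parking a crypto vbdev request until its base bdev (Malloc0, Malloc1, crypto_ram2) shows up during examine. A toy Python model of that deferral pattern — a sketch of the idea, not SPDK's actual vbdev_crypto code — could be:

```python
# Toy model of deferred vbdev creation: a crypto vbdev request whose base
# bdev does not exist yet is parked, then completed when the base arrives
# (as with Malloc0 -> crypto_ram in the log above).
class BdevRegistry:
    def __init__(self):
        self.bdevs = set()      # names of registered bdevs
        self.pending = {}       # base bdev name -> deferred vbdev name

    def create_crypto_disk(self, base, name):
        if base not in self.bdevs:
            # "vbdev creation deferred pending base bdev arrival"
            self.pending[base] = name
            return None
        self.bdevs.add(name)
        return name

    def examine(self, base):
        # Called when a new base bdev appears; finish any deferred vbdev.
        self.bdevs.add(base)
        if base in self.pending:
            self.bdevs.add(self.pending.pop(base))

reg = BdevRegistry()
reg.create_crypto_disk("Malloc0", "crypto_ram")  # deferred: Malloc0 not there yet
reg.examine("Malloc0")                           # base arrives, vbdev completed
```

This is why the RPC calls in the log can succeed before the Malloc bdevs exist: the request is simply queued and resolved at examine time.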
00:41:40.676 00:41:40.676 [2024-07-25 11:23:47.688695] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:41:42.582 00:41:42.582 real 0m3.053s 00:41:42.582 user 0m2.651s 00:41:42.582 sys 0m0.380s 00:41:42.582 11:23:49 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:42.582 11:23:49 blockdev_crypto_sw.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:41:42.582 ************************************ 00:41:42.582 END TEST bdev_hello_world 00:41:42.582 ************************************ 00:41:42.582 11:23:49 blockdev_crypto_sw -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:41:42.582 11:23:49 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:41:42.582 11:23:49 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:42.582 11:23:49 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:41:42.582 ************************************ 00:41:42.582 START TEST bdev_bounds 00:41:42.582 ************************************ 00:41:42.582 11:23:49 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:41:42.582 11:23:49 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=3846027 00:41:42.582 11:23:49 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:41:42.582 11:23:49 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 3846027' 00:41:42.582 Process bdevio pid: 3846027 00:41:42.582 11:23:49 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 3846027 00:41:42.583 11:23:49 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 3846027 ']' 00:41:42.583 11:23:49 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:42.583 11:23:49 blockdev_crypto_sw.bdev_bounds -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:41:42.583 11:23:49 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:42.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:42.583 11:23:49 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:42.583 11:23:49 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:41:42.583 11:23:49 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@288 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:41:42.583 [2024-07-25 11:23:49.660069] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:41:42.583 [2024-07-25 11:23:49.660196] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3846027 ] 00:41:42.842 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:42.842 EAL: Requested device 0000:3d:01.0 cannot be used 00:41:42.842 [2024-07-25 11:23:49.882409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:41:43.101 [2024-07-25 11:23:50.168982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:41:43.101 [2024-07-25 11:23:50.169047] reactor.c:
941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:43.101 [2024-07-25 11:23:50.169048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:41:43.669 [2024-07-25 11:23:50.712529] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:41:43.669 [2024-07-25 11:23:50.712607] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:41:43.669 [2024-07-25 11:23:50.712626] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:43.669 [2024-07-25 11:23:50.720537] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:41:43.669 [2024-07-25 11:23:50.720576] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:41:43.669 [2024-07-25 11:23:50.720592] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:43.669 [2024-07-25 11:23:50.728567] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:41:43.669 [2024-07-25 11:23:50.728606] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:41:43.669 [2024-07-25 11:23:50.728621] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:43.929 11:23:50 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:43.929 11:23:50 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:41:43.929 11:23:50 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:41:43.929 I/O targets: 00:41:43.929 crypto_ram: 32768 blocks of 512 bytes (16 MiB) 00:41:43.929 crypto_ram3: 4096 blocks of 4096 bytes (16 MiB) 00:41:43.929 00:41:43.929 00:41:43.929 CUnit - A unit testing framework for C - Version 2.1-3 00:41:43.929 
http://cunit.sourceforge.net/ 00:41:43.929 00:41:43.929 00:41:43.929 Suite: bdevio tests on: crypto_ram3 00:41:43.929 Test: blockdev write read block ...passed 00:41:43.929 Test: blockdev write zeroes read block ...passed 00:41:43.929 Test: blockdev write zeroes read no split ...passed 00:41:43.929 Test: blockdev write zeroes read split ...passed 00:41:43.929 Test: blockdev write zeroes read split partial ...passed 00:41:43.929 Test: blockdev reset ...passed 00:41:43.929 Test: blockdev write read 8 blocks ...passed 00:41:43.929 Test: blockdev write read size > 128k ...passed 00:41:43.929 Test: blockdev write read invalid size ...passed 00:41:43.929 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:41:43.929 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:41:43.929 Test: blockdev write read max offset ...passed 00:41:43.929 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:41:43.929 Test: blockdev writev readv 8 blocks ...passed 00:41:43.929 Test: blockdev writev readv 30 x 1block ...passed 00:41:43.929 Test: blockdev writev readv block ...passed 00:41:43.929 Test: blockdev writev readv size > 128k ...passed 00:41:43.929 Test: blockdev writev readv size > 128k in two iovs ...passed 00:41:43.929 Test: blockdev comparev and writev ...passed 00:41:43.929 Test: blockdev nvme passthru rw ...passed 00:41:43.929 Test: blockdev nvme passthru vendor specific ...passed 00:41:43.929 Test: blockdev nvme admin passthru ...passed 00:41:43.929 Test: blockdev copy ...passed 00:41:43.929 Suite: bdevio tests on: crypto_ram 00:41:43.929 Test: blockdev write read block ...passed 00:41:43.929 Test: blockdev write zeroes read block ...passed 00:41:44.188 Test: blockdev write zeroes read no split ...passed 00:41:44.188 Test: blockdev write zeroes read split ...passed 00:41:44.188 Test: blockdev write zeroes read split partial ...passed 00:41:44.188 Test: blockdev reset ...passed 00:41:44.188 Test: blockdev write 
read 8 blocks ...passed 00:41:44.188 Test: blockdev write read size > 128k ...passed 00:41:44.188 Test: blockdev write read invalid size ...passed 00:41:44.188 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:41:44.188 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:41:44.188 Test: blockdev write read max offset ...passed 00:41:44.188 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:41:44.188 Test: blockdev writev readv 8 blocks ...passed 00:41:44.188 Test: blockdev writev readv 30 x 1block ...passed 00:41:44.188 Test: blockdev writev readv block ...passed 00:41:44.188 Test: blockdev writev readv size > 128k ...passed 00:41:44.188 Test: blockdev writev readv size > 128k in two iovs ...passed 00:41:44.188 Test: blockdev comparev and writev ...passed 00:41:44.188 Test: blockdev nvme passthru rw ...passed 00:41:44.188 Test: blockdev nvme passthru vendor specific ...passed 00:41:44.188 Test: blockdev nvme admin passthru ...passed 00:41:44.188 Test: blockdev copy ...passed 00:41:44.188 00:41:44.188 Run Summary: Type Total Ran Passed Failed Inactive 00:41:44.188 suites 2 2 n/a 0 0 00:41:44.188 tests 46 46 46 0 0 00:41:44.188 asserts 260 260 260 0 n/a 00:41:44.188 00:41:44.188 Elapsed time = 0.553 seconds 00:41:44.188 0 00:41:44.188 11:23:51 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 3846027 00:41:44.188 11:23:51 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 3846027 ']' 00:41:44.188 11:23:51 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 3846027 00:41:44.188 11:23:51 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:41:44.188 11:23:51 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:44.188 11:23:51 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3846027 00:41:44.188 11:23:51 
blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:44.188 11:23:51 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:44.188 11:23:51 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3846027' 00:41:44.188 killing process with pid 3846027 00:41:44.188 11:23:51 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@969 -- # kill 3846027 00:41:44.188 11:23:51 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@974 -- # wait 3846027 00:41:46.093 11:23:52 blockdev_crypto_sw.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:41:46.093 00:41:46.093 real 0m3.423s 00:41:46.093 user 0m7.947s 00:41:46.093 sys 0m0.528s 00:41:46.093 11:23:52 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:46.093 11:23:52 blockdev_crypto_sw.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:41:46.093 ************************************ 00:41:46.093 END TEST bdev_bounds 00:41:46.093 ************************************ 00:41:46.093 11:23:53 blockdev_crypto_sw -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram3' '' 00:41:46.093 11:23:53 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:41:46.093 11:23:53 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:46.093 11:23:53 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:41:46.093 ************************************ 00:41:46.093 START TEST bdev_nbd 00:41:46.093 ************************************ 00:41:46.093 11:23:53 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram3' '' 00:41:46.093 11:23:53 blockdev_crypto_sw.bdev_nbd -- 
bdev/blockdev.sh@299 -- # uname -s 00:41:46.093 11:23:53 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:41:46.093 11:23:53 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:46.093 11:23:53 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:41:46.093 11:23:53 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('crypto_ram' 'crypto_ram3') 00:41:46.093 11:23:53 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:41:46.093 11:23:53 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=2 00:41:46.093 11:23:53 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:41:46.093 11:23:53 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:41:46.093 11:23:53 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:41:46.093 11:23:53 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=2 00:41:46.093 11:23:53 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:46.093 11:23:53 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:41:46.093 11:23:53 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:41:46.093 11:23:53 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:41:46.093 11:23:53 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=3846587 00:41:46.093 11:23:53 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:41:46.093 11:23:53 blockdev_crypto_sw.bdev_nbd -- 
bdev/blockdev.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:41:46.093 11:23:53 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 3846587 /var/tmp/spdk-nbd.sock 00:41:46.093 11:23:53 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 3846587 ']' 00:41:46.093 11:23:53 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:41:46.093 11:23:53 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:41:46.093 11:23:53 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:41:46.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:41:46.093 11:23:53 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:41:46.093 11:23:53 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:41:46.093 [2024-07-25 11:23:53.176024] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:41:46.093 [2024-07-25 11:23:53.176137] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:46.367 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:46.367 EAL: Requested device 0000:3d:01.0 cannot be used 00:41:46.367 [2024-07-25 11:23:53.403649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:46.627 [2024-07-25 11:23:53.682398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:41:47.193 [2024-07-25 11:23:54.265189] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:41:47.193 [2024-07-25 11:23:54.265270] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:41:47.193 [2024-07-25 11:23:54.265290] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:47.193 [2024-07-25 11:23:54.273212] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:41:47.193 [2024-07-25 11:23:54.273251] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:41:47.193 [2024-07-25 11:23:54.273267] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:47.193 [2024-07-25 11:23:54.281249] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:41:47.193 [2024-07-25 11:23:54.281283]
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:41:47.193 [2024-07-25 11:23:54.281300] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:41:47.452 11:23:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:41:47.452 11:23:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:41:47.452 11:23:54 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' 00:41:47.452 11:23:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:47.452 11:23:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:41:47.452 11:23:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:41:47.452 11:23:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' 00:41:47.452 11:23:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:47.452 11:23:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:41:47.452 11:23:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:41:47.452 11:23:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:41:47.452 11:23:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:41:47.452 11:23:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:41:47.452 11:23:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:41:47.452 11:23:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:41:47.712 
11:23:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:41:47.712 11:23:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:41:47.712 11:23:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:41:47.712 11:23:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:41:47.712 11:23:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:41:47.712 11:23:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:41:47.712 11:23:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:41:47.712 11:23:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:41:47.712 11:23:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:41:47.712 11:23:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:41:47.712 11:23:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:41:47.712 11:23:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:47.712 1+0 records in 00:41:47.712 1+0 records out 00:41:47.712 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241912 s, 16.9 MB/s 00:41:47.712 11:23:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:41:47.712 11:23:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:41:47.712 11:23:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:41:47.712 11:23:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:41:47.712 11:23:54 
blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:41:47.712 11:23:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:41:47.712 11:23:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:41:47.712 11:23:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:41:47.972 11:23:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:41:47.972 11:23:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:41:47.972 11:23:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:41:47.972 11:23:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:41:47.972 11:23:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:41:47.972 11:23:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:41:47.972 11:23:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:41:47.972 11:23:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:41:47.972 11:23:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:41:47.972 11:23:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:41:47.972 11:23:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:41:47.972 11:23:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:47.972 1+0 records in 00:41:47.972 1+0 records out 00:41:47.972 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337428 s, 12.1 MB/s 00:41:47.972 11:23:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # stat 
-c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:41:47.972 11:23:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:41:47.972 11:23:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:41:47.972 11:23:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:41:47.972 11:23:54 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:41:47.972 11:23:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:41:47.972 11:23:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:41:47.972 11:23:54 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:41:48.231 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:41:48.231 { 00:41:48.231 "nbd_device": "/dev/nbd0", 00:41:48.231 "bdev_name": "crypto_ram" 00:41:48.231 }, 00:41:48.231 { 00:41:48.231 "nbd_device": "/dev/nbd1", 00:41:48.231 "bdev_name": "crypto_ram3" 00:41:48.231 } 00:41:48.231 ]' 00:41:48.231 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:41:48.231 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:41:48.231 { 00:41:48.231 "nbd_device": "/dev/nbd0", 00:41:48.231 "bdev_name": "crypto_ram" 00:41:48.231 }, 00:41:48.231 { 00:41:48.231 "nbd_device": "/dev/nbd1", 00:41:48.231 "bdev_name": "crypto_ram3" 00:41:48.231 } 00:41:48.231 ]' 00:41:48.231 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:41:48.231 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:41:48.231 11:23:55 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:48.231 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:48.231 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:41:48.231 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:41:48.231 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:48.231 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:41:48.490 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:41:48.490 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:41:48.490 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:41:48.490 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:48.490 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:48.490 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:41:48.490 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:41:48.490 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:41:48.490 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:48.490 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:41:48.749 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:41:48.749 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:41:48.749 11:23:55 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:41:48.749 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:48.749 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:48.749 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:41:48.749 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:41:48.749 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:41:48.749 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:41:48.749 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:48.749 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:41:48.749 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:41:48.749 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:41:48.749 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:41:49.008 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:41:49.008 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:41:49.008 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:41:49.008 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:41:49.008 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:41:49.008 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:41:49.008 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:41:49.008 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 
00:41:49.008 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:41:49.008 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' '/dev/nbd0 /dev/nbd1' 00:41:49.008 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:49.008 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:41:49.008 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:41:49.008 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:49.008 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:41:49.008 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram3' '/dev/nbd0 /dev/nbd1' 00:41:49.008 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:49.008 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram3') 00:41:49.008 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:41:49.008 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:49.008 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:41:49.008 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:41:49.008 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:41:49.008 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:41:49.008 11:23:55 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 
00:41:49.267 /dev/nbd0 00:41:49.267 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:41:49.267 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:41:49.267 11:23:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:41:49.267 11:23:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:41:49.267 11:23:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:41:49.267 11:23:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:41:49.267 11:23:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:41:49.267 11:23:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:41:49.267 11:23:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:41:49.267 11:23:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:41:49.267 11:23:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:49.267 1+0 records in 00:41:49.267 1+0 records out 00:41:49.267 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298963 s, 13.7 MB/s 00:41:49.267 11:23:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:41:49.267 11:23:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:41:49.267 11:23:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:41:49.267 11:23:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:41:49.267 11:23:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@889 -- # 
return 0 00:41:49.267 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:49.267 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:41:49.267 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd1 00:41:49.525 /dev/nbd1 00:41:49.526 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:41:49.526 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:41:49.526 11:23:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:41:49.526 11:23:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:41:49.526 11:23:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:41:49.526 11:23:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:41:49.526 11:23:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:41:49.526 11:23:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:41:49.526 11:23:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:41:49.526 11:23:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:41:49.526 11:23:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:41:49.526 1+0 records in 00:41:49.526 1+0 records out 00:41:49.526 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326815 s, 12.5 MB/s 00:41:49.526 11:23:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:41:49.526 11:23:56 blockdev_crypto_sw.bdev_nbd -- 
common/autotest_common.sh@886 -- # size=4096 00:41:49.526 11:23:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:41:49.526 11:23:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:41:49.526 11:23:56 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:41:49.526 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:41:49.526 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:41:49.526 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:41:49.526 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:49.526 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:41:49.785 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:41:49.785 { 00:41:49.785 "nbd_device": "/dev/nbd0", 00:41:49.785 "bdev_name": "crypto_ram" 00:41:49.785 }, 00:41:49.785 { 00:41:49.785 "nbd_device": "/dev/nbd1", 00:41:49.785 "bdev_name": "crypto_ram3" 00:41:49.785 } 00:41:49.785 ]' 00:41:49.785 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:41:49.785 { 00:41:49.785 "nbd_device": "/dev/nbd0", 00:41:49.785 "bdev_name": "crypto_ram" 00:41:49.785 }, 00:41:49.785 { 00:41:49.785 "nbd_device": "/dev/nbd1", 00:41:49.785 "bdev_name": "crypto_ram3" 00:41:49.785 } 00:41:49.785 ]' 00:41:49.785 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:41:49.785 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:41:49.785 /dev/nbd1' 00:41:49.785 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo 
'/dev/nbd0 00:41:49.785 /dev/nbd1' 00:41:49.785 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:41:49.785 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=2 00:41:49.785 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 2 00:41:49.785 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=2 00:41:49.785 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:41:49.785 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:41:49.785 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:49.785 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:41:49.785 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:41:49.785 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:41:49.785 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:41:49.785 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:41:49.785 256+0 records in 00:41:49.785 256+0 records out 00:41:49.785 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114135 s, 91.9 MB/s 00:41:49.785 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:41:49.785 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:41:49.785 256+0 records in 00:41:49.785 256+0 records out 00:41:49.785 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215202 s, 48.7 MB/s 00:41:49.785 11:23:56 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:41:49.785 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:41:49.785 256+0 records in 00:41:49.785 256+0 records out 00:41:49.785 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0371303 s, 28.2 MB/s 00:41:49.785 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:41:49.785 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:49.785 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:41:49.785 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:41:49.785 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:41:49.785 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:41:49.785 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:41:49.785 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:41:49.785 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:41:49.785 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:41:49.785 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:41:49.785 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:41:49.785 11:23:56 blockdev_crypto_sw.bdev_nbd -- 
bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:41:49.785 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:49.785 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:49.785 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:41:49.785 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:41:49.785 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:49.785 11:23:56 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:41:50.044 11:23:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:41:50.044 11:23:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:41:50.044 11:23:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:41:50.044 11:23:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:50.044 11:23:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:50.044 11:23:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:41:50.044 11:23:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:41:50.044 11:23:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:41:50.044 11:23:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:50.044 11:23:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:41:50.303 11:23:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:41:50.303 11:23:57 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:41:50.303 11:23:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:41:50.303 11:23:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:50.303 11:23:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:50.303 11:23:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:41:50.303 11:23:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:41:50.303 11:23:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:41:50.303 11:23:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:41:50.303 11:23:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:50.303 11:23:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:41:50.562 11:23:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:41:50.562 11:23:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:41:50.562 11:23:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:41:50.562 11:23:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:41:50.562 11:23:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:41:50.562 11:23:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:41:50.562 11:23:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:41:50.562 11:23:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:41:50.562 11:23:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:41:50.562 11:23:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 
00:41:50.562 11:23:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:41:50.562 11:23:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:41:50.562 11:23:57 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:41:50.562 11:23:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:50.562 11:23:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:41:50.562 11:23:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:41:50.562 11:23:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:41:50.562 11:23:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:41:50.821 malloc_lvol_verify 00:41:50.821 11:23:57 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:41:51.080 04837e7b-ef98-433f-89b5-7ceb85a8668e 00:41:51.080 11:23:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:41:51.339 123507c2-a913-4591-b6bb-e6794ed9a254 00:41:51.339 11:23:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:41:51.598 /dev/nbd0 00:41:51.598 11:23:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:41:51.598 mke2fs 1.46.5 (30-Dec-2021) 00:41:51.598 Discarding device blocks: 0/4096 done 00:41:51.598 Creating filesystem with 4096 1k 
blocks and 1024 inodes 00:41:51.598 00:41:51.598 Allocating group tables: 0/1 done 00:41:51.598 Writing inode tables: 0/1 done 00:41:51.598 Creating journal (1024 blocks): done 00:41:51.598 Writing superblocks and filesystem accounting information: 0/1 done 00:41:51.598 00:41:51.598 11:23:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:41:51.598 11:23:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:41:51.598 11:23:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:41:51.598 11:23:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:41:51.598 11:23:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:41:51.598 11:23:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:41:51.598 11:23:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:41:51.598 11:23:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:41:51.857 11:23:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:41:51.857 11:23:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:41:51.857 11:23:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:41:51.857 11:23:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:41:51.857 11:23:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:41:51.857 11:23:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:41:51.857 11:23:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:41:51.857 11:23:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:41:51.857 11:23:58 
blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:41:51.857 11:23:58 blockdev_crypto_sw.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:41:51.857 11:23:58 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 3846587 00:41:51.857 11:23:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 3846587 ']' 00:41:51.857 11:23:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 3846587 00:41:51.857 11:23:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:41:51.857 11:23:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:41:51.857 11:23:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3846587 00:41:51.857 11:23:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:41:51.857 11:23:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:41:51.857 11:23:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3846587' 00:41:51.857 killing process with pid 3846587 00:41:51.857 11:23:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@969 -- # kill 3846587 00:41:51.857 11:23:58 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@974 -- # wait 3846587 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:41:53.831 00:41:53.831 real 0m7.642s 00:41:53.831 user 0m9.865s 00:41:53.831 sys 0m2.353s 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:41:53.831 ************************************ 00:41:53.831 END TEST bdev_nbd 00:41:53.831 ************************************ 00:41:53.831 11:24:00 blockdev_crypto_sw -- bdev/blockdev.sh@762 -- # 
[[ y == y ]] 00:41:53.831 11:24:00 blockdev_crypto_sw -- bdev/blockdev.sh@763 -- # '[' crypto_sw = nvme ']' 00:41:53.831 11:24:00 blockdev_crypto_sw -- bdev/blockdev.sh@763 -- # '[' crypto_sw = gpt ']' 00:41:53.831 11:24:00 blockdev_crypto_sw -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:41:53.831 11:24:00 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:41:53.831 11:24:00 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:53.831 11:24:00 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:41:53.831 ************************************ 00:41:53.831 START TEST bdev_fio 00:41:53.831 ************************************ 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:41:53.831 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio 
-- common/autotest_common.sh@1281 -- # local workload=verify 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram]' 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@342 -- # echo 
filename=crypto_ram 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram3]' 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram3 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:41:53.831 ************************************ 00:41:53.831 START TEST bdev_fio_rw_verify 00:41:53.831 ************************************ 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:41:53.831 11:24:00 
blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # 
asan_lib=/usr/lib64/libasan.so.8 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:41:53.831 11:24:00 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:54.090 11:24:00 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:41:54.349 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:41:54.349 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:41:54.349 fio-3.35 00:41:54.349 Starting 2 threads 00:41:54.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.609 EAL: Requested device 0000:3d:01.0 cannot be used 00:41:54.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.609 EAL: Requested device 0000:3d:01.1 cannot be used 00:41:54.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.609 EAL: Requested device 0000:3d:01.2 cannot be used 00:41:54.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.609 EAL: Requested device 0000:3d:01.3 cannot be used 00:41:54.609 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:41:54.609 EAL: Requested device 0000:3d:01.4 cannot be used 00:41:54.609 
[qat_pci_device_allocate(): Reached maximum number of QAT devices / EAL: Requested device ... cannot be used, repeated for the remaining QAT functions 0000:3d:01.5 through 0000:3f:02.7] 00:42:06.823 00:42:06.823 job_crypto_ram: (groupid=0, jobs=2): err= 0: pid=3848494: Thu Jul 25 11:24:12 2024 00:42:06.823 read: IOPS=22.1k, BW=86.3MiB/s
(90.5MB/s)(863MiB/10000msec) 00:42:06.823 slat (usec): min=14, max=289, avg=20.20, stdev= 3.81 00:42:06.823 clat (usec): min=7, max=575, avg=145.34, stdev=58.39 00:42:06.823 lat (usec): min=25, max=596, avg=165.55, stdev=59.80 00:42:06.823 clat percentiles (usec): 00:42:06.823 | 50.000th=[ 143], 99.000th=[ 277], 99.900th=[ 302], 99.990th=[ 420], 00:42:06.823 | 99.999th=[ 537] 00:42:06.823 write: IOPS=26.5k, BW=104MiB/s (109MB/s)(983MiB/9490msec); 0 zone resets 00:42:06.823 slat (usec): min=14, max=331, avg=33.61, stdev= 5.22 00:42:06.823 clat (usec): min=25, max=713, avg=193.50, stdev=89.09 00:42:06.823 lat (usec): min=52, max=749, avg=227.11, stdev=90.72 00:42:06.823 clat percentiles (usec): 00:42:06.823 | 50.000th=[ 188], 99.000th=[ 388], 99.900th=[ 412], 99.990th=[ 490], 00:42:06.823 | 99.999th=[ 644] 00:42:06.823 bw ( KiB/s): min=94312, max=107576, per=95.04%, avg=100812.58, stdev=1965.96, samples=38 00:42:06.823 iops : min=23578, max=26894, avg=25203.11, stdev=491.48, samples=38 00:42:06.823 lat (usec) : 10=0.01%, 20=0.01%, 50=5.00%, 100=14.99%, 250=63.28% 00:42:06.823 lat (usec) : 500=16.71%, 750=0.01% 00:42:06.823 cpu : usr=99.19%, sys=0.34%, ctx=83, majf=0, minf=19330 00:42:06.823 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:06.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.823 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.823 issued rwts: total=220904,251656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:06.823 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:06.823 00:42:06.823 Run status group 0 (all jobs): 00:42:06.823 READ: bw=86.3MiB/s (90.5MB/s), 86.3MiB/s-86.3MiB/s (90.5MB/s-90.5MB/s), io=863MiB (905MB), run=10000-10000msec 00:42:06.823 WRITE: bw=104MiB/s (109MB/s), 104MiB/s-104MiB/s (109MB/s-109MB/s), io=983MiB (1031MB), run=9490-9490msec 00:42:06.823 ----------------------------------------------------- 00:42:06.823 Suppressions 
used: 00:42:06.823 count bytes template 00:42:06.823 2 23 /usr/src/fio/parse.c 00:42:06.823 1025 98400 /usr/src/fio/iolog.c 00:42:06.823 1 8 libtcmalloc_minimal.so 00:42:06.823 1 904 libcrypto.so 00:42:06.823 ----------------------------------------------------- 00:42:06.823 00:42:06.823 00:42:06.823 real 0m12.992s 00:42:06.823 user 0m33.883s 00:42:06.823 sys 0m0.730s 00:42:06.823 11:24:13 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:06.823 11:24:13 blockdev_crypto_sw.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:42:06.823 ************************************ 00:42:06.823 END TEST bdev_fio_rw_verify 00:42:06.823 ************************************ 00:42:06.823 11:24:13 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:42:07.083 11:24:13 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:42:07.083 11:24:13 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:42:07.083 11:24:13 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:42:07.083 11:24:13 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:42:07.083 11:24:13 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:42:07.083 11:24:13 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:42:07.083 11:24:13 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:42:07.083 11:24:13 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:42:07.083 11:24:13 blockdev_crypto_sw.bdev_fio -- 
common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:42:07.083 11:24:13 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:42:07.083 11:24:13 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:42:07.083 11:24:13 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:42:07.083 11:24:13 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:42:07.083 11:24:13 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:42:07.083 11:24:13 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:42:07.083 11:24:13 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "48547acd-0f6f-548c-a052-dba94c5699d6"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "48547acd-0f6f-548c-a052-dba94c5699d6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' 
"ed3fe9be-55de-5ec9-8efe-4964b09e794e"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "ed3fe9be-55de-5ec9-8efe-4964b09e794e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:42:07.083 11:24:13 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:42:07.083 11:24:14 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n crypto_ram 00:42:07.083 crypto_ram3 ]] 00:42:07.083 11:24:14 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:42:07.084 11:24:14 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "48547acd-0f6f-548c-a052-dba94c5699d6"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "48547acd-0f6f-548c-a052-dba94c5699d6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' 
"read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_sw"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "ed3fe9be-55de-5ec9-8efe-4964b09e794e"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 4096,' ' "uuid": "ed3fe9be-55de-5ec9-8efe-4964b09e794e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "crypto_ram2",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_sw3"' ' }' ' }' '}' 00:42:07.084 11:24:14 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 
00:42:07.084 11:24:14 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram]' 00:42:07.084 11:24:14 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram 00:42:07.084 11:24:14 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:42:07.084 11:24:14 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram3]' 00:42:07.084 11:24:14 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram3 00:42:07.084 11:24:14 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:42:07.084 11:24:14 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:42:07.084 11:24:14 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:07.084 11:24:14 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:42:07.084 ************************************ 00:42:07.084 START TEST bdev_fio_trim 00:42:07.084 ************************************ 00:42:07.084 11:24:14 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:42:07.084 11:24:14 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- 
# fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:42:07.084 11:24:14 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:42:07.084 11:24:14 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:07.084 11:24:14 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:42:07.084 11:24:14 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:42:07.084 11:24:14 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:42:07.084 11:24:14 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:42:07.084 11:24:14 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:42:07.084 11:24:14 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:42:07.084 11:24:14 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:42:07.084 11:24:14 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:42:07.084 11:24:14 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:42:07.084 11:24:14 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ 
-n /usr/lib64/libasan.so.8 ]] 00:42:07.084 11:24:14 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1347 -- # break 00:42:07.084 11:24:14 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:07.084 11:24:14 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:42:07.674 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:42:07.674 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:42:07.674 fio-3.35 00:42:07.674 Starting 2 threads 00:42:07.674 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:07.674 EAL: Requested device 0000:3d:01.0 cannot be used 00:42:07.674 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:07.674 EAL: Requested device 0000:3d:01.1 cannot be used 00:42:07.674 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:07.674 EAL: Requested device 0000:3d:01.2 cannot be used 00:42:07.674 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:07.674 EAL: Requested device 0000:3d:01.3 cannot be used 00:42:07.674 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:07.674 EAL: Requested device 0000:3d:01.4 cannot be used 00:42:07.674 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:07.674 EAL: Requested device 0000:3d:01.5 cannot be used 00:42:07.674 qat_pci_device_allocate(): 
Reached maximum number of QAT devices 00:42:07.674 EAL: Requested device 0000:3d:01.6 cannot be used [same qat_pci_device_allocate()/EAL message pair repeated for the remaining QAT functions through 0000:3f:02.7] 00:42:19.859 00:42:19.859 job_crypto_ram: (groupid=0, jobs=2): err= 0: pid=3851126: Thu Jul 25 11:24:25 2024 00:42:19.859 write: IOPS=45.1k, BW=176MiB/s (185MB/s)(1760MiB/10001msec); 0 zone resets 00:42:19.859 slat (usec): min=12, max=112, avg=19.55, stdev= 2.62 00:42:19.859 clat (usec): min=40, max=398, avg=144.85, stdev=59.96 00:42:19.859 lat (usec): min=57, max=417,
avg=164.39, stdev=60.00 00:42:19.859 clat percentiles (usec): 00:42:19.859 | 50.000th=[ 147], 99.000th=[ 249], 99.900th=[ 265], 99.990th=[ 285], 00:42:19.859 | 99.999th=[ 355] 00:42:19.859 bw ( KiB/s): min=178928, max=181040, per=100.00%, avg=180377.68, stdev=244.10, samples=38 00:42:19.859 iops : min=44732, max=45260, avg=45094.42, stdev=61.03, samples=38 00:42:19.859 trim: IOPS=45.1k, BW=176MiB/s (185MB/s)(1760MiB/10001msec); 0 zone resets 00:42:19.860 slat (usec): min=5, max=112, avg= 9.16, stdev= 1.78 00:42:19.860 clat (usec): min=31, max=269, avg=96.11, stdev=35.70 00:42:19.860 lat (usec): min=37, max=279, avg=105.28, stdev=35.94 00:42:19.860 clat percentiles (usec): 00:42:19.860 | 50.000th=[ 91], 99.000th=[ 176], 99.900th=[ 188], 99.990th=[ 200], 00:42:19.860 | 99.999th=[ 253] 00:42:19.860 bw ( KiB/s): min=178952, max=181048, per=100.00%, avg=180378.95, stdev=242.30, samples=38 00:42:19.860 iops : min=44738, max=45262, avg=45094.74, stdev=60.58, samples=38 00:42:19.860 lat (usec) : 50=7.33%, 100=37.10%, 250=55.09%, 500=0.48% 00:42:19.860 cpu : usr=99.59%, sys=0.04%, ctx=30, majf=0, minf=2108 00:42:19.860 IO depths : 1=6.5%, 2=16.0%, 4=62.0%, 8=15.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:19.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:19.860 complete : 0=0.0%, 4=86.6%, 8=13.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:19.860 issued rwts: total=0,450681,450681,0 short=0,0,0,0 dropped=0,0,0,0 00:42:19.860 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:19.860 00:42:19.860 Run status group 0 (all jobs): 00:42:19.860 WRITE: bw=176MiB/s (185MB/s), 176MiB/s-176MiB/s (185MB/s-185MB/s), io=1760MiB (1846MB), run=10001-10001msec 00:42:19.860 TRIM: bw=176MiB/s (185MB/s), 176MiB/s-176MiB/s (185MB/s-185MB/s), io=1760MiB (1846MB), run=10001-10001msec 00:42:20.425 ----------------------------------------------------- 00:42:20.425 Suppressions used: 00:42:20.425 count bytes template 00:42:20.425 2 23 /usr/src/fio/parse.c 
00:42:20.425 1 8 libtcmalloc_minimal.so 00:42:20.425 1 904 libcrypto.so 00:42:20.425 ----------------------------------------------------- 00:42:20.425 00:42:20.425 00:42:20.425 real 0m13.368s 00:42:20.425 user 0m34.199s 00:42:20.425 sys 0m0.660s 00:42:20.425 11:24:27 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:20.425 11:24:27 blockdev_crypto_sw.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:42:20.425 ************************************ 00:42:20.425 END TEST bdev_fio_trim 00:42:20.425 ************************************ 00:42:20.425 11:24:27 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:42:20.425 11:24:27 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:42:20.425 11:24:27 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:42:20.425 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:42:20.425 11:24:27 blockdev_crypto_sw.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:42:20.425 00:42:20.425 real 0m26.725s 00:42:20.425 user 1m8.252s 00:42:20.425 sys 0m1.606s 00:42:20.425 11:24:27 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:20.425 11:24:27 blockdev_crypto_sw.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:42:20.425 ************************************ 00:42:20.425 END TEST bdev_fio 00:42:20.425 ************************************ 00:42:20.684 11:24:27 blockdev_crypto_sw -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:42:20.684 11:24:27 blockdev_crypto_sw -- bdev/blockdev.sh@776 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:42:20.684 11:24:27 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 
']' 00:42:20.684 11:24:27 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:20.684 11:24:27 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:42:20.684 ************************************ 00:42:20.684 START TEST bdev_verify 00:42:20.684 ************************************ 00:42:20.684 11:24:27 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:42:20.684 [2024-07-25 11:24:27.704105] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:42:20.684 [2024-07-25 11:24:27.704227] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3853081 ] 00:42:20.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:20.942 EAL: Requested device 0000:3d:01.0 cannot be used 00:42:20.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:20.942 EAL: Requested device 0000:3d:01.1 cannot be used 00:42:20.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:20.942 EAL: Requested device 0000:3d:01.2 cannot be used 00:42:20.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:20.942 EAL: Requested device 0000:3d:01.3 cannot be used 00:42:20.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:20.942 EAL: Requested device 0000:3d:01.4 cannot be used 00:42:20.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:20.942 EAL: Requested device 0000:3d:01.5 cannot be used 00:42:20.942 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:42:20.942 EAL: Requested device 0000:3d:01.6 cannot 
be used 00:42:20.942 [same qat_pci_device_allocate()/EAL message pair repeated for the remaining QAT functions 0000:3d:01.7 through 0000:3f:02.7] 00:42:20.942 [2024-07-25 11:24:27.930629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:42:21.200 [2024-07-25 11:24:28.206074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:21.200 [2024-07-25 11:24:28.206082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:42:21.765 [2024-07-25 11:24:28.781155] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:42:21.766
[2024-07-25 11:24:28.781229] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:42:21.766 [2024-07-25 11:24:28.781256] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:42:21.766 [2024-07-25 11:24:28.789163] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:42:21.766 [2024-07-25 11:24:28.789203] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:42:21.766 [2024-07-25 11:24:28.789220] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:42:21.766 [2024-07-25 11:24:28.797189] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:42:21.766 [2024-07-25 11:24:28.797227] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:42:21.766 [2024-07-25 11:24:28.797242] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:42:22.023 Running I/O for 5 seconds... 
00:42:27.284 
00:42:27.284 Latency(us) 
00:42:27.284 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:42:27.284 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
00:42:27.284 Verification LBA range: start 0x0 length 0x800 
00:42:27.284 crypto_ram : 5.02 6070.64 23.71 0.00 0.00 20997.90 2110.26 27472.69 
00:42:27.284 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 
00:42:27.284 Verification LBA range: start 0x800 length 0x800 
00:42:27.284 crypto_ram : 5.02 6096.55 23.81 0.00 0.00 20909.43 2215.12 27262.98 
00:42:27.284 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
00:42:27.284 Verification LBA range: start 0x0 length 0x800 
00:42:27.284 crypto_ram3 : 5.03 3050.98 11.92 0.00 0.00 41719.05 2031.62 32715.57 
00:42:27.284 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 
00:42:27.284 Verification LBA range: start 0x800 length 0x800 
00:42:27.284 crypto_ram3 : 5.03 3055.62 11.94 0.00 0.00 41652.13 1939.87 32925.29 
=================================================================================================================== 
00:42:27.284 Total : 18273.78 71.38 0.00 0.00 27893.16 1939.87 32925.29 
00:42:29.218 
00:42:29.218 real 0m8.197s 
00:42:29.218 user 0m14.705s 
00:42:29.218 sys 0m0.403s 
00:42:29.218 11:24:35 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:42:29.218 11:24:35 blockdev_crypto_sw.bdev_verify -- common/autotest_common.sh@10 -- # set +x 
00:42:29.218 ************************************ 
00:42:29.218 END TEST bdev_verify 
00:42:29.218 ************************************ 
00:42:29.218 11:24:35 blockdev_crypto_sw -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 
00:42:29.218 11:24:35
blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 
00:42:29.218 11:24:35 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:42:29.218 11:24:35 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 
00:42:29.218 ************************************ 
00:42:29.218 START TEST bdev_verify_big_io 
00:42:29.218 ************************************ 
00:42:29.218 11:24:35 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 
00:42:29.218 [2024-07-25 11:24:35.974201] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:42:29.218 [2024-07-25 11:24:35.974317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3854411 ] 
00:42:29.219 [2024-07-25 11:24:36.172465] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 
00:42:29.478 [2024-07-25 11:24:36.465941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 
00:42:29.478 [2024-07-25 11:24:36.465947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 
00:42:30.044 [2024-07-25 11:24:37.058305] vbdev_crypto_rpc.c: 
115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:42:30.044 [2024-07-25 11:24:37.058380] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:42:30.044 [2024-07-25 11:24:37.058400] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:42:30.044 [2024-07-25 11:24:37.066321] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:42:30.044 [2024-07-25 11:24:37.066361] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:42:30.044 [2024-07-25 11:24:37.066379] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:42:30.044 [2024-07-25 11:24:37.074350] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:42:30.044 [2024-07-25 11:24:37.074387] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:42:30.044 [2024-07-25 11:24:37.074403] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:42:30.301 Running I/O for 5 seconds... 
00:42:35.560 
00:42:35.560 Latency(us) 
00:42:35.560 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:42:35.560 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 
00:42:35.560 Verification LBA range: start 0x0 length 0x80 
00:42:35.560 crypto_ram : 5.14 472.92 29.56 0.00 0.00 264275.62 6868.17 369098.75 
00:42:35.560 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 
00:42:35.560 Verification LBA range: start 0x80 length 0x80 
00:42:35.560 crypto_ram : 5.09 477.79 29.86 0.00 0.00 261772.19 6815.74 365743.31 
00:42:35.560 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 
00:42:35.560 Verification LBA range: start 0x0 length 0x80 
00:42:35.560 crypto_ram3 : 5.27 267.30 16.71 0.00 0.00 450291.52 5845.81 375809.64 
00:42:35.560 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 
00:42:35.560 Verification LBA range: start 0x80 length 0x80 
00:42:35.560 crypto_ram3 : 5.23 269.04 16.82 0.00 0.00 447804.65 5819.60 372454.20 
=================================================================================================================== 
00:42:35.560 Total : 1487.05 92.94 0.00 0.00 331232.77 5819.60 375809.64 
00:42:37.458 
00:42:37.458 real 0m8.510s 
00:42:37.458 user 0m15.395s 
00:42:37.458 sys 0m0.368s 
00:42:37.458 11:24:44 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:42:37.458 11:24:44 blockdev_crypto_sw.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 
00:42:37.458 ************************************ 
00:42:37.458 END TEST bdev_verify_big_io 
00:42:37.458 ************************************ 
00:42:37.458 11:24:44 blockdev_crypto_sw -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:42:37.458 11:24:44 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 
00:42:37.458 11:24:44 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:42:37.458 11:24:44 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 
00:42:37.458 ************************************ 
00:42:37.458 START TEST bdev_write_zeroes 
00:42:37.458 ************************************ 
00:42:37.458 11:24:44 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 
00:42:37.717 [2024-07-25 11:24:44.587011] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:42:37.717 [2024-07-25 11:24:44.587127] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3855748 ] 
00:42:37.717 [2024-07-25 11:24:44.812301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 
00:42:38.282 [2024-07-25 11:24:45.098179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 
00:42:38.847 [2024-07-25 11:24:45.679592] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key 
"test_dek_sw" 00:42:38.847 [2024-07-25 11:24:45.679673] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:42:38.847 [2024-07-25 11:24:45.679692] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:42:38.847 [2024-07-25 11:24:45.687605] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw2" 00:42:38.847 [2024-07-25 11:24:45.687644] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:42:38.847 [2024-07-25 11:24:45.687660] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:42:38.847 [2024-07-25 11:24:45.695633] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw3" 00:42:38.847 [2024-07-25 11:24:45.695669] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: crypto_ram2 00:42:38.847 [2024-07-25 11:24:45.695684] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:42:38.847 Running I/O for 1 seconds... 
00:42:39.780 
00:42:39.780 Latency(us) 
00:42:39.780 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:42:39.780 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 
00:42:39.780 crypto_ram : 1.01 26650.78 104.10 0.00 0.00 4791.40 1304.17 6658.46 
00:42:39.780 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 
00:42:39.780 crypto_ram3 : 1.01 13298.10 51.95 0.00 0.00 9554.32 6029.31 9909.04 
=================================================================================================================== 
00:42:39.780 Total : 39948.87 156.05 0.00 0.00 6379.04 1304.17 9909.04 
00:42:41.676 
00:42:41.676 real 0m4.188s 
00:42:41.676 user 0m3.774s 
00:42:41.676 sys 0m0.383s 
00:42:41.676 11:24:48 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:42:41.676 11:24:48 blockdev_crypto_sw.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 
00:42:41.676 ************************************ 
00:42:41.676 END TEST bdev_write_zeroes 
00:42:41.676 ************************************ 
00:42:41.676 11:24:48 blockdev_crypto_sw -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 
00:42:41.676 11:24:48 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 
00:42:41.676 11:24:48 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:42:41.676 11:24:48 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 
00:42:41.676 ************************************ 
00:42:41.676 START TEST bdev_json_nonenclosed 
00:42:41.676 ************************************ 
00:42:41.676 11:24:48 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 
00:42:41.933 [2024-07-25 11:24:48.848817] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:42:41.933 [2024-07-25 11:24:48.848933] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3856482 ] 
00:42:42.191 [2024-07-25 11:24:49.060907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 
00:42:42.449 [2024-07-25 11:24:49.328609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 
00:42:42.449 [2024-07-25 11:24:49.328690] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:42:42.449 [2024-07-25 11:24:49.328718] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:42:42.449 [2024-07-25 11:24:49.328734] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:42:43.013 00:42:43.013 real 0m1.144s 00:42:43.013 user 0m0.860s 00:42:43.013 sys 0m0.278s 00:42:43.013 11:24:49 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:43.013 11:24:49 blockdev_crypto_sw.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:42:43.013 ************************************ 00:42:43.013 END TEST bdev_json_nonenclosed 00:42:43.013 ************************************ 00:42:43.013 11:24:49 blockdev_crypto_sw -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:42:43.013 11:24:49 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:42:43.013 11:24:49 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:43.013 11:24:49 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:42:43.013 ************************************ 00:42:43.013 START TEST bdev_json_nonarray 00:42:43.013 ************************************ 00:42:43.013 11:24:49 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:42:43.013 [2024-07-25 11:24:50.078667] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:42:43.013 [2024-07-25 11:24:50.078791] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3856594 ] 
00:42:43.271 [2024-07-25 11:24:50.302557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 
00:42:43.528 [2024-07-25 11:24:50.596640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 
00:42:43.528 [2024-07-25 11:24:50.596640] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:42:43.528 [2024-07-25 11:24:50.596757] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:42:43.528 [2024-07-25 11:24:50.596773] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:42:44.094 00:42:44.094 real 0m1.215s 00:42:44.094 user 0m0.958s 00:42:44.094 sys 0m0.251s 00:42:44.094 11:24:51 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:44.094 11:24:51 blockdev_crypto_sw.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:42:44.094 ************************************ 00:42:44.094 END TEST bdev_json_nonarray 00:42:44.094 ************************************ 00:42:44.351 11:24:51 blockdev_crypto_sw -- bdev/blockdev.sh@786 -- # [[ crypto_sw == bdev ]] 00:42:44.351 11:24:51 blockdev_crypto_sw -- bdev/blockdev.sh@793 -- # [[ crypto_sw == gpt ]] 00:42:44.351 11:24:51 blockdev_crypto_sw -- bdev/blockdev.sh@797 -- # [[ crypto_sw == crypto_sw ]] 00:42:44.351 11:24:51 blockdev_crypto_sw -- bdev/blockdev.sh@798 -- # run_test bdev_crypto_enomem bdev_crypto_enomem 00:42:44.351 11:24:51 blockdev_crypto_sw -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:42:44.351 11:24:51 blockdev_crypto_sw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:44.351 11:24:51 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:42:44.351 ************************************ 00:42:44.351 START TEST bdev_crypto_enomem 00:42:44.351 ************************************ 00:42:44.351 11:24:51 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@1125 -- # bdev_crypto_enomem 00:42:44.351 11:24:51 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@634 -- # local base_dev=base0 00:42:44.351 11:24:51 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@635 -- # local test_dev=crypt0 00:42:44.351 11:24:51 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@636 -- # local err_dev=EE_base0 00:42:44.351 11:24:51 
blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@637 -- # local qd=32 00:42:44.351 11:24:51 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@640 -- # ERR_PID=3856854 00:42:44.351 11:24:51 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@641 -- # trap 'cleanup; killprocess $ERR_PID; exit 1' SIGINT SIGTERM EXIT 00:42:44.351 11:24:51 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@639 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 32 -o 4096 -w randwrite -t 5 -f '' 00:42:44.351 11:24:51 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@642 -- # waitforlisten 3856854 00:42:44.351 11:24:51 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@831 -- # '[' -z 3856854 ']' 00:42:44.351 11:24:51 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:44.351 11:24:51 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:44.351 11:24:51 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:44.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:44.351 11:24:51 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:44.351 11:24:51 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:42:44.351 [2024-07-25 11:24:51.381425] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:42:44.351 [2024-07-25 11:24:51.381549] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3856854 ] 00:42:44.616 [2024-07-25 11:24:51.595314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:44.874 [2024-07-25 11:24:51.867055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:42:45.439 11:24:52 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:45.439 11:24:52 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@864 -- # return 0 00:42:45.439 11:24:52 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@644 -- # rpc_cmd 00:42:45.439 11:24:52 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:45.439 11:24:52 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:42:45.439 true 00:42:45.439 base0 00:42:45.439 true 00:42:45.439 [2024-07-25 11:24:52.383609] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_sw" 00:42:45.439 crypt0 00:42:45.439 11:24:52 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:45.439 11:24:52 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@651 -- # waitforbdev crypt0
00:42:45.439 11:24:52 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@899 -- # local bdev_name=crypt0 00:42:45.439 11:24:52 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:42:45.439 11:24:52 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@901 -- # local i 00:42:45.439 11:24:52 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:42:45.439 11:24:52 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:42:45.439 11:24:52 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:42:45.439 11:24:52 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:45.439 11:24:52 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:42:45.439 11:24:52 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:45.439 11:24:52 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b crypt0 -t 2000 00:42:45.439 11:24:52 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:45.439 11:24:52 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:42:45.439 [ 00:42:45.439 { 00:42:45.439 "name": "crypt0", 00:42:45.439 "aliases": [ 00:42:45.439 "aad5e091-9c3e-5bea-bb7e-43517daa8140" 00:42:45.439 ], 00:42:45.439 "product_name": "crypto", 00:42:45.439 "block_size": 512, 00:42:45.439 "num_blocks": 2097152, 00:42:45.439 "uuid": "aad5e091-9c3e-5bea-bb7e-43517daa8140", 00:42:45.439 "assigned_rate_limits": { 00:42:45.439 "rw_ios_per_sec": 0, 00:42:45.439 "rw_mbytes_per_sec": 0, 00:42:45.439 "r_mbytes_per_sec": 0, 00:42:45.439 "w_mbytes_per_sec": 0 00:42:45.439 }, 00:42:45.439 "claimed": false, 00:42:45.439 "zoned": false, 00:42:45.439 "supported_io_types": { 
00:42:45.439 "read": true, 00:42:45.439 "write": true, 00:42:45.439 "unmap": false, 00:42:45.439 "flush": false, 00:42:45.439 "reset": true, 00:42:45.439 "nvme_admin": false, 00:42:45.439 "nvme_io": false, 00:42:45.439 "nvme_io_md": false, 00:42:45.439 "write_zeroes": true, 00:42:45.439 "zcopy": false, 00:42:45.439 "get_zone_info": false, 00:42:45.439 "zone_management": false, 00:42:45.439 "zone_append": false, 00:42:45.439 "compare": false, 00:42:45.439 "compare_and_write": false, 00:42:45.439 "abort": false, 00:42:45.439 "seek_hole": false, 00:42:45.439 "seek_data": false, 00:42:45.439 "copy": false, 00:42:45.439 "nvme_iov_md": false 00:42:45.439 }, 00:42:45.439 "memory_domains": [ 00:42:45.439 { 00:42:45.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:42:45.439 "dma_device_type": 2 00:42:45.439 } 00:42:45.439 ], 00:42:45.439 "driver_specific": { 00:42:45.439 "crypto": { 00:42:45.439 "base_bdev_name": "EE_base0", 00:42:45.439 "name": "crypt0", 00:42:45.439 "key_name": "test_dek_sw" 00:42:45.439 } 00:42:45.439 } 00:42:45.439 } 00:42:45.439 ] 00:42:45.439 11:24:52 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:45.439 11:24:52 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@907 -- # return 0 00:42:45.439 11:24:52 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@654 -- # rpcpid=3857117 00:42:45.439 11:24:52 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@656 -- # sleep 1 00:42:45.439 11:24:52 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@653 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:42:45.439 Running I/O for 5 seconds... 
00:42:46.372 11:24:53 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@657 -- # rpc_cmd bdev_error_inject_error EE_base0 -n 5 -q 31 write nomem 00:42:46.372 11:24:53 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:46.372 11:24:53 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:42:46.372 11:24:53 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:46.372 11:24:53 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@659 -- # wait 3857117 00:42:51.259 00:42:51.259 Latency(us) 00:42:51.259 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:51.259 Job: crypt0 (Core Mask 0x2, workload: randwrite, depth: 32, IO size: 4096) 00:42:51.259 crypt0 : 5.00 36183.45 141.34 0.00 0.00 880.84 419.43 1218.97 00:42:51.259 =================================================================================================================== 00:42:51.259 Total : 36183.45 141.34 0.00 0.00 880.84 419.43 1218.97 00:42:51.259 0 00:42:51.259 11:24:57 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@661 -- # rpc_cmd bdev_crypto_delete crypt0 00:42:51.259 11:24:57 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:51.259 11:24:57 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:42:51.259 11:24:57 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:51.259 11:24:57 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@663 -- # killprocess 3856854 00:42:51.259 11:24:57 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@950 -- # '[' -z 3856854 ']' 00:42:51.260 11:24:57 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@954 -- # kill -0 3856854 00:42:51.260 11:24:57 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@955 -- # uname 00:42:51.260 11:24:57 
blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:51.260 11:24:57 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3856854 00:42:51.260 11:24:57 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:42:51.260 11:24:57 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:42:51.260 11:24:57 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3856854' 00:42:51.260 killing process with pid 3856854 00:42:51.260 11:24:57 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@969 -- # kill 3856854 00:42:51.260 Received shutdown signal, test time was about 5.000000 seconds 00:42:51.260 00:42:51.260 Latency(us) 00:42:51.260 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:51.260 =================================================================================================================== 00:42:51.260 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:42:51.260 11:24:57 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@974 -- # wait 3856854 00:42:52.634 11:24:59 blockdev_crypto_sw.bdev_crypto_enomem -- bdev/blockdev.sh@664 -- # trap - SIGINT SIGTERM EXIT 00:42:52.634 00:42:52.634 real 0m8.087s 00:42:52.634 user 0m8.172s 00:42:52.634 sys 0m0.521s 00:42:52.634 11:24:59 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:52.634 11:24:59 blockdev_crypto_sw.bdev_crypto_enomem -- common/autotest_common.sh@10 -- # set +x 00:42:52.634 ************************************ 00:42:52.634 END TEST bdev_crypto_enomem 00:42:52.634 ************************************ 00:42:52.634 11:24:59 blockdev_crypto_sw -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:42:52.634 11:24:59 blockdev_crypto_sw -- bdev/blockdev.sh@810 -- # 
cleanup 00:42:52.634 11:24:59 blockdev_crypto_sw -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:42:52.634 11:24:59 blockdev_crypto_sw -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:42:52.634 11:24:59 blockdev_crypto_sw -- bdev/blockdev.sh@26 -- # [[ crypto_sw == rbd ]] 00:42:52.634 11:24:59 blockdev_crypto_sw -- bdev/blockdev.sh@30 -- # [[ crypto_sw == daos ]] 00:42:52.634 11:24:59 blockdev_crypto_sw -- bdev/blockdev.sh@34 -- # [[ crypto_sw = \g\p\t ]] 00:42:52.634 11:24:59 blockdev_crypto_sw -- bdev/blockdev.sh@40 -- # [[ crypto_sw == xnvme ]] 00:42:52.634 00:42:52.634 real 1m18.981s 00:42:52.634 user 2m18.658s 00:42:52.634 sys 0m8.462s 00:42:52.634 11:24:59 blockdev_crypto_sw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:42:52.634 11:24:59 blockdev_crypto_sw -- common/autotest_common.sh@10 -- # set +x 00:42:52.634 ************************************ 00:42:52.634 END TEST blockdev_crypto_sw 00:42:52.634 ************************************ 00:42:52.634 11:24:59 -- spdk/autotest.sh@363 -- # run_test blockdev_crypto_qat /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_qat 00:42:52.634 11:24:59 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:42:52.634 11:24:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:42:52.634 11:24:59 -- common/autotest_common.sh@10 -- # set +x 00:42:52.634 ************************************ 00:42:52.634 START TEST blockdev_crypto_qat 00:42:52.634 ************************************ 00:42:52.634 11:24:59 blockdev_crypto_qat -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh crypto_qat 00:42:52.634 * Looking for test storage... 
00:42:52.634 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:42:52.634 11:24:59 blockdev_crypto_qat -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:42:52.634 11:24:59 blockdev_crypto_qat -- bdev/nbd_common.sh@6 -- # set -e 00:42:52.634 11:24:59 blockdev_crypto_qat -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:42:52.634 11:24:59 blockdev_crypto_qat -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:42:52.634 11:24:59 blockdev_crypto_qat -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:42:52.634 11:24:59 blockdev_crypto_qat -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:42:52.634 11:24:59 blockdev_crypto_qat -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:42:52.634 11:24:59 blockdev_crypto_qat -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:42:52.634 11:24:59 blockdev_crypto_qat -- bdev/blockdev.sh@20 -- # : 00:42:52.634 11:24:59 blockdev_crypto_qat -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:42:52.634 11:24:59 blockdev_crypto_qat -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:42:52.634 11:24:59 blockdev_crypto_qat -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:42:52.634 11:24:59 blockdev_crypto_qat -- bdev/blockdev.sh@673 -- # uname -s 00:42:52.634 11:24:59 blockdev_crypto_qat -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:42:52.634 11:24:59 blockdev_crypto_qat -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:42:52.634 11:24:59 blockdev_crypto_qat -- bdev/blockdev.sh@681 -- # test_type=crypto_qat 00:42:52.634 11:24:59 blockdev_crypto_qat -- bdev/blockdev.sh@682 -- # crypto_device= 00:42:52.634 11:24:59 blockdev_crypto_qat -- bdev/blockdev.sh@683 -- # dek= 00:42:52.634 11:24:59 blockdev_crypto_qat -- bdev/blockdev.sh@684 -- # 
env_ctx= 00:42:52.634 11:24:59 blockdev_crypto_qat -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:42:52.634 11:24:59 blockdev_crypto_qat -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:42:52.634 11:24:59 blockdev_crypto_qat -- bdev/blockdev.sh@689 -- # [[ crypto_qat == bdev ]] 00:42:52.634 11:24:59 blockdev_crypto_qat -- bdev/blockdev.sh@689 -- # [[ crypto_qat == crypto_* ]] 00:42:52.634 11:24:59 blockdev_crypto_qat -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc 00:42:52.635 11:24:59 blockdev_crypto_qat -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:42:52.635 11:24:59 blockdev_crypto_qat -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=3858237 00:42:52.635 11:24:59 blockdev_crypto_qat -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:42:52.635 11:24:59 blockdev_crypto_qat -- bdev/blockdev.sh@49 -- # waitforlisten 3858237 00:42:52.635 11:24:59 blockdev_crypto_qat -- common/autotest_common.sh@831 -- # '[' -z 3858237 ']' 00:42:52.635 11:24:59 blockdev_crypto_qat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:52.635 11:24:59 blockdev_crypto_qat -- common/autotest_common.sh@836 -- # local max_retries=100 00:42:52.635 11:24:59 blockdev_crypto_qat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:52.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:52.635 11:24:59 blockdev_crypto_qat -- common/autotest_common.sh@840 -- # xtrace_disable 00:42:52.635 11:24:59 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:42:52.635 11:24:59 blockdev_crypto_qat -- bdev/blockdev.sh@46 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:42:52.635 [2024-07-25 11:24:59.709427] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:42:52.635 [2024-07-25 11:24:59.709550] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3858237 ] 00:42:52.893 [2024-07-25 11:24:59.938255] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:53.151 [2024-07-25 11:25:00.230751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:42:53.716 11:25:00 blockdev_crypto_qat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:42:53.716 11:25:00 blockdev_crypto_qat -- common/autotest_common.sh@864 -- # return 0 00:42:53.716 11:25:00 blockdev_crypto_qat -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:42:53.716 11:25:00 blockdev_crypto_qat -- bdev/blockdev.sh@707 -- # setup_crypto_qat_conf 00:42:53.716 11:25:00 blockdev_crypto_qat -- bdev/blockdev.sh@169 -- # rpc_cmd 00:42:53.716 11:25:00 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:53.716 11:25:00 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:42:53.974 [2024-07-25 11:25:00.837096] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:42:53.974 [2024-07-25 11:25:00.845160] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:42:53.974 [2024-07-25 11:25:00.853173]
accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:42:54.232 [2024-07-25 11:25:01.211520] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:42:58.416 true 00:42:58.416 true 00:42:58.416 true 00:42:58.416 true 00:42:58.416 Malloc0 00:42:58.416 Malloc1 00:42:58.416 Malloc2 00:42:58.416 Malloc3 00:42:58.416 [2024-07-25 11:25:05.032941] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:42:58.416 crypto_ram 00:42:58.416 [2024-07-25 11:25:05.041037] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:42:58.416 crypto_ram1 00:42:58.416 [2024-07-25 11:25:05.049172] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:42:58.416 crypto_ram2 00:42:58.416 [2024-07-25 11:25:05.057205] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:42:58.416 crypto_ram3 00:42:58.416 [ 00:42:58.416 { 00:42:58.416 "name": "Malloc1", 00:42:58.416 "aliases": [ 00:42:58.416 "df38364e-2506-4111-a407-1f0bbbb46615" 00:42:58.416 ], 00:42:58.416 "product_name": "Malloc disk", 00:42:58.416 "block_size": 512, 00:42:58.416 "num_blocks": 65536, 00:42:58.416 "uuid": "df38364e-2506-4111-a407-1f0bbbb46615", 00:42:58.416 "assigned_rate_limits": { 00:42:58.416 "rw_ios_per_sec": 0, 00:42:58.416 "rw_mbytes_per_sec": 0, 00:42:58.416 "r_mbytes_per_sec": 0, 00:42:58.416 "w_mbytes_per_sec": 0 00:42:58.416 }, 00:42:58.416 "claimed": true, 00:42:58.416 "claim_type": "exclusive_write", 00:42:58.416 "zoned": false, 00:42:58.416 "supported_io_types": { 00:42:58.416 "read": true, 00:42:58.416 "write": true, 00:42:58.416 "unmap": true, 00:42:58.416 "flush": true, 00:42:58.416 "reset": true, 00:42:58.416 "nvme_admin": false, 00:42:58.416 "nvme_io": false, 00:42:58.416 "nvme_io_md": false, 00:42:58.416 "write_zeroes": true, 00:42:58.416 "zcopy": true, 00:42:58.416 
"get_zone_info": false, 00:42:58.416 "zone_management": false, 00:42:58.416 "zone_append": false, 00:42:58.416 "compare": false, 00:42:58.416 "compare_and_write": false, 00:42:58.416 "abort": true, 00:42:58.416 "seek_hole": false, 00:42:58.416 "seek_data": false, 00:42:58.416 "copy": true, 00:42:58.416 "nvme_iov_md": false 00:42:58.416 }, 00:42:58.416 "memory_domains": [ 00:42:58.416 { 00:42:58.416 "dma_device_id": "system", 00:42:58.416 "dma_device_type": 1 00:42:58.416 }, 00:42:58.416 { 00:42:58.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:42:58.416 "dma_device_type": 2 00:42:58.416 } 00:42:58.416 ], 00:42:58.416 "driver_specific": {} 00:42:58.416 } 00:42:58.416 ] 00:42:58.416 11:25:05 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:58.416 11:25:05 blockdev_crypto_qat -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:42:58.416 11:25:05 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:58.416 11:25:05 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:42:58.416 11:25:05 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:58.416 11:25:05 blockdev_crypto_qat -- bdev/blockdev.sh@739 -- # cat 00:42:58.416 11:25:05 blockdev_crypto_qat -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:42:58.416 11:25:05 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:58.416 11:25:05 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:42:58.416 11:25:05 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:58.416 11:25:05 blockdev_crypto_qat -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:42:58.416 11:25:05 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:58.416 11:25:05 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:42:58.416 11:25:05 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:42:58.416 11:25:05 blockdev_crypto_qat -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:42:58.416 11:25:05 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:58.416 11:25:05 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:42:58.416 11:25:05 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:58.416 11:25:05 blockdev_crypto_qat -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:42:58.416 11:25:05 blockdev_crypto_qat -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:42:58.416 11:25:05 blockdev_crypto_qat -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:42:58.416 11:25:05 blockdev_crypto_qat -- common/autotest_common.sh@561 -- # xtrace_disable 00:42:58.416 11:25:05 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:42:58.416 11:25:05 blockdev_crypto_qat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:42:58.416 11:25:05 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:42:58.416 11:25:05 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # jq -r .name 00:42:58.417 11:25:05 blockdev_crypto_qat -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "537db023-938a-596a-9ab8-e2b72c6a288a"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "537db023-938a-596a-9ab8-e2b72c6a288a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' 
"seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "137976db-ab79-5571-b56a-9062baf2ea18"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "137976db-ab79-5571-b56a-9062baf2ea18",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "f10d28a0-dea7-57ab-a564-7b1b64fdea7d"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "f10d28a0-dea7-57ab-a564-7b1b64fdea7d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": 
false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "84f55873-803a-5aee-9d96-da8c7ca39489"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "84f55873-803a-5aee-9d96-da8c7ca39489",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:42:58.417 11:25:05 blockdev_crypto_qat -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:42:58.417 11:25:05 blockdev_crypto_qat -- bdev/blockdev.sh@751 -- # hello_world_bdev=crypto_ram 00:42:58.417 11:25:05 blockdev_crypto_qat -- bdev/blockdev.sh@752 -- # trap 
- SIGINT SIGTERM EXIT 00:42:58.417 11:25:05 blockdev_crypto_qat -- bdev/blockdev.sh@753 -- # killprocess 3858237 00:42:58.417 11:25:05 blockdev_crypto_qat -- common/autotest_common.sh@950 -- # '[' -z 3858237 ']' 00:42:58.417 11:25:05 blockdev_crypto_qat -- common/autotest_common.sh@954 -- # kill -0 3858237 00:42:58.417 11:25:05 blockdev_crypto_qat -- common/autotest_common.sh@955 -- # uname 00:42:58.417 11:25:05 blockdev_crypto_qat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:42:58.417 11:25:05 blockdev_crypto_qat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3858237 00:42:58.417 11:25:05 blockdev_crypto_qat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:42:58.417 11:25:05 blockdev_crypto_qat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:42:58.417 11:25:05 blockdev_crypto_qat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3858237' 00:42:58.417 killing process with pid 3858237 00:42:58.417 11:25:05 blockdev_crypto_qat -- common/autotest_common.sh@969 -- # kill 3858237 00:42:58.417 11:25:05 blockdev_crypto_qat -- common/autotest_common.sh@974 -- # wait 3858237 00:43:02.598 11:25:09 blockdev_crypto_qat -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:43:02.598 11:25:09 blockdev_crypto_qat -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:43:02.598 11:25:09 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:43:02.598 11:25:09 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:02.598 11:25:09 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:43:02.598 ************************************ 00:43:02.598 START TEST bdev_hello_world 00:43:02.598 ************************************ 00:43:02.598 11:25:09 
blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b crypto_ram '' 00:43:02.598 [2024-07-25 11:25:09.672920] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:43:02.598 [2024-07-25 11:25:09.673029] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3859830 ] 00:43:02.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:02.856 EAL: Requested device 0000:3d:01.0 cannot be used 00:43:02.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:02.856 EAL: Requested device 0000:3d:01.1 cannot be used 00:43:02.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:02.856 EAL: Requested device 0000:3d:01.2 cannot be used 00:43:02.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:02.856 EAL: Requested device 0000:3d:01.3 cannot be used 00:43:02.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:02.856 EAL: Requested device 0000:3d:01.4 cannot be used 00:43:02.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:02.856 EAL: Requested device 0000:3d:01.5 cannot be used 00:43:02.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:02.856 EAL: Requested device 0000:3d:01.6 cannot be used 00:43:02.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:02.856 EAL: Requested device 0000:3d:01.7 cannot be used 00:43:02.856 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:02.856 EAL: Requested device 0000:3d:02.0 cannot be used 00:43:02.856 qat_pci_device_allocate(): Reached maximum number of 
QAT devices 00:43:02.856 EAL: Requested device 0000:3d:02.1 cannot be used 00:43:02.856 [... the same qat_pci_device_allocate() / "EAL: Requested device … cannot be used" pair repeats for each remaining device function through 0000:3f:02.7 ...] 00:43:02.856 [2024-07-25 11:25:09.896674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:03.114 [2024-07-25 11:25:10.176406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:03.114 [2024-07-25 11:25:10.198172] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:43:03.114 [2024-07-25 11:25:10.206194] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:43:03.114 [2024-07-25 11:25:10.214206] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:43:03.681 [2024-07-25 11:25:10.601763] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:43:06.961 [2024-07-25
11:25:13.418696] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:43:06.961 [2024-07-25 11:25:13.418775] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:43:06.961 [2024-07-25 11:25:13.418795] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:43:06.961 [2024-07-25 11:25:13.426716] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:43:06.961 [2024-07-25 11:25:13.426755] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:43:06.961 [2024-07-25 11:25:13.426772] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:43:06.961 [2024-07-25 11:25:13.434752] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:43:06.961 [2024-07-25 11:25:13.434790] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:43:06.961 [2024-07-25 11:25:13.434805] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:43:06.961 [2024-07-25 11:25:13.442745] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:43:06.961 [2024-07-25 11:25:13.442783] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:43:06.961 [2024-07-25 11:25:13.442799] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:43:06.961 [2024-07-25 11:25:13.703160] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:43:06.961 [2024-07-25 11:25:13.703205] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev crypto_ram 00:43:06.961 [2024-07-25 11:25:13.703232] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:43:06.961 [2024-07-25 11:25:13.705502] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to 
the bdev 00:43:06.961 [2024-07-25 11:25:13.705606] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:43:06.961 [2024-07-25 11:25:13.705628] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:43:06.961 [2024-07-25 11:25:13.705690] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:43:06.961 00:43:06.961 [2024-07-25 11:25:13.705715] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:43:09.487 00:43:09.487 real 0m6.594s 00:43:09.487 user 0m6.005s 00:43:09.487 sys 0m0.535s 00:43:09.487 11:25:16 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:09.487 11:25:16 blockdev_crypto_qat.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:43:09.487 ************************************ 00:43:09.487 END TEST bdev_hello_world 00:43:09.487 ************************************ 00:43:09.487 11:25:16 blockdev_crypto_qat -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:43:09.487 11:25:16 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:43:09.487 11:25:16 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:09.487 11:25:16 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:43:09.487 ************************************ 00:43:09.487 START TEST bdev_bounds 00:43:09.487 ************************************ 00:43:09.487 11:25:16 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:43:09.487 11:25:16 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=3860902 00:43:09.487 11:25:16 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:43:09.487 11:25:16 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 3860902' 00:43:09.487 Process bdevio pid: 3860902 00:43:09.487 11:25:16 blockdev_crypto_qat.bdev_bounds 
-- bdev/blockdev.sh@292 -- # waitforlisten 3860902 00:43:09.487 11:25:16 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 3860902 ']' 00:43:09.487 11:25:16 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:09.487 11:25:16 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:09.487 11:25:16 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:09.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:09.487 11:25:16 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:09.487 11:25:16 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@288 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:43:09.487 11:25:16 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:43:09.487 [2024-07-25 11:25:16.347342] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:43:09.487 [2024-07-25 11:25:16.347463] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3860902 ] 00:43:09.487 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:09.487 EAL: Requested device 0000:3d:01.0 cannot be used 00:43:09.487 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:09.487 EAL: Requested device 0000:3d:01.1 cannot be used 00:43:09.487 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:09.487 EAL: Requested device 0000:3d:01.2 cannot be used 00:43:09.487 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:09.487 EAL: Requested device 0000:3d:01.3 cannot be used 00:43:09.487 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:09.487 EAL: Requested device 0000:3d:01.4 cannot be used 00:43:09.487 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:09.487 EAL: Requested device 0000:3d:01.5 cannot be used 00:43:09.487 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:09.487 EAL: Requested device 0000:3d:01.6 cannot be used 00:43:09.487 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:09.487 EAL: Requested device 0000:3d:01.7 cannot be used 00:43:09.487 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:09.487 EAL: Requested device 0000:3d:02.0 cannot be used 00:43:09.487 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:09.487 EAL: Requested device 0000:3d:02.1 cannot be used 00:43:09.487 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:09.487 EAL: Requested device 0000:3d:02.2 cannot be used 00:43:09.487 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:09.487 EAL: Requested device 0000:3d:02.3 cannot be used 
00:43:09.487 [... the same qat_pci_device_allocate() / "EAL: Requested device … cannot be used" pair repeats for each remaining device function through 0000:3f:02.7 ...] 00:43:09.487 [2024-07-25 11:25:16.575309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:43:10.052 [2024-07-25 11:25:16.870571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:43:10.052 [2024-07-25 11:25:16.870588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:43:10.052 [2024-07-25 11:25:16.870588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:10.052 [2024-07-25 11:25:16.892449] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:43:10.052 [2024-07-25 11:25:16.900464] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:43:10.052 [2024-07-25 11:25:16.908497] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:43:10.309 [2024-07-25 11:25:17.308089] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:43:13.649 [2024-07-25 11:25:20.122921] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:43:13.649 [2024-07-25 11:25:20.122998]
bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:43:13.649 [2024-07-25 11:25:20.123021] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:43:13.649 [2024-07-25 11:25:20.130936] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:43:13.649 [2024-07-25 11:25:20.130973] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:43:13.649 [2024-07-25 11:25:20.130989] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:43:13.649 [2024-07-25 11:25:20.138993] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:43:13.649 [2024-07-25 11:25:20.139026] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:43:13.649 [2024-07-25 11:25:20.139041] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:43:13.649 [2024-07-25 11:25:20.146988] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:43:13.649 [2024-07-25 11:25:20.147039] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:43:13.649 [2024-07-25 11:25:20.147057] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:43:13.649 11:25:20 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:13.649 11:25:20 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:43:13.649 11:25:20 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@293 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:43:13.649 I/O targets: 00:43:13.649 crypto_ram: 65536 blocks of 512 bytes (32 MiB) 00:43:13.649 crypto_ram1: 65536 blocks of 512 bytes (32 MiB) 00:43:13.649 crypto_ram2: 8192 blocks of 4096 
bytes (32 MiB) 00:43:13.649 crypto_ram3: 8192 blocks of 4096 bytes (32 MiB) 00:43:13.649 00:43:13.649 00:43:13.649 CUnit - A unit testing framework for C - Version 2.1-3 00:43:13.649 http://cunit.sourceforge.net/ 00:43:13.649 00:43:13.649 00:43:13.649 Suite: bdevio tests on: crypto_ram3 00:43:13.649 Test: blockdev write read block ...passed 00:43:13.649 Test: blockdev write zeroes read block ...passed 00:43:13.908 Test: blockdev write zeroes read no split ...passed 00:43:13.908 Test: blockdev write zeroes read split ...passed 00:43:13.908 Test: blockdev write zeroes read split partial ...passed 00:43:13.908 Test: blockdev reset ...passed 00:43:13.908 Test: blockdev write read 8 blocks ...passed 00:43:13.908 Test: blockdev write read size > 128k ...passed 00:43:13.908 Test: blockdev write read invalid size ...passed 00:43:13.908 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:43:13.908 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:43:13.908 Test: blockdev write read max offset ...passed 00:43:13.908 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:43:13.908 Test: blockdev writev readv 8 blocks ...passed 00:43:13.908 Test: blockdev writev readv 30 x 1block ...passed 00:43:13.908 Test: blockdev writev readv block ...passed 00:43:13.908 Test: blockdev writev readv size > 128k ...passed 00:43:13.908 Test: blockdev writev readv size > 128k in two iovs ...passed 00:43:13.908 Test: blockdev comparev and writev ...passed 00:43:13.908 Test: blockdev nvme passthru rw ...passed 00:43:13.908 Test: blockdev nvme passthru vendor specific ...passed 00:43:13.908 Test: blockdev nvme admin passthru ...passed 00:43:13.908 Test: blockdev copy ...passed 00:43:13.908 Suite: bdevio tests on: crypto_ram2 00:43:13.908 Test: blockdev write read block ...passed 00:43:13.908 Test: blockdev write zeroes read block ...passed 00:43:13.908 Test: blockdev write zeroes read no split ...passed 00:43:13.908 Test: 
blockdev write zeroes read split ...passed 00:43:13.908 Test: blockdev write zeroes read split partial ...passed 00:43:13.908 Test: blockdev reset ...passed 00:43:13.908 Test: blockdev write read 8 blocks ...passed 00:43:13.908 Test: blockdev write read size > 128k ...passed 00:43:13.908 Test: blockdev write read invalid size ...passed 00:43:13.908 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:43:13.908 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:43:13.908 Test: blockdev write read max offset ...passed 00:43:13.908 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:43:13.908 Test: blockdev writev readv 8 blocks ...passed 00:43:13.908 Test: blockdev writev readv 30 x 1block ...passed 00:43:13.908 Test: blockdev writev readv block ...passed 00:43:13.908 Test: blockdev writev readv size > 128k ...passed 00:43:13.908 Test: blockdev writev readv size > 128k in two iovs ...passed 00:43:13.908 Test: blockdev comparev and writev ...passed 00:43:13.908 Test: blockdev nvme passthru rw ...passed 00:43:13.908 Test: blockdev nvme passthru vendor specific ...passed 00:43:13.908 Test: blockdev nvme admin passthru ...passed 00:43:13.908 Test: blockdev copy ...passed 00:43:13.908 Suite: bdevio tests on: crypto_ram1 00:43:13.908 Test: blockdev write read block ...passed 00:43:13.908 Test: blockdev write zeroes read block ...passed 00:43:13.908 Test: blockdev write zeroes read no split ...passed 00:43:14.166 Test: blockdev write zeroes read split ...passed 00:43:14.166 Test: blockdev write zeroes read split partial ...passed 00:43:14.166 Test: blockdev reset ...passed 00:43:14.166 Test: blockdev write read 8 blocks ...passed 00:43:14.166 Test: blockdev write read size > 128k ...passed 00:43:14.166 Test: blockdev write read invalid size ...passed 00:43:14.166 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:43:14.166 Test: blockdev write read offset + nbytes > size of 
blockdev ...passed 00:43:14.166 Test: blockdev write read max offset ...passed 00:43:14.166 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:43:14.166 Test: blockdev writev readv 8 blocks ...passed 00:43:14.166 Test: blockdev writev readv 30 x 1block ...passed 00:43:14.166 Test: blockdev writev readv block ...passed 00:43:14.166 Test: blockdev writev readv size > 128k ...passed 00:43:14.166 Test: blockdev writev readv size > 128k in two iovs ...passed 00:43:14.166 Test: blockdev comparev and writev ...passed 00:43:14.166 Test: blockdev nvme passthru rw ...passed 00:43:14.166 Test: blockdev nvme passthru vendor specific ...passed 00:43:14.166 Test: blockdev nvme admin passthru ...passed 00:43:14.166 Test: blockdev copy ...passed 00:43:14.166 Suite: bdevio tests on: crypto_ram 00:43:14.166 Test: blockdev write read block ...passed 00:43:14.166 Test: blockdev write zeroes read block ...passed 00:43:14.166 Test: blockdev write zeroes read no split ...passed 00:43:14.166 Test: blockdev write zeroes read split ...passed 00:43:14.425 Test: blockdev write zeroes read split partial ...passed 00:43:14.425 Test: blockdev reset ...passed 00:43:14.425 Test: blockdev write read 8 blocks ...passed 00:43:14.425 Test: blockdev write read size > 128k ...passed 00:43:14.425 Test: blockdev write read invalid size ...passed 00:43:14.425 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:43:14.425 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:43:14.425 Test: blockdev write read max offset ...passed 00:43:14.425 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:43:14.425 Test: blockdev writev readv 8 blocks ...passed 00:43:14.425 Test: blockdev writev readv 30 x 1block ...passed 00:43:14.425 Test: blockdev writev readv block ...passed 00:43:14.425 Test: blockdev writev readv size > 128k ...passed 00:43:14.425 Test: blockdev writev readv size > 128k in two iovs ...passed 00:43:14.425 
Test: blockdev comparev and writev ...passed 00:43:14.425 Test: blockdev nvme passthru rw ...passed 00:43:14.425 Test: blockdev nvme passthru vendor specific ...passed 00:43:14.425 Test: blockdev nvme admin passthru ...passed 00:43:14.425 Test: blockdev copy ...passed 00:43:14.425 00:43:14.425 Run Summary: Type Total Ran Passed Failed Inactive 00:43:14.425 suites 4 4 n/a 0 0 00:43:14.425 tests 92 92 92 0 0 00:43:14.425 asserts 520 520 520 0 n/a 00:43:14.425 00:43:14.425 Elapsed time = 1.541 seconds 00:43:14.425 0 00:43:14.425 11:25:21 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 3860902 00:43:14.425 11:25:21 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 3860902 ']' 00:43:14.425 11:25:21 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 3860902 00:43:14.425 11:25:21 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:43:14.425 11:25:21 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:43:14.425 11:25:21 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3860902 00:43:14.425 11:25:21 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:43:14.425 11:25:21 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:43:14.425 11:25:21 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3860902' 00:43:14.425 killing process with pid 3860902 00:43:14.425 11:25:21 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@969 -- # kill 3860902 00:43:14.425 11:25:21 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@974 -- # wait 3860902 00:43:16.955 11:25:23 blockdev_crypto_qat.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:43:16.955 00:43:16.955 real 0m7.758s 00:43:16.955 user 0m20.968s 00:43:16.955 sys 
0m0.786s 00:43:16.955 11:25:23 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:16.955 11:25:24 blockdev_crypto_qat.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:43:16.955 ************************************ 00:43:16.955 END TEST bdev_bounds 00:43:16.955 ************************************ 00:43:16.955 11:25:24 blockdev_crypto_qat -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '' 00:43:16.955 11:25:24 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:43:16.955 11:25:24 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:16.955 11:25:24 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:43:17.214 ************************************ 00:43:17.214 START TEST bdev_nbd 00:43:17.214 ************************************ 00:43:17.214 11:25:24 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '' 00:43:17.214 11:25:24 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:43:17.214 11:25:24 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:43:17.214 11:25:24 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:17.214 11:25:24 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:43:17.214 11:25:24 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:43:17.214 11:25:24 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:43:17.214 11:25:24 blockdev_crypto_qat.bdev_nbd -- 
bdev/blockdev.sh@304 -- # local bdev_num=4 00:43:17.214 11:25:24 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:43:17.214 11:25:24 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:43:17.214 11:25:24 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:43:17.214 11:25:24 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=4 00:43:17.214 11:25:24 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:43:17.214 11:25:24 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:43:17.214 11:25:24 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:43:17.214 11:25:24 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:43:17.214 11:25:24 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=3862240 00:43:17.214 11:25:24 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@316 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:43:17.214 11:25:24 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:43:17.214 11:25:24 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 3862240 /var/tmp/spdk-nbd.sock 00:43:17.214 11:25:24 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 3862240 ']' 00:43:17.214 11:25:24 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:43:17.214 11:25:24 
blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:43:17.214 11:25:24 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:43:17.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:43:17.214 11:25:24 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:43:17.214 11:25:24 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:43:17.214 [2024-07-25 11:25:24.194032] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:43:17.214 [2024-07-25 11:25:24.194163] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:17.214 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.214 EAL: Requested device 0000:3d:01.0 cannot be used 00:43:17.214 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.214 EAL: Requested device 0000:3d:01.1 cannot be used 00:43:17.214 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.214 EAL: Requested device 0000:3d:01.2 cannot be used 00:43:17.214 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.214 EAL: Requested device 0000:3d:01.3 cannot be used 00:43:17.214 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.214 EAL: Requested device 0000:3d:01.4 cannot be used 00:43:17.214 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.214 EAL: Requested device 0000:3d:01.5 cannot be used 00:43:17.214 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.214 EAL: Requested device 0000:3d:01.6 cannot be used 00:43:17.214 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.214 EAL: Requested device 0000:3d:01.7 cannot be used 00:43:17.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.472 EAL: Requested device 0000:3d:02.0 cannot be used 00:43:17.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.472 EAL: Requested device 0000:3d:02.1 cannot be used 00:43:17.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.472 EAL: Requested device 0000:3d:02.2 cannot be used 00:43:17.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.472 EAL: Requested device 0000:3d:02.3 cannot be used 00:43:17.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.472 EAL: Requested device 0000:3d:02.4 cannot be used 00:43:17.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.472 EAL: Requested device 0000:3d:02.5 cannot be used 00:43:17.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.472 EAL: Requested device 0000:3d:02.6 cannot be used 00:43:17.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.472 EAL: Requested device 0000:3d:02.7 cannot be used 00:43:17.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.472 EAL: Requested device 0000:3f:01.0 cannot be used 00:43:17.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.472 EAL: Requested device 0000:3f:01.1 cannot be used 00:43:17.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.472 EAL: Requested device 0000:3f:01.2 cannot be used 00:43:17.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.472 EAL: Requested device 0000:3f:01.3 cannot be used 00:43:17.472 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.472 EAL: Requested device 0000:3f:01.4 cannot be used 00:43:17.473 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.473 EAL: Requested device 0000:3f:01.5 cannot be used 00:43:17.473 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.473 EAL: Requested device 0000:3f:01.6 cannot be used 00:43:17.473 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.473 EAL: Requested device 0000:3f:01.7 cannot be used 00:43:17.473 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.473 EAL: Requested device 0000:3f:02.0 cannot be used 00:43:17.473 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.473 EAL: Requested device 0000:3f:02.1 cannot be used 00:43:17.473 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.473 EAL: Requested device 0000:3f:02.2 cannot be used 00:43:17.473 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.473 EAL: Requested device 0000:3f:02.3 cannot be used 00:43:17.473 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.473 EAL: Requested device 0000:3f:02.4 cannot be used 00:43:17.473 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.473 EAL: Requested device 0000:3f:02.5 cannot be used 00:43:17.473 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.473 EAL: Requested device 0000:3f:02.6 cannot be used 00:43:17.473 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:17.473 EAL: Requested device 0000:3f:02.7 cannot be used 00:43:17.473 [2024-07-25 11:25:24.420666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:17.731 [2024-07-25 11:25:24.707475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:43:17.731 [2024-07-25 11:25:24.729255] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:43:17.731 [2024-07-25 11:25:24.737292] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will 
be assigned to module dpdk_cryptodev 00:43:17.731 [2024-07-25 11:25:24.745315] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:43:18.297 [2024-07-25 11:25:25.137876] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:43:21.582 [2024-07-25 11:25:28.004369] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:43:21.582 [2024-07-25 11:25:28.004436] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:43:21.582 [2024-07-25 11:25:28.004456] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:43:21.582 [2024-07-25 11:25:28.012386] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:43:21.582 [2024-07-25 11:25:28.012426] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:43:21.582 [2024-07-25 11:25:28.012443] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:43:21.582 [2024-07-25 11:25:28.020432] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:43:21.582 [2024-07-25 11:25:28.020469] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:43:21.582 [2024-07-25 11:25:28.020484] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:43:21.582 [2024-07-25 11:25:28.028412] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:43:21.582 [2024-07-25 11:25:28.028448] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:43:21.582 [2024-07-25 11:25:28.028463] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:43:21.582 11:25:28 blockdev_crypto_qat.bdev_nbd -- 
common/autotest_common.sh@860 -- # (( i == 0 )) 00:43:21.582 11:25:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:43:21.582 11:25:28 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' 00:43:21.582 11:25:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:21.582 11:25:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:43:21.582 11:25:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:43:21.582 11:25:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' 00:43:21.582 11:25:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:21.582 11:25:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:43:21.582 11:25:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:43:21.582 11:25:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:43:21.582 11:25:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:43:21.582 11:25:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:43:21.582 11:25:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:43:21.582 11:25:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram 00:43:21.582 11:25:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:43:21.582 11:25:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
basename /dev/nbd0 00:43:21.840 11:25:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:43:21.840 11:25:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:43:21.840 11:25:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:43:21.840 11:25:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:43:21.840 11:25:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:43:21.840 11:25:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:43:21.840 11:25:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:43:21.840 11:25:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:43:21.840 11:25:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:43:21.840 11:25:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:21.840 1+0 records in 00:43:21.840 1+0 records out 00:43:21.840 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299204 s, 13.7 MB/s 00:43:21.840 11:25:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:21.840 11:25:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:43:21.840 11:25:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:21.840 11:25:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:43:21.840 11:25:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:43:21.840 11:25:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( 
i++ )) 00:43:21.840 11:25:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:43:21.840 11:25:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram1 00:43:22.099 11:25:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:43:22.099 11:25:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:43:22.099 11:25:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:43:22.099 11:25:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:43:22.099 11:25:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:43:22.099 11:25:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:43:22.099 11:25:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:43:22.099 11:25:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:43:22.099 11:25:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:43:22.099 11:25:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:43:22.099 11:25:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:43:22.099 11:25:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:22.099 1+0 records in 00:43:22.099 1+0 records out 00:43:22.099 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263919 s, 15.5 MB/s 00:43:22.099 11:25:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:22.099 11:25:28 blockdev_crypto_qat.bdev_nbd -- 
common/autotest_common.sh@886 -- # size=4096 00:43:22.099 11:25:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:22.099 11:25:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:43:22.099 11:25:28 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:43:22.099 11:25:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:43:22.099 11:25:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:43:22.099 11:25:28 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 00:43:22.357 11:25:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:43:22.357 11:25:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:43:22.357 11:25:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:43:22.357 11:25:29 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:43:22.357 11:25:29 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:43:22.357 11:25:29 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:43:22.357 11:25:29 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:43:22.357 11:25:29 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:43:22.358 11:25:29 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:43:22.358 11:25:29 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:43:22.358 11:25:29 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:43:22.358 11:25:29 blockdev_crypto_qat.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:22.358 1+0 records in 00:43:22.358 1+0 records out 00:43:22.358 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341285 s, 12.0 MB/s 00:43:22.358 11:25:29 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:22.358 11:25:29 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:43:22.358 11:25:29 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:22.358 11:25:29 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:43:22.358 11:25:29 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:43:22.358 11:25:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:43:22.358 11:25:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:43:22.358 11:25:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 00:43:22.616 11:25:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:43:22.616 11:25:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:43:22.616 11:25:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:43:22.616 11:25:29 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:43:22.616 11:25:29 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:43:22.616 11:25:29 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:43:22.616 11:25:29 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 
20 )) 00:43:22.616 11:25:29 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:43:22.616 11:25:29 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:43:22.616 11:25:29 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:43:22.616 11:25:29 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:43:22.616 11:25:29 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:22.616 1+0 records in 00:43:22.616 1+0 records out 00:43:22.616 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381403 s, 10.7 MB/s 00:43:22.616 11:25:29 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:22.616 11:25:29 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:43:22.616 11:25:29 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:22.616 11:25:29 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:43:22.616 11:25:29 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:43:22.616 11:25:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:43:22.616 11:25:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 4 )) 00:43:22.616 11:25:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:43:22.875 11:25:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:43:22.875 { 00:43:22.875 "nbd_device": "/dev/nbd0", 00:43:22.875 "bdev_name": "crypto_ram" 00:43:22.875 }, 00:43:22.875 { 
00:43:22.875 "nbd_device": "/dev/nbd1", 00:43:22.875 "bdev_name": "crypto_ram1" 00:43:22.875 }, 00:43:22.875 { 00:43:22.875 "nbd_device": "/dev/nbd2", 00:43:22.875 "bdev_name": "crypto_ram2" 00:43:22.875 }, 00:43:22.875 { 00:43:22.875 "nbd_device": "/dev/nbd3", 00:43:22.875 "bdev_name": "crypto_ram3" 00:43:22.875 } 00:43:22.875 ]' 00:43:22.875 11:25:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:43:22.875 11:25:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:43:22.875 { 00:43:22.875 "nbd_device": "/dev/nbd0", 00:43:22.875 "bdev_name": "crypto_ram" 00:43:22.875 }, 00:43:22.875 { 00:43:22.875 "nbd_device": "/dev/nbd1", 00:43:22.875 "bdev_name": "crypto_ram1" 00:43:22.875 }, 00:43:22.875 { 00:43:22.875 "nbd_device": "/dev/nbd2", 00:43:22.875 "bdev_name": "crypto_ram2" 00:43:22.875 }, 00:43:22.875 { 00:43:22.875 "nbd_device": "/dev/nbd3", 00:43:22.875 "bdev_name": "crypto_ram3" 00:43:22.875 } 00:43:22.875 ]' 00:43:22.875 11:25:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:43:22.875 11:25:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3' 00:43:22.875 11:25:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:22.875 11:25:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3') 00:43:22.875 11:25:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:43:22.875 11:25:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:43:22.875 11:25:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:22.875 11:25:29 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:43:23.133 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:43:23.133 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:43:23.133 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:43:23.133 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:23.133 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:23.133 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:23.133 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:23.133 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:23.133 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:23.133 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:43:23.133 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:43:23.133 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:43:23.133 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:43:23.133 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:23.133 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:23.133 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:43:23.133 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:23.133 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:23.133 11:25:30 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:23.133 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:43:23.391 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:43:23.391 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:43:23.391 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:43:23.391 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:23.391 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:23.391 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:43:23.391 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:23.391 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:23.391 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:23.391 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:43:23.649 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:43:23.649 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:43:23.649 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:43:23.649 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:23.649 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:23.649 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:43:23.649 11:25:30 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:23.649 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:23.649 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:43:23.649 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:23.649 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:43:23.907 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:43:23.907 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:43:23.907 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:43:23.907 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:43:23.907 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:43:23.907 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:43:23.907 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:43:23.907 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:43:23.907 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:43:23.907 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:43:23.907 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:43:23.907 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:43:23.907 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:43:23.907 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@90 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:43:23.907 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:43:23.907 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:43:23.907 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:43:23.907 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:43:23.907 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'crypto_ram crypto_ram1 crypto_ram2 crypto_ram3' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:43:23.907 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:23.907 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('crypto_ram' 'crypto_ram1' 'crypto_ram2' 'crypto_ram3') 00:43:23.907 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:43:23.907 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:43:23.907 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:43:23.907 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:43:23.907 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:43:23.907 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:43:23.907 11:25:30 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram /dev/nbd0 00:43:24.165 /dev/nbd0 00:43:24.165 11:25:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:43:24.165 11:25:31 blockdev_crypto_qat.bdev_nbd -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:43:24.165 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:43:24.165 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:43:24.165 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:43:24.165 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:43:24.165 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:43:24.165 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:43:24.165 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:43:24.165 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:43:24.165 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:24.165 1+0 records in 00:43:24.165 1+0 records out 00:43:24.165 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285694 s, 14.3 MB/s 00:43:24.165 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:24.165 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:43:24.165 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:24.165 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:43:24.165 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:43:24.165 11:25:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:24.165 11:25:31 blockdev_crypto_qat.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:43:24.165 11:25:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram1 /dev/nbd1 00:43:24.422 /dev/nbd1 00:43:24.422 11:25:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:43:24.422 11:25:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:43:24.422 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:43:24.422 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:43:24.422 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:43:24.422 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:43:24.422 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:43:24.422 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:43:24.422 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:43:24.422 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:43:24.422 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:24.422 1+0 records in 00:43:24.422 1+0 records out 00:43:24.422 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039514 s, 10.4 MB/s 00:43:24.422 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:24.422 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:43:24.422 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:24.422 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:43:24.422 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:43:24.422 11:25:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:24.422 11:25:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:43:24.422 11:25:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram2 /dev/nbd10 00:43:24.680 /dev/nbd10 00:43:24.680 11:25:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:43:24.680 11:25:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:43:24.680 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:43:24.680 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:43:24.680 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:43:24.680 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:43:24.680 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:43:24.680 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:43:24.680 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:43:24.680 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:43:24.680 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:24.680 1+0 records in 00:43:24.680 1+0 records out 00:43:24.680 4096 
bytes (4.1 kB, 4.0 KiB) copied, 0.000326854 s, 12.5 MB/s 00:43:24.680 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:24.680 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:43:24.680 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:24.680 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:43:24.680 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:43:24.680 11:25:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:24.680 11:25:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:43:24.680 11:25:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk crypto_ram3 /dev/nbd11 00:43:24.938 /dev/nbd11 00:43:24.938 11:25:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:43:24.938 11:25:31 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:43:24.938 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:43:24.938 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:43:24.938 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:43:24.938 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:43:24.938 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:43:24.938 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:43:24.938 11:25:31 blockdev_crypto_qat.bdev_nbd -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:43:24.938 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:43:24.938 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:24.938 1+0 records in 00:43:24.938 1+0 records out 00:43:24.938 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329215 s, 12.4 MB/s 00:43:24.938 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:24.938 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:43:24.938 11:25:31 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:43:24.938 11:25:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:43:24.938 11:25:32 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:43:24.938 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:24.938 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 4 )) 00:43:24.938 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:43:24.938 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:24.938 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:43:25.195 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:43:25.195 { 00:43:25.195 "nbd_device": "/dev/nbd0", 00:43:25.195 "bdev_name": "crypto_ram" 00:43:25.195 }, 00:43:25.195 { 00:43:25.195 "nbd_device": "/dev/nbd1", 
00:43:25.195 "bdev_name": "crypto_ram1" 00:43:25.195 }, 00:43:25.195 { 00:43:25.195 "nbd_device": "/dev/nbd10", 00:43:25.195 "bdev_name": "crypto_ram2" 00:43:25.195 }, 00:43:25.195 { 00:43:25.196 "nbd_device": "/dev/nbd11", 00:43:25.196 "bdev_name": "crypto_ram3" 00:43:25.196 } 00:43:25.196 ]' 00:43:25.196 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:43:25.196 { 00:43:25.196 "nbd_device": "/dev/nbd0", 00:43:25.196 "bdev_name": "crypto_ram" 00:43:25.196 }, 00:43:25.196 { 00:43:25.196 "nbd_device": "/dev/nbd1", 00:43:25.196 "bdev_name": "crypto_ram1" 00:43:25.196 }, 00:43:25.196 { 00:43:25.196 "nbd_device": "/dev/nbd10", 00:43:25.196 "bdev_name": "crypto_ram2" 00:43:25.196 }, 00:43:25.196 { 00:43:25.196 "nbd_device": "/dev/nbd11", 00:43:25.196 "bdev_name": "crypto_ram3" 00:43:25.196 } 00:43:25.196 ]' 00:43:25.196 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:43:25.196 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:43:25.196 /dev/nbd1 00:43:25.196 /dev/nbd10 00:43:25.196 /dev/nbd11' 00:43:25.196 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:43:25.196 /dev/nbd1 00:43:25.196 /dev/nbd10 00:43:25.196 /dev/nbd11' 00:43:25.196 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:43:25.196 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=4 00:43:25.196 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 4 00:43:25.196 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=4 00:43:25.196 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 4 -ne 4 ']' 00:43:25.196 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' write 00:43:25.196 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:43:25.196 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:43:25.196 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:43:25.196 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:43:25.196 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:43:25.196 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:43:25.196 256+0 records in 00:43:25.196 256+0 records out 00:43:25.196 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114463 s, 91.6 MB/s 00:43:25.196 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:43:25.196 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:43:25.453 256+0 records in 00:43:25.453 256+0 records out 00:43:25.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0728717 s, 14.4 MB/s 00:43:25.453 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:43:25.453 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:43:25.453 256+0 records in 00:43:25.453 256+0 records out 00:43:25.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0494318 s, 21.2 MB/s 00:43:25.453 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:43:25.453 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:43:25.453 256+0 records in 00:43:25.453 256+0 records out 00:43:25.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0394148 s, 26.6 MB/s 00:43:25.453 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:43:25.454 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:43:25.454 256+0 records in 00:43:25.454 256+0 records out 00:43:25.454 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.036736 s, 28.5 MB/s 00:43:25.454 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' verify 00:43:25.454 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:43:25.454 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:43:25.454 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:43:25.454 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:43:25.454 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:43:25.454 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:43:25.454 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:43:25.454 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:43:25.454 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:43:25.454 11:25:32 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:43:25.454 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:43:25.454 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:43:25.454 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:43:25.454 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:43:25.711 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:43:25.711 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:43:25.711 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:25.711 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:43:25.711 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:43:25.711 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:43:25.711 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:25.711 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:43:25.711 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:43:25.711 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:43:25.711 
11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:43:25.711 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:25.711 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:25.711 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:25.711 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:25.711 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:25.711 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:25.711 11:25:32 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:43:25.968 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:43:25.968 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:43:25.968 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:43:25.968 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:25.968 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:25.968 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:43:25.968 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:25.968 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:25.968 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:25.969 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:43:26.226 11:25:33 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:43:26.226 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:43:26.226 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:43:26.226 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:26.226 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:26.226 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:43:26.226 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:26.226 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:26.226 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:26.226 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:43:26.483 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:43:26.483 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:43:26.483 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:43:26.483 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:26.483 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:26.483 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:43:26.483 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:26.483 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:26.483 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:43:26.483 11:25:33 
blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:26.483 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:43:26.741 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:43:26.741 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:43:26.741 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:43:26.741 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:43:26.741 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:43:26.741 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:43:26.741 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:43:26.741 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:43:26.741 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:43:26.741 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:43:26.741 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:43:26.741 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:43:26.741 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11' 00:43:26.741 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:26.741 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11') 00:43:26.741 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:43:26.741 11:25:33 blockdev_crypto_qat.bdev_nbd -- 
bdev/nbd_common.sh@133 -- # local mkfs_ret 00:43:26.741 11:25:33 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:43:26.998 malloc_lvol_verify 00:43:26.998 11:25:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:43:27.256 d82f9515-68e6-40f7-b2a0-12679cd6a130 00:43:27.256 11:25:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:43:27.514 da321c56-b24a-4cbd-8916-ce5ffff85a46 00:43:27.514 11:25:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:43:27.773 /dev/nbd0 00:43:27.773 11:25:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:43:27.773 mke2fs 1.46.5 (30-Dec-2021) 00:43:27.773 Discarding device blocks: 0/4096 done 00:43:27.773 Creating filesystem with 4096 1k blocks and 1024 inodes 00:43:27.773 00:43:27.773 Allocating group tables: 0/1 done 00:43:27.773 Writing inode tables: 0/1 done 00:43:27.773 Creating journal (1024 blocks): done 00:43:27.773 Writing superblocks and filesystem accounting information: 0/1 done 00:43:27.773 00:43:27.773 11:25:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:43:27.773 11:25:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:43:27.773 11:25:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:27.773 11:25:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0') 00:43:27.773 11:25:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:43:27.773 11:25:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:43:27.773 11:25:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:27.773 11:25:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:43:28.031 11:25:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:43:28.031 11:25:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:43:28.031 11:25:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:43:28.031 11:25:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:28.031 11:25:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:28.031 11:25:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:28.031 11:25:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:28.031 11:25:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:28.031 11:25:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:43:28.031 11:25:34 blockdev_crypto_qat.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:43:28.031 11:25:34 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 3862240 00:43:28.031 11:25:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 3862240 ']' 00:43:28.031 11:25:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 3862240 00:43:28.031 11:25:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:43:28.031 11:25:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
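The teardown records around here exercise SPDK's `killprocess` helper from `autotest_common.sh`: it checks that a pid was given and is still alive, reports the process name via `ps`, sends the signal, and waits for the process to be reaped. A minimal sketch of that pattern follows; `kill_and_wait` is an illustrative name, not SPDK's actual helper, and the `ps -o comm=` form is assumed to be available (the log itself uses GNU `ps --no-headers -o comm=`).

```shell
#!/bin/sh
# Sketch of the killprocess idiom seen in the log: validate the pid,
# report which process is being killed, signal it, then reap it.
kill_and_wait() {
    pid="$1"
    [ -n "$pid" ] || return 1               # no pid given
    kill -0 "$pid" 2>/dev/null || return 1  # process not running
    name=$(ps -o comm= -p "$pid")           # process name, for the log line
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null                 # reap child; ignore exit status
    return 0
}

sleep 30 &
bgpid=$!
kill_and_wait "$bgpid"
```

`wait` only reaps children of the current shell, which is why the real helper is invoked from the same shell that launched the daemon under test.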
00:43:28.031 11:25:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3862240 00:43:28.031 11:25:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:43:28.031 11:25:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:43:28.031 11:25:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3862240' 00:43:28.031 killing process with pid 3862240 00:43:28.031 11:25:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@969 -- # kill 3862240 00:43:28.031 11:25:34 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@974 -- # wait 3862240 00:43:30.637 11:25:37 blockdev_crypto_qat.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:43:30.637 00:43:30.637 real 0m13.578s 00:43:30.637 user 0m16.399s 00:43:30.637 sys 0m3.899s 00:43:30.637 11:25:37 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:30.637 11:25:37 blockdev_crypto_qat.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:43:30.637 ************************************ 00:43:30.637 END TEST bdev_nbd 00:43:30.637 ************************************ 00:43:30.637 11:25:37 blockdev_crypto_qat -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:43:30.637 11:25:37 blockdev_crypto_qat -- bdev/blockdev.sh@763 -- # '[' crypto_qat = nvme ']' 00:43:30.637 11:25:37 blockdev_crypto_qat -- bdev/blockdev.sh@763 -- # '[' crypto_qat = gpt ']' 00:43:30.637 11:25:37 blockdev_crypto_qat -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:43:30.637 11:25:37 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:43:30.637 11:25:37 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:30.637 11:25:37 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:43:30.637 ************************************ 00:43:30.637 START TEST 
bdev_fio 00:43:30.637 ************************************ 00:43:30.637 11:25:37 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:43:30.637 11:25:37 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:43:30.637 11:25:37 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:43:30.637 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:43:30.637 11:25:37 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:43:30.637 11:25:37 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:43:30.637 11:25:37 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio -- 
common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram]' 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram1]' 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram1 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram2]' 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio -- 
bdev/blockdev.sh@342 -- # echo filename=crypto_ram2 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_crypto_ram3]' 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=crypto_ram3 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:43:30.896 ************************************ 00:43:30.896 START TEST bdev_fio_rw_verify 00:43:30.896 ************************************ 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:43:30.896 
11:25:37 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:30.896 11:25:37 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:43:31.462 job_crypto_ram: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:43:31.462 job_crypto_ram1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:43:31.462 job_crypto_ram2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:43:31.462 job_crypto_ram3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:43:31.462 fio-3.35 00:43:31.462 Starting 4 threads 00:43:31.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:31.462 EAL: Requested device 0000:3d:01.0 cannot be used 00:43:31.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:31.462 EAL: Requested device 0000:3d:01.1 cannot be used 00:43:31.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:43:31.462 EAL: Requested device 0000:3d:01.2 cannot be used 00:43:31.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:31.462 EAL: Requested device 0000:3d:01.3 cannot be used 00:43:31.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:31.462 EAL: Requested device 0000:3d:01.4 cannot be used 00:43:31.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:31.462 EAL: Requested device 0000:3d:01.5 cannot be used 00:43:31.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:31.462 EAL: Requested device 0000:3d:01.6 cannot be used 00:43:31.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:31.462 EAL: Requested device 0000:3d:01.7 cannot be used 00:43:31.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:31.462 EAL: Requested device 0000:3d:02.0 cannot be used 00:43:31.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:31.462 EAL: Requested device 0000:3d:02.1 cannot be used 00:43:31.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:31.462 EAL: Requested device 0000:3d:02.2 cannot be used 00:43:31.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:31.462 EAL: Requested device 0000:3d:02.3 cannot be used 00:43:31.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:31.462 EAL: Requested device 0000:3d:02.4 cannot be used 00:43:31.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:31.462 EAL: Requested device 0000:3d:02.5 cannot be used 00:43:31.462 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:31.463 EAL: Requested device 0000:3d:02.6 cannot be used 00:43:31.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:31.463 EAL: Requested device 0000:3d:02.7 cannot be used 00:43:31.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:31.463 EAL: 
Requested device 0000:3f:01.0 cannot be used 00:43:31.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:31.463 EAL: Requested device 0000:3f:01.1 cannot be used 00:43:31.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:31.463 EAL: Requested device 0000:3f:01.2 cannot be used 00:43:31.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:31.463 EAL: Requested device 0000:3f:01.3 cannot be used 00:43:31.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:31.463 EAL: Requested device 0000:3f:01.4 cannot be used 00:43:31.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:31.463 EAL: Requested device 0000:3f:01.5 cannot be used 00:43:31.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:31.463 EAL: Requested device 0000:3f:01.6 cannot be used 00:43:31.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:31.463 EAL: Requested device 0000:3f:01.7 cannot be used 00:43:31.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:31.463 EAL: Requested device 0000:3f:02.0 cannot be used 00:43:31.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:31.463 EAL: Requested device 0000:3f:02.1 cannot be used 00:43:31.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:31.463 EAL: Requested device 0000:3f:02.2 cannot be used 00:43:31.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:31.463 EAL: Requested device 0000:3f:02.3 cannot be used 00:43:31.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:31.463 EAL: Requested device 0000:3f:02.4 cannot be used 00:43:31.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:31.463 EAL: Requested device 0000:3f:02.5 cannot be used 00:43:31.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:31.463 EAL: Requested device 
0000:3f:02.6 cannot be used 00:43:31.463 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:31.463 EAL: Requested device 0000:3f:02.7 cannot be used 00:43:46.332 00:43:46.332 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=3865457: Thu Jul 25 11:25:52 2024 00:43:46.332 read: IOPS=25.8k, BW=101MiB/s (106MB/s)(1006MiB/10001msec) 00:43:46.332 slat (usec): min=13, max=416, avg=53.44, stdev=35.66 00:43:46.332 clat (usec): min=18, max=2110, avg=298.09, stdev=210.39 00:43:46.333 lat (usec): min=49, max=2358, avg=351.52, stdev=231.49 00:43:46.333 clat percentiles (usec): 00:43:46.333 | 50.000th=[ 231], 99.000th=[ 1029], 99.900th=[ 1172], 99.990th=[ 1336], 00:43:46.333 | 99.999th=[ 1942] 00:43:46.333 write: IOPS=28.2k, BW=110MiB/s (115MB/s)(1072MiB/9740msec); 0 zone resets 00:43:46.333 slat (usec): min=15, max=472, avg=64.04, stdev=35.19 00:43:46.333 clat (usec): min=21, max=1799, avg=335.06, stdev=217.78 00:43:46.333 lat (usec): min=53, max=2147, avg=399.10, stdev=238.06 00:43:46.333 clat percentiles (usec): 00:43:46.333 | 50.000th=[ 277], 99.000th=[ 1074], 99.900th=[ 1205], 99.990th=[ 1287], 00:43:46.333 | 99.999th=[ 1401] 00:43:46.333 bw ( KiB/s): min=95368, max=145896, per=98.04%, avg=110510.74, stdev=3141.84, samples=76 00:43:46.333 iops : min=23842, max=36474, avg=27627.68, stdev=785.46, samples=76 00:43:46.333 lat (usec) : 20=0.01%, 50=0.02%, 100=7.23%, 250=41.76%, 500=35.21% 00:43:46.333 lat (usec) : 750=9.83%, 1000=4.33% 00:43:46.333 lat (msec) : 2=1.62%, 4=0.01% 00:43:46.333 cpu : usr=99.30%, sys=0.25%, ctx=94, majf=0, minf=26454 00:43:46.333 IO depths : 1=3.0%, 2=27.7%, 4=55.4%, 8=13.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:46.333 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:46.333 complete : 0=0.0%, 4=87.8%, 8=12.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:46.333 issued rwts: total=257641,274478,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:46.333 latency : target=0, window=0, percentile=100.00%, depth=8 
00:43:46.333 00:43:46.333 Run status group 0 (all jobs): 00:43:46.333 READ: bw=101MiB/s (106MB/s), 101MiB/s-101MiB/s (106MB/s-106MB/s), io=1006MiB (1055MB), run=10001-10001msec 00:43:46.333 WRITE: bw=110MiB/s (115MB/s), 110MiB/s-110MiB/s (115MB/s-115MB/s), io=1072MiB (1124MB), run=9740-9740msec 00:43:48.231 ----------------------------------------------------- 00:43:48.231 Suppressions used: 00:43:48.231 count bytes template 00:43:48.231 4 47 /usr/src/fio/parse.c 00:43:48.231 553 53088 /usr/src/fio/iolog.c 00:43:48.231 1 8 libtcmalloc_minimal.so 00:43:48.231 1 904 libcrypto.so 00:43:48.231 ----------------------------------------------------- 00:43:48.231 00:43:48.231 00:43:48.231 real 0m17.186s 00:43:48.231 user 0m57.093s 00:43:48.231 sys 0m0.893s 00:43:48.231 11:25:55 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:43:48.231 11:25:55 blockdev_crypto_qat.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:43:48.231 ************************************ 00:43:48.231 END TEST bdev_fio_rw_verify 00:43:48.231 ************************************ 00:43:48.231 11:25:55 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:43:48.231 11:25:55 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:43:48.231 11:25:55 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:43:48.231 11:25:55 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:43:48.231 11:25:55 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:43:48.231 11:25:55 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:43:48.231 11:25:55 blockdev_crypto_qat.bdev_fio -- 
common/autotest_common.sh@1283 -- # local env_context= 00:43:48.231 11:25:55 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:43:48.231 11:25:55 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:43:48.231 11:25:55 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:43:48.231 11:25:55 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:43:48.231 11:25:55 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1299 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:43:48.231 11:25:55 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:43:48.231 11:25:55 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:43:48.231 11:25:55 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:43:48.231 11:25:55 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:43:48.231 11:25:55 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:43:48.232 11:25:55 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "537db023-938a-596a-9ab8-e2b72c6a288a"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "537db023-938a-596a-9ab8-e2b72c6a288a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' 
"get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "137976db-ab79-5571-b56a-9062baf2ea18"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "137976db-ab79-5571-b56a-9062baf2ea18",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "f10d28a0-dea7-57ab-a564-7b1b64fdea7d"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "f10d28a0-dea7-57ab-a564-7b1b64fdea7d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' 
' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "84f55873-803a-5aee-9d96-da8c7ca39489"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "84f55873-803a-5aee-9d96-da8c7ca39489",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:43:48.232 11:25:55 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n crypto_ram 00:43:48.232 crypto_ram1 
00:43:48.232 crypto_ram2 00:43:48.232 crypto_ram3 ]] 00:43:48.232 11:25:55 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:43:48.232 11:25:55 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "crypto_ram",' ' "aliases": [' ' "537db023-938a-596a-9ab8-e2b72c6a288a"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "537db023-938a-596a-9ab8-e2b72c6a288a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc0",' ' "name": "crypto_ram",' ' "key_name": "test_dek_qat_cbc"' ' }' ' }' '}' '{' ' "name": "crypto_ram1",' ' "aliases": [' ' "137976db-ab79-5571-b56a-9062baf2ea18"' ' ],' ' "product_name": "crypto",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "137976db-ab79-5571-b56a-9062baf2ea18",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' 
"write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc1",' ' "name": "crypto_ram1",' ' "key_name": "test_dek_qat_xts"' ' }' ' }' '}' '{' ' "name": "crypto_ram2",' ' "aliases": [' ' "f10d28a0-dea7-57ab-a564-7b1b64fdea7d"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "f10d28a0-dea7-57ab-a564-7b1b64fdea7d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc2",' ' "name": "crypto_ram2",' ' "key_name": "test_dek_qat_cbc2"' ' }' ' }' '}' '{' ' "name": "crypto_ram3",' ' "aliases": [' ' "84f55873-803a-5aee-9d96-da8c7ca39489"' ' ],' ' "product_name": "crypto",' ' "block_size": 4096,' ' "num_blocks": 8192,' ' "uuid": "84f55873-803a-5aee-9d96-da8c7ca39489",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "crypto": {' ' "base_bdev_name": "Malloc3",' ' "name": "crypto_ram3",' ' "key_name": "test_dek_qat_xts2"' ' }' ' }' '}' 00:43:48.232 11:25:55 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:43:48.232 11:25:55 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram]' 00:43:48.232 11:25:55 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram 00:43:48.232 11:25:55 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:43:48.232 11:25:55 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram1]' 00:43:48.232 11:25:55 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram1 00:43:48.232 11:25:55 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:43:48.232 11:25:55 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram2]' 00:43:48.232 11:25:55 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram2 00:43:48.232 11:25:55 
blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:43:48.232 11:25:55 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_crypto_ram3]' 00:43:48.232 11:25:55 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=crypto_ram3 00:43:48.232 11:25:55 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:43:48.232 11:25:55 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:43:48.232 11:25:55 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:43:48.232 11:25:55 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:43:48.232 ************************************ 00:43:48.232 START TEST bdev_fio_trim 00:43:48.232 ************************************ 00:43:48.232 11:25:55 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:43:48.232 11:25:55 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 
--verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:43:48.232 11:25:55 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:43:48.232 11:25:55 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:48.232 11:25:55 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:43:48.232 11:25:55 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:43:48.232 11:25:55 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:43:48.232 11:25:55 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:43:48.232 11:25:55 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:43:48.232 11:25:55 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:43:48.232 11:25:55 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:43:48.232 11:25:55 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:43:48.232 11:25:55 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:43:48.232 11:25:55 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:43:48.232 11:25:55 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1347 -- # break 00:43:48.232 11:25:55 
blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:48.232 11:25:55 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:43:48.834 job_crypto_ram: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:43:48.834 job_crypto_ram1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:43:48.834 job_crypto_ram2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:43:48.834 job_crypto_ram3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:43:48.834 fio-3.35 00:43:48.834 Starting 4 threads 00:43:48.834 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:48.834 EAL: Requested device 0000:3d:01.0 cannot be used 00:43:48.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:48.835 EAL: Requested device 0000:3d:01.1 cannot be used 00:43:48.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:48.835 EAL: Requested device 0000:3d:01.2 cannot be used 00:43:48.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:48.835 EAL: Requested device 0000:3d:01.3 cannot be used 00:43:48.835 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:43:48.835 EAL: Requested device 0000:3d:01.4 cannot be used 00:43:48.835 qat_pci_device_allocate(): Reached maximum number of QAT 
devices 00:43:48.835 EAL: Requested device 0000:3d:01.5 cannot be used 00:43:48.835 [... identical qat_pci_device_allocate() "Reached maximum number of QAT devices" / EAL "cannot be used" pairs repeated for the remaining QAT devices through 0000:3f:02.7 ...] 00:44:03.712 00:44:03.712 job_crypto_ram: (groupid=0, jobs=4): err= 0: pid=3868416: Thu Jul 25 11:26:09 2024 00:44:03.712 write: IOPS=38.1k, BW=149MiB/s (156MB/s)(1487MiB/10001msec); 0 zone resets 00:44:03.712 slat (usec): min=19, max=487, avg=61.34, stdev=30.66
00:44:03.712 clat (usec): min=23, max=1133, avg=221.36, stdev=120.23 00:44:03.712 lat (usec): min=72, max=1406, avg=282.71, stdev=135.16 00:44:03.712 clat percentiles (usec): 00:44:03.712 | 50.000th=[ 200], 99.000th=[ 586], 99.900th=[ 685], 99.990th=[ 766], 00:44:03.712 | 99.999th=[ 1106] 00:44:03.712 bw ( KiB/s): min=139712, max=204896, per=100.00%, avg=152907.79, stdev=4629.98, samples=76 00:44:03.712 iops : min=34928, max=51224, avg=38226.95, stdev=1157.49, samples=76 00:44:03.712 trim: IOPS=38.1k, BW=149MiB/s (156MB/s)(1487MiB/10001msec); 0 zone resets 00:44:03.712 slat (usec): min=6, max=415, avg=16.96, stdev= 6.53 00:44:03.712 clat (usec): min=72, max=1408, avg=282.91, stdev=135.17 00:44:03.712 lat (usec): min=79, max=1464, avg=299.87, stdev=136.77 00:44:03.712 clat percentiles (usec): 00:44:03.712 | 50.000th=[ 258], 99.000th=[ 693], 99.900th=[ 799], 99.990th=[ 898], 00:44:03.712 | 99.999th=[ 1336] 00:44:03.712 bw ( KiB/s): min=139712, max=204896, per=100.00%, avg=152907.79, stdev=4629.98, samples=76 00:44:03.712 iops : min=34928, max=51224, avg=38226.95, stdev=1157.49, samples=76 00:44:03.712 lat (usec) : 50=0.05%, 100=8.44%, 250=48.26%, 500=37.59%, 750=5.49% 00:44:03.712 lat (usec) : 1000=0.17% 00:44:03.712 lat (msec) : 2=0.01% 00:44:03.712 cpu : usr=99.53%, sys=0.06%, ctx=99, majf=0, minf=7682 00:44:03.712 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:03.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:03.712 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:03.712 issued rwts: total=0,380752,380752,0 short=0,0,0,0 dropped=0,0,0,0 00:44:03.712 latency : target=0, window=0, percentile=100.00%, depth=8 00:44:03.712 00:44:03.712 Run status group 0 (all jobs): 00:44:03.712 WRITE: bw=149MiB/s (156MB/s), 149MiB/s-149MiB/s (156MB/s-156MB/s), io=1487MiB (1560MB), run=10001-10001msec 00:44:03.712 TRIM: bw=149MiB/s (156MB/s), 149MiB/s-149MiB/s (156MB/s-156MB/s), 
io=1487MiB (1560MB), run=10001-10001msec 00:44:05.612 ----------------------------------------------------- 00:44:05.612 Suppressions used: 00:44:05.612 count bytes template 00:44:05.612 4 47 /usr/src/fio/parse.c 00:44:05.612 1 8 libtcmalloc_minimal.so 00:44:05.612 1 904 libcrypto.so 00:44:05.612 ----------------------------------------------------- 00:44:05.612 00:44:05.612 00:44:05.612 real 0m17.187s 00:44:05.612 user 0m57.126s 00:44:05.612 sys 0m0.905s 00:44:05.612 11:26:12 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:05.612 11:26:12 blockdev_crypto_qat.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:44:05.612 ************************************ 00:44:05.612 END TEST bdev_fio_trim 00:44:05.612 ************************************ 00:44:05.612 11:26:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:44:05.612 11:26:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:44:05.612 11:26:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:44:05.612 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:44:05.612 11:26:12 blockdev_crypto_qat.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:44:05.612 00:44:05.612 real 0m34.717s 00:44:05.612 user 1m54.401s 00:44:05.612 sys 0m1.980s 00:44:05.612 11:26:12 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:05.612 11:26:12 blockdev_crypto_qat.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:44:05.612 ************************************ 00:44:05.612 END TEST bdev_fio 00:44:05.612 ************************************ 00:44:05.612 11:26:12 blockdev_crypto_qat -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:44:05.612 11:26:12 blockdev_crypto_qat -- bdev/blockdev.sh@776 -- # run_test bdev_verify 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:44:05.612 11:26:12 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:44:05.612 11:26:12 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:05.612 11:26:12 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:44:05.612 ************************************ 00:44:05.612 START TEST bdev_verify 00:44:05.612 ************************************ 00:44:05.612 11:26:12 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:44:05.612 [2024-07-25 11:26:12.655664] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
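The bdev.fio job file consumed in the bdev_fio_trim test above is generated by the blockdev.sh@355–357 loop traced earlier: every bdev whose JSON reports `supported_io_types.unmap == true` gets a `[job_<name>]` section. A minimal sketch of that selection, assuming `jq` is available and using an inline sample in place of the real RPC output (the `Malloc0` entry below is a hypothetical stand-in, not from this run):

```shell
# Hypothetical stand-in for the bdev list normally fetched over SPDK's RPC;
# only bdevs advertising unmap support should receive a fio job section.
bdevs='{"name":"crypto_ram","supported_io_types":{"unmap":true}}
{"name":"Malloc0","supported_io_types":{"unmap":false}}'

# Mirror of the blockdev.sh@355-357 loop: filter on unmap support, then emit
# one [job_<name>] section per matching bdev, as seen in the trace above.
for b in $(printf '%s\n' "$bdevs" | jq -r 'select(.supported_io_types.unmap == true) | .name'); do
    echo "[job_$b]"
    echo "filename=$b"
done
```

Run against the real configuration this produces the four `[job_crypto_ram*]` sections echoed in the trace; here the non-unmap `Malloc0` bdev is filtered out.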
00:44:05.612 [2024-07-25 11:26:12.655780] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3870461 ] 00:44:05.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:05.870 EAL: Requested device 0000:3d:01.0 cannot be used 00:44:05.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:05.870 EAL: Requested device 0000:3d:01.1 cannot be used 00:44:05.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:05.870 EAL: Requested device 0000:3d:01.2 cannot be used 00:44:05.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:05.870 EAL: Requested device 0000:3d:01.3 cannot be used 00:44:05.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:05.870 EAL: Requested device 0000:3d:01.4 cannot be used 00:44:05.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:05.870 EAL: Requested device 0000:3d:01.5 cannot be used 00:44:05.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:05.870 EAL: Requested device 0000:3d:01.6 cannot be used 00:44:05.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:05.870 EAL: Requested device 0000:3d:01.7 cannot be used 00:44:05.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:05.870 EAL: Requested device 0000:3d:02.0 cannot be used 00:44:05.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:05.870 EAL: Requested device 0000:3d:02.1 cannot be used 00:44:05.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:05.870 EAL: Requested device 0000:3d:02.2 cannot be used 00:44:05.870 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:05.870 EAL: Requested device 0000:3d:02.3 cannot be used 
00:44:05.871 [... identical qat_pci_device_allocate() "Reached maximum number of QAT devices" / EAL "cannot be used" pairs repeated for the remaining QAT devices through 0000:3f:02.7 ...] 00:44:05.871 [2024-07-25 11:26:12.881560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:44:06.129 [2024-07-25 11:26:13.167710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:06.129 [2024-07-25 11:26:13.167717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:44:06.129 [2024-07-25 11:26:13.189550] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:44:06.129 [2024-07-25 11:26:13.197579] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:44:06.129 [2024-07-25 11:26:13.205584] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:44:06.694 [2024-07-25 11:26:13.581354] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:44:10.008 [2024-07-25 11:26:16.416399] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:44:10.008 [2024-07-25 11:26:16.416466] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:44:10.008
[2024-07-25 11:26:16.416488] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:44:10.008 [2024-07-25 11:26:16.424412] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:44:10.008 [2024-07-25 11:26:16.424448] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:44:10.008 [2024-07-25 11:26:16.424464] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:44:10.008 [2024-07-25 11:26:16.432452] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:44:10.008 [2024-07-25 11:26:16.432486] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:44:10.008 [2024-07-25 11:26:16.432501] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:44:10.008 [2024-07-25 11:26:16.440470] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:44:10.008 [2024-07-25 11:26:16.440502] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:44:10.008 [2024-07-25 11:26:16.440516] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:44:10.008 Running I/O for 5 seconds... 
00:44:15.266 00:44:15.266 Latency(us) 00:44:15.266 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:15.266 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:44:15.266 Verification LBA range: start 0x0 length 0x1000 00:44:15.266 crypto_ram : 5.06 455.70 1.78 0.00 0.00 280187.65 14155.78 185388.24 00:44:15.266 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:44:15.266 Verification LBA range: start 0x1000 length 0x1000 00:44:15.266 crypto_ram : 5.06 455.08 1.78 0.00 0.00 280280.54 14994.64 185388.24 00:44:15.266 Job: crypto_ram1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:44:15.266 Verification LBA range: start 0x0 length 0x1000 00:44:15.266 crypto_ram1 : 5.06 455.59 1.78 0.00 0.00 279400.54 11586.76 174483.05 00:44:15.266 Job: crypto_ram1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:44:15.266 Verification LBA range: start 0x1000 length 0x1000 00:44:15.266 crypto_ram1 : 5.06 454.98 1.78 0.00 0.00 279476.54 11744.05 173644.19 00:44:15.266 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:44:15.266 Verification LBA range: start 0x0 length 0x1000 00:44:15.266 crypto_ram2 : 5.05 3574.73 13.96 0.00 0.00 35520.16 4299.16 29150.41 00:44:15.266 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:44:15.266 Verification LBA range: start 0x1000 length 0x1000 00:44:15.266 crypto_ram2 : 5.05 3576.81 13.97 0.00 0.00 35481.55 8126.46 28940.70 00:44:15.266 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:44:15.266 Verification LBA range: start 0x0 length 0x1000 00:44:15.266 crypto_ram3 : 5.05 3573.65 13.96 0.00 0.00 35429.14 3853.52 28730.98 00:44:15.266 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:44:15.266 Verification LBA range: start 0x1000 length 0x1000 00:44:15.266 crypto_ram3 : 5.06 3594.31 14.04 0.00 0.00 35212.19 2254.44 
28940.70 00:44:15.266 =================================================================================================================== 00:44:15.266 Total : 16140.83 63.05 0.00 0.00 63037.83 2254.44 185388.24 00:44:17.793 00:44:17.793 real 0m11.994s 00:44:17.793 user 0m22.121s 00:44:17.793 sys 0m0.542s 00:44:17.793 11:26:24 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:17.793 11:26:24 blockdev_crypto_qat.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:44:17.793 ************************************ 00:44:17.793 END TEST bdev_verify 00:44:17.793 ************************************ 00:44:17.793 11:26:24 blockdev_crypto_qat -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:44:17.793 11:26:24 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:44:17.793 11:26:24 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:17.793 11:26:24 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:44:17.793 ************************************ 00:44:17.793 START TEST bdev_verify_big_io 00:44:17.793 ************************************ 00:44:17.793 11:26:24 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:44:17.793 [2024-07-25 11:26:24.827826] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
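The `LD_PRELOAD` line in the bdev_fio_trim trace above comes from the sanitizer probe in autotest_common.sh@1339–1352: `ldd` is run against the fio bdev plugin, and if it links `libasan`, that library is preloaded ahead of the plugin so fio starts under the same ASan runtime as the plugin. A rough sketch of the extraction step, using a canned `ldd` line in place of a real binary (the paths below are illustrative, not from this run):

```shell
# Canned stand-in for one line of `ldd /path/to/spdk_bdev` output; the real
# probe runs ldd on the plugin binary itself.
ldd_line='        libasan.so.8 => /usr/lib64/libasan.so.8 (0x00007f0000000000)'

# Same grep/awk extraction as the traced script: field 3 of an ldd line is
# the resolved library path ("name => path (addr)").
asan_lib=$(printf '%s\n' "$ldd_line" | grep libasan | awk '{print $3}')

if [ -n "$asan_lib" ]; then
    # The script would then launch fio roughly as:
    #   LD_PRELOAD="$asan_lib /path/to/spdk_bdev" /usr/src/fio/fio ...
    echo "preload: $asan_lib"
fi
```

When the plugin is built without ASan, `asan_lib` stays empty and fio is launched with only the plugin in `LD_PRELOAD`.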
00:44:17.793 [2024-07-25 11:26:24.828082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3872478 ] 00:44:18.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.051 EAL: Requested device 0000:3d:01.0 cannot be used 00:44:18.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.051 EAL: Requested device 0000:3d:01.1 cannot be used 00:44:18.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.051 EAL: Requested device 0000:3d:01.2 cannot be used 00:44:18.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.051 EAL: Requested device 0000:3d:01.3 cannot be used 00:44:18.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.051 EAL: Requested device 0000:3d:01.4 cannot be used 00:44:18.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.051 EAL: Requested device 0000:3d:01.5 cannot be used 00:44:18.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.051 EAL: Requested device 0000:3d:01.6 cannot be used 00:44:18.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.051 EAL: Requested device 0000:3d:01.7 cannot be used 00:44:18.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.051 EAL: Requested device 0000:3d:02.0 cannot be used 00:44:18.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.051 EAL: Requested device 0000:3d:02.1 cannot be used 00:44:18.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.051 EAL: Requested device 0000:3d:02.2 cannot be used 00:44:18.051 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:18.051 EAL: Requested device 0000:3d:02.3 cannot be used 
00:44:18.051 [... identical qat_pci_device_allocate() "Reached maximum number of QAT devices" / EAL "cannot be used" pairs repeated for the remaining QAT devices through 0000:3f:02.7 ...] 00:44:18.310 [2024-07-25 11:26:25.202859] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:44:18.568 [2024-07-25 11:26:25.489275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:18.568 [2024-07-25 11:26:25.489279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:44:18.568 [2024-07-25 11:26:25.511103] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat 00:44:18.568 [2024-07-25 11:26:25.519131] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:44:18.568 [2024-07-25 11:26:25.527146] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:44:18.825 [2024-07-25 11:26:25.904460] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:44:22.104 [2024-07-25 11:26:28.768721] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:44:22.104 [2024-07-25 11:26:28.768791] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:44:22.104
[2024-07-25 11:26:28.768813] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:44:22.104 [2024-07-25 11:26:28.776736] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:44:22.104 [2024-07-25 11:26:28.776771] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:44:22.104 [2024-07-25 11:26:28.776787] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:44:22.104 [2024-07-25 11:26:28.784780] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:44:22.104 [2024-07-25 11:26:28.784811] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:44:22.104 [2024-07-25 11:26:28.784826] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:44:22.104 [2024-07-25 11:26:28.792782] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:44:22.104 [2024-07-25 11:26:28.792813] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:44:22.104 [2024-07-25 11:26:28.792827] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:44:22.104 Running I/O for 5 seconds... 00:44:23.039 [2024-07-25 11:26:29.957550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.958251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.958332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.958383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.039 [... accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: "Failed to get src_mbufs!" repeated many times during the run ...]
00:44:23.039 [2024-07-25 11:26:29.976997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.977042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.977087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.977500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.977521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.980829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.980887] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.980936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.980981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.981419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.981469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.981517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.981561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.039 [2024-07-25 11:26:29.981993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.982014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.985148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.985210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.985256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.985302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.985770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.985827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.985873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.985921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.986415] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.986441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.989595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.039 [2024-07-25 11:26:29.989653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.989697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.989742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.990224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.990275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.990320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.990366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.990801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.990822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.993963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.994022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.994067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.994112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.039 [2024-07-25 11:26:29.994598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.994648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.994694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.994740] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.995102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.995123] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.998368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.998426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.039 [2024-07-25 11:26:29.998471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:29.998521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:29.998979] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:29.999043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:29.999102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.040 [2024-07-25 11:26:29.999169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:29.999637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:29.999658] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.002807] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.002866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.002912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.002961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.003408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.003460] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.003505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.003567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.003935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.003955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.040 [2024-07-25 11:26:30.007233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.007306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.007363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.007411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.007848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.007898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.007948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.007994] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.008421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.008442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.011541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.011599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.011645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.040 [2024-07-25 11:26:30.011706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.012125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.012183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.012242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.012290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.012700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.012728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.015800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.015861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.015908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.015952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.016441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.016492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.040 [2024-07-25 11:26:30.016541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.016586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.017016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.017039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.020097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.020165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.020211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.020256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.020750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.020802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.020848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.020894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.021323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.040 [2024-07-25 11:26:30.021345] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.024480] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.024540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.024587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.024633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.025091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.025148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.025194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.025243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.025649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.025673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.028640] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.028700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.040 [2024-07-25 11:26:30.028746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.028791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.029280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.029331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.029380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.029425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.029863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.029884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.032943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.033002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.033048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.033095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.033564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.040 [2024-07-25 11:26:30.033615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.033660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.033705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.034147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.034170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.036974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.037033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.037082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.037128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.040 [2024-07-25 11:26:30.037615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.041 [2024-07-25 11:26:30.037665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.041 [2024-07-25 11:26:30.037710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.041 [2024-07-25 11:26:30.037756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.041 [2024-07-25 11:26:30.038187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.041 [2024-07-25 11:26:30.038210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.041 [2024-07-25 11:26:30.041132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.041 [2024-07-25 11:26:30.041196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.041 [2024-07-25 11:26:30.041242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.041 [2024-07-25 11:26:30.041289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.041 [2024-07-25 11:26:30.041772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.041 [2024-07-25 11:26:30.041823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.041 [2024-07-25 11:26:30.041869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.041 [2024-07-25 11:26:30.041919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.041 [2024-07-25 11:26:30.042328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.041 [2024-07-25 11:26:30.042351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.041 [2024-07-25 11:26:30.045222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.041 [2024-07-25 11:26:30.045288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.041 [2024-07-25 11:26:30.045334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.041 [2024-07-25 11:26:30.045379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.041 [2024-07-25 11:26:30.045856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.041 [2024-07-25 11:26:30.045906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.041 [2024-07-25 11:26:30.045977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.041 [2024-07-25 11:26:30.046022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.041 [2024-07-25 11:26:30.046413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.041 [2024-07-25 11:26:30.046434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.041 [2024-07-25 11:26:30.049346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.041 [2024-07-25 11:26:30.049405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.041 [2024-07-25 11:26:30.049452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.041 [2024-07-25 11:26:30.049497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.041 [2024-07-25 11:26:30.049937] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:23.304 [2024-07-25 11:26:30.280679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:23.304 (last message repeated for every allocation attempt between 11:26:30.049937 and 11:26:30.280679; identical source location and error text throughout)
00:44:23.304 [2024-07-25 11:26:30.281072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.281485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.281506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.285122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.285892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.287560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.288875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.290724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.291390] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.291791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.292192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.292604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.292624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.304 [2024-07-25 11:26:30.296320] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.297165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.298403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.299907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.301522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.301923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.302321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.302716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.303160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.303183] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.305701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.307110] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.308676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.304 [2024-07-25 11:26:30.310190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.310906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.311317] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.311712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.312111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.312572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.312595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.315929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.317275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.318786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.320326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.321159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.321558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.304 [2024-07-25 11:26:30.321969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.322369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.322694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.322714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.325856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.327348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.328841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.329670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.330540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.331099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.331502] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.331896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.332225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.304 [2024-07-25 11:26:30.332247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.335326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.336839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.338336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.338736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.339574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.339973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.340373] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.341717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.342062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.342082] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.345515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.347029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.304 [2024-07-25 11:26:30.348024] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.348428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.349264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.349662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.350404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.351645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.351969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.351988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.355570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.357114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.357518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.357914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.358712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.304 [2024-07-25 11:26:30.359112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.304 [2024-07-25 11:26:30.360698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.362359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.362684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.362704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.366124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.366647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.367043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.367441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.368305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.369608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.370811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.372322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.305 [2024-07-25 11:26:30.372644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.372664] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.375668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.376073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.376473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.376876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.378144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.379366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.380871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.382381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.382755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.382776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.384907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.305 [2024-07-25 11:26:30.385321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.385726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.386124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.388162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.389570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.391079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.392690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.393180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.393202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.395405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.395810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.396211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.396605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.305 [2024-07-25 11:26:30.398164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.399590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.400570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.401759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.402122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.402146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.404621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.405026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.405432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.405832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.406692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.407090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.407490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.305 [2024-07-25 11:26:30.407888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.408329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.408351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.411117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.411533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.411931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.411975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.412824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.413231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.413630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.414040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.414475] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.414496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.305 [2024-07-25 11:26:30.417311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.417733] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.418131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.418530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.418581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.419005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.305 [2024-07-25 11:26:30.419419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.566 [2024-07-25 11:26:30.419818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.566 [2024-07-25 11:26:30.420238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.566 [2024-07-25 11:26:30.420651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.566 [2024-07-25 11:26:30.421091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.566 [2024-07-25 11:26:30.421112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.566 [2024-07-25 11:26:30.423433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.566 [2024-07-25 11:26:30.423504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.566 [2024-07-25 11:26:30.423561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.566 [2024-07-25 11:26:30.423606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.566 [2024-07-25 11:26:30.424013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.566 [2024-07-25 11:26:30.424071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.566 [2024-07-25 11:26:30.424117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.566 [2024-07-25 11:26:30.424175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.566 [2024-07-25 11:26:30.424220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.566 [2024-07-25 11:26:30.424663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.566 [2024-07-25 11:26:30.424683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.566 [2024-07-25 11:26:30.426934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.566 [2024-07-25 11:26:30.426993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.566 [2024-07-25 11:26:30.427038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.566 [2024-07-25 11:26:30.427093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:23.570 [2024-07-25 11:26:30.514802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:23.570 [2024-07-25 11:26:30.514859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.514904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.514948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.515265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.515330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.515375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.515425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.515474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.515781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.515801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.517598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.517656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.517716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.570 [2024-07-25 11:26:30.517762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.518171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.518232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.518278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.518323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.518368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.518787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.518808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.520939] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.520996] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.521050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.521099] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.521419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.570 [2024-07-25 11:26:30.521487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.521533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.521578] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.521622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.521987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.522006] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.523760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.523818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.523864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.523909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.524372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.524432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.524479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.570 [2024-07-25 11:26:30.524525] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.524569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.525004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.525025] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.527242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.527298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.528827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.528878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.529224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.529290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.529336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.529385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.529429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.570 [2024-07-25 11:26:30.529737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.529757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.531706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.531763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.531814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.532217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.532646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.532708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.532754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.532799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.532844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.533291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.533312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.570 [2024-07-25 11:26:30.536248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.537479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.570 [2024-07-25 11:26:30.538997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.540496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.540892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.541324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.541722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.542117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.542519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.542846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.542866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.545933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.547452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.571 [2024-07-25 11:26:30.548961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.550055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.550491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.550903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.551304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.551698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.552615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.552950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.552974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.556235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.557852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.559571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.559968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.571 [2024-07-25 11:26:30.560420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.560828] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.561241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.561633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.563155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.563472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.563492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.566764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.568263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.568954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.569355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.569788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.570198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.571 [2024-07-25 11:26:30.570593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.571884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.573094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.573417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.573438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.576781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.577997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.578401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.578798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.579219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.579624] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.580445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.581684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.571 [2024-07-25 11:26:30.583198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.583515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.583535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.586909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.587336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.587731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.588125] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.588596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.589005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.590520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.592238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.593837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.594158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.571 [2024-07-25 11:26:30.594180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.596698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.597104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.597504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.597915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.598352] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.599673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.600895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.602402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.603897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.604315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.604337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.571 [2024-07-25 11:26:30.606479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.572 [2024-07-25 11:26:30.606890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.572 [2024-07-25 11:26:30.607291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.572 [2024-07-25 11:26:30.607687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.572 [2024-07-25 11:26:30.608069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.572 [2024-07-25 11:26:30.609307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.572 [2024-07-25 11:26:30.610814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.572 [2024-07-25 11:26:30.612326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.572 [2024-07-25 11:26:30.613396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.572 [2024-07-25 11:26:30.613716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.572 [2024-07-25 11:26:30.613737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.572 [2024-07-25 11:26:30.615995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.572 [2024-07-25 11:26:30.616407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.572 [2024-07-25 11:26:30.616802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.572 [2024-07-25 11:26:30.617299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.572 [2024-07-25 11:26:30.617617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.572 [2024-07-25 11:26:30.619222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.572 [2024-07-25 11:26:30.620884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.572 [2024-07-25 11:26:30.622535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.572 [2024-07-25 11:26:30.623439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.572 [2024-07-25 11:26:30.623779] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.572 [2024-07-25 11:26:30.623800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.572 [2024-07-25 11:26:30.626156] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.572 [2024-07-25 11:26:30.626561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.572 [2024-07-25 11:26:30.626955] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.572 [2024-07-25 11:26:30.628442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.572 [2024-07-25 11:26:30.628794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.572 [2024-07-25 11:26:30.630348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:23.835 [... identical "Failed to get src_mbufs!" errors from accel_dpdk_cryptodev.c:468 repeated continuously between 11:26:30.630348 and 11:26:30.867551; duplicate lines omitted ...]
00:44:23.835 [2024-07-25 11:26:30.867551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:23.835 [2024-07-25 11:26:30.867596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.835 [2024-07-25 11:26:30.868030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.835 [2024-07-25 11:26:30.868088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.835 [2024-07-25 11:26:30.868135] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.835 [2024-07-25 11:26:30.868187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.835 [2024-07-25 11:26:30.868232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.835 [2024-07-25 11:26:30.868656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.835 [2024-07-25 11:26:30.868677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.835 [2024-07-25 11:26:30.871005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.835 [2024-07-25 11:26:30.871073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.835 [2024-07-25 11:26:30.871120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.835 [2024-07-25 11:26:30.871176] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.835 [2024-07-25 11:26:30.871632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.835 [2024-07-25 11:26:30.871690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.835 [2024-07-25 11:26:30.871736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.835 [2024-07-25 11:26:30.871781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.871826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.872232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.872254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.874538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.874595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.874639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.874685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.875127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.875196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.875243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.836 [2024-07-25 11:26:30.875289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.875357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.875787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.875809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.878113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.878180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.878227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.878273] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.878677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.878761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.878833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.878891] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.878936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.836 [2024-07-25 11:26:30.879330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.879351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.881666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.881728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.881774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.881819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.882235] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.882304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.882350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.882407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.882451] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.882842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.882862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.836 [2024-07-25 11:26:30.885409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.885466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.885533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.885592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.885995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.886088] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.886157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.886203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.886248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.886666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.886686] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.888945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.889002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.836 [2024-07-25 11:26:30.889049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.889094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.889487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.889561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.889613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.889659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.889704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.890161] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.890184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.892516] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.892593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.892652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.892697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.836 [2024-07-25 11:26:30.893098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.893165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.893211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.893256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.893302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.893738] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.893762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.895980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.896040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.896100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.896153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.896612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.896672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.836 [2024-07-25 11:26:30.896718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.896763] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.896808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.897222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.897244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.899552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.899610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.899659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.899705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.900128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.900193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.900239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.900289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.836 [2024-07-25 11:26:30.900334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.900768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.836 [2024-07-25 11:26:30.900789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.903101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.903168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.903214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.903259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.903675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.903739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.903785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.903830] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.903874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.904305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.837 [2024-07-25 11:26:30.904327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.906609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.906668] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.906715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.906760] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.907205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.907265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.907312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.907357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.907402] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.907833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.907855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.910294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.837 [2024-07-25 11:26:30.910351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.910396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.910440] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.910871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.910927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.910973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.911019] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.911064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.911437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.911459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.913768] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.913827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.913877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.837 [2024-07-25 11:26:30.913923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.914375] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.914435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.914482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.914540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.914585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.915014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.915034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.917384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.917441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.917487] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.917533] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.917916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.837 [2024-07-25 11:26:30.917989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.918036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.918087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.918131] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.918470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.918490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.920855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.920925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.920973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.921018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.921463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.921523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:23.837 [2024-07-25 11:26:30.921570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:23.837 [2024-07-25 11:26:30.921615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.100 (last message repeated continuously through [2024-07-25 11:26:31.081821]) 
00:44:24.100 [2024-07-25 11:26:31.082239] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.083282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.084501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.086012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.086338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.086358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.089821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.090238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.090633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.091030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.091490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.091933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.093349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.100 [2024-07-25 11:26:31.094938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.096478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.096798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.096818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.099179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.099587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.099981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.100394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.100845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.102231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.103464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.104971] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.106483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.100 [2024-07-25 11:26:31.106896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.106916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.109055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.109467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.109861] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.110261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.110628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.111866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.113365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.114880] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.115857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.116185] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.116206] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.100 [2024-07-25 11:26:31.118488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.118893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.119297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.119872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.120203] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.121924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.123563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.125083] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.126090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.126467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.126488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.128810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.129242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.100 [2024-07-25 11:26:31.129637] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.131293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.131625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.133164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.134662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.135196] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.136567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.136889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.136909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.139360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.139769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.140831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.142065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.100 [2024-07-25 11:26:31.142395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.143957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.144952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.146521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.147862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.148190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.148211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.150673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.151225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.152531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.154035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.154361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.155963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.100 [2024-07-25 11:26:31.156970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.158195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.159694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.160017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.160037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.162722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.164278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.165613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.167119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.167447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.168023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.100 [2024-07-25 11:26:31.169447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.101 [2024-07-25 11:26:31.171012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.101 [2024-07-25 11:26:31.172522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.101 [2024-07-25 11:26:31.172843] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.101 [2024-07-25 11:26:31.172863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.101 [2024-07-25 11:26:31.176100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.101 [2024-07-25 11:26:31.177333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.101 [2024-07-25 11:26:31.178838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.101 [2024-07-25 11:26:31.180313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.101 [2024-07-25 11:26:31.180667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.101 [2024-07-25 11:26:31.182068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.101 [2024-07-25 11:26:31.183308] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.101 [2024-07-25 11:26:31.184801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.101 [2024-07-25 11:26:31.186312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.101 [2024-07-25 11:26:31.186735] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.101 [2024-07-25 11:26:31.186755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.101 [2024-07-25 11:26:31.190646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.101 [2024-07-25 11:26:31.192299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.101 [2024-07-25 11:26:31.193831] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.101 [2024-07-25 11:26:31.195258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.101 [2024-07-25 11:26:31.195619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.101 [2024-07-25 11:26:31.196873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.101 [2024-07-25 11:26:31.198389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.101 [2024-07-25 11:26:31.199892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.101 [2024-07-25 11:26:31.200833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.101 [2024-07-25 11:26:31.201297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.101 [2024-07-25 11:26:31.201319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.101 [2024-07-25 11:26:31.204791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.101 [2024-07-25 11:26:31.206291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.101 [2024-07-25 11:26:31.207810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.101 [2024-07-25 11:26:31.208452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.101 [2024-07-25 11:26:31.208829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.101 [2024-07-25 11:26:31.210471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.101 [2024-07-25 11:26:31.211956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.101 [2024-07-25 11:26:31.213414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.101 [2024-07-25 11:26:31.213812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.101 [2024-07-25 11:26:31.214252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.101 [2024-07-25 11:26:31.214275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.362 [2024-07-25 11:26:31.218171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.362 [2024-07-25 11:26:31.219864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.362 [2024-07-25 11:26:31.220745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.362 [2024-07-25 11:26:31.221981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.362 [2024-07-25 11:26:31.222310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.362 [2024-07-25 11:26:31.223881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.362 [2024-07-25 11:26:31.224284] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.362 [2024-07-25 11:26:31.224679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.362 [2024-07-25 11:26:31.225071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.362 [2024-07-25 11:26:31.225507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.362 [2024-07-25 11:26:31.225529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.362 [2024-07-25 11:26:31.228461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.362 [2024-07-25 11:26:31.229628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.362 [2024-07-25 11:26:31.230864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.362 [2024-07-25 11:26:31.232399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.362 [2024-07-25 11:26:31.232723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.362 [2024-07-25 11:26:31.233682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.362 [2024-07-25 11:26:31.234084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.362 [2024-07-25 11:26:31.234490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.362 [2024-07-25 11:26:31.234892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.362 [2024-07-25 11:26:31.235335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.362 [2024-07-25 11:26:31.235357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.362 [2024-07-25 11:26:31.238022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.362 [2024-07-25 11:26:31.238438] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.362 [2024-07-25 11:26:31.238837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.362 [2024-07-25 11:26:31.239247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.362 [2024-07-25 11:26:31.239643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.362 [2024-07-25 11:26:31.240054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.362 [2024-07-25 11:26:31.240462] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.362 [2024-07-25 11:26:31.240860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.362 [2024-07-25 11:26:31.241264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.362 [2024-07-25 11:26:31.241651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.362 [2024-07-25 11:26:31.241671] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.362 [2024-07-25 11:26:31.244357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.362 [2024-07-25 11:26:31.244769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.362 [2024-07-25 11:26:31.245179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.362 [2024-07-25 11:26:31.245595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.363 [2024-07-25 11:26:31.246037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.363 [2024-07-25 11:26:31.246455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.363 [2024-07-25 11:26:31.246855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.363 [2024-07-25 11:26:31.247260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.363 [2024-07-25 11:26:31.247661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.363 [2024-07-25 11:26:31.248076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:24.366 [identical "Failed to get src_mbufs!" errors repeated through 2024-07-25 11:26:31.351202; duplicates omitted]
00:44:24.366 [2024-07-25 11:26:31.351247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.351291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.351708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.351729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.353895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.353952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.354001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.354046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.354366] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.354430] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.354476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.354520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.354566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.366 [2024-07-25 11:26:31.354876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.354899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.356789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.356845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.356896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.356943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.357340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.357413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.357461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.357505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.357550] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.357965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.357986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.366 [2024-07-25 11:26:31.360116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.360179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.360224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.360269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.360582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.360644] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.360696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.360745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.360789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.361097] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.361117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.362921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.362980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.366 [2024-07-25 11:26:31.363026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.363071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.363470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.363538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.363586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.363636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.363681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.364105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.364143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.366231] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.366288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.366347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.366392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.366 [2024-07-25 11:26:31.366702] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.366766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.366812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.366856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.366900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.367322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.367344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.369190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.369248] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.369292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.369350] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.369780] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.369852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.366 [2024-07-25 11:26:31.369902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.369947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.369992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.370425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.370446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.372602] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.372661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.372705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.372750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.373061] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.373129] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.373182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.373226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.366 [2024-07-25 11:26:31.373270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.373653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.373672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.375534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.366 [2024-07-25 11:26:31.375603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.375661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.375708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.376149] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.376207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.376254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.376299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.376344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.376725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.367 [2024-07-25 11:26:31.376746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.378730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.378787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.378836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.378881] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.379197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.379260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.379306] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.379349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.379400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.379770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.379790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.381695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.367 [2024-07-25 11:26:31.381764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.381813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.381857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.382322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.382385] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.382432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.382477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.382522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.382893] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.382913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.384925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.384983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.385027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.367 [2024-07-25 11:26:31.385072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.385389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.385454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.385507] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.385553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.385599] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.385959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.385978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.387912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.387969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.388370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.388420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.388805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.367 [2024-07-25 11:26:31.388873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.388919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.388964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.389009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.389444] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.389468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.391307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.391370] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.391416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.392579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.392920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.392983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.393030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.367 [2024-07-25 11:26:31.393075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.393119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.393435] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.393455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.395925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.396492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.397765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.399280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.399597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.401197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.402269] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.403508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.405023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.367 [2024-07-25 11:26:31.405344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.367 [2024-07-25 11:26:31.405365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.368 [2024-07-25 11:26:31.408009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.368 [2024-07-25 11:26:31.409608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.368 [2024-07-25 11:26:31.410993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.368 [2024-07-25 11:26:31.412493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.368 [2024-07-25 11:26:31.412810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.368 [2024-07-25 11:26:31.413412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.368 [2024-07-25 11:26:31.414804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.368 [2024-07-25 11:26:31.416347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.368 [2024-07-25 11:26:31.417853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.368 [2024-07-25 11:26:31.418171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.368 [2024-07-25 11:26:31.418193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.368 [2024-07-25 11:26:31.421396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [... identical *ERROR* line repeated through 2024-07-25 11:26:31.666850; repeats elided ...] 
00:44:24.632 [2024-07-25 11:26:31.667326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.667348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.670048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.670471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.670870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.671275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.671719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.672128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.672534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.672934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.673340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.673752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.673773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.632 [2024-07-25 11:26:31.677431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.677841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.678281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.679681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.680119] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.680575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.680978] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.681386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.683100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.683552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.683574] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.686262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.686672] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.632 [2024-07-25 11:26:31.688216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.688619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.689077] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.690168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.690913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.691316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.691714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.692154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.692177] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.695237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.695645] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.695697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.696091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.632 [2024-07-25 11:26:31.696528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.697164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.698360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.698759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.699243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.699558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.699580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.703289] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.703698] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.704144] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.704198] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.704518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.704931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.632 [2024-07-25 11:26:31.705334] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.705736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.706152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.706471] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.706492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.708840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.708899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.708944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.708990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.709427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.709497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.709555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.709600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.632 [2024-07-25 11:26:31.709646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.710011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.710031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.712159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.712227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.712286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.712333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.712766] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.712824] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.712872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.712920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.712966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.713393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.632 [2024-07-25 11:26:31.713414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.715653] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.715714] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.715759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.632 [2024-07-25 11:26:31.715805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.716113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.716181] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.716227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.716278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.716337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.716775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.716796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.718950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.633 [2024-07-25 11:26:31.719016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.719060] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.719106] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.719542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.719601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.719648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.719693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.719739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.720048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.720069] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.722446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.722508] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.722554] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.633 [2024-07-25 11:26:31.722598] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.722915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.722986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.723033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.723078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.723122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.723537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.723559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.725730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.725788] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.725833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.725879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.726278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.633 [2024-07-25 11:26:31.726358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.726416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.726465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.726509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.726822] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.726845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.729145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.729214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.729260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.729304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.729659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.729727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.729774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.633 [2024-07-25 11:26:31.729819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.729864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.730243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.730264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.732409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.732478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.732528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.732572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.733010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.733067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.733113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.733165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.733211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.633 [2024-07-25 11:26:31.733622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.733643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.735951] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.736010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.736056] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.736113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.736429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.736498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.736557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.736607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.736652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.737072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.737093] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.633 [2024-07-25 11:26:31.739146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.739205] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.739250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.739309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.739616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.739679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.739739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.739790] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.739834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.740274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.740297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.742527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.633 [2024-07-25 11:26:31.742603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.633 [2024-07-25 11:26:31.742650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:24.633 (previous *ERROR* line from accel_dpdk_cryptodev.c:468 repeated continuously between 11:26:31.742650 and 11:26:31.960045; identical repeats collapsed)
00:44:24.896 [2024-07-25 11:26:31.960446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:24.896 [2024-07-25 11:26:31.960468] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.896 [2024-07-25 11:26:31.964285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.896 [2024-07-25 11:26:31.965819] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.896 [2024-07-25 11:26:31.967324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.896 [2024-07-25 11:26:31.968573] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.896 [2024-07-25 11:26:31.970169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.896 [2024-07-25 11:26:31.971676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.896 [2024-07-25 11:26:31.973186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.896 [2024-07-25 11:26:31.974029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.896 [2024-07-25 11:26:31.974479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.896 [2024-07-25 11:26:31.974501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.896 [2024-07-25 11:26:31.978167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.896 [2024-07-25 11:26:31.979761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.896 [2024-07-25 11:26:31.981463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.896 [2024-07-25 11:26:31.982297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.896 [2024-07-25 11:26:31.984201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.896 [2024-07-25 11:26:31.985708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.896 [2024-07-25 11:26:31.987009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.896 [2024-07-25 11:26:31.987408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.896 [2024-07-25 11:26:31.987833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.896 [2024-07-25 11:26:31.987858] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.896 [2024-07-25 11:26:31.991548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.896 [2024-07-25 11:26:31.993071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.896 [2024-07-25 11:26:31.993621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.896 [2024-07-25 11:26:31.995013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.896 [2024-07-25 11:26:31.996884] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:24.896 [2024-07-25 11:26:31.998581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.896 [2024-07-25 11:26:31.998984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.896 [2024-07-25 11:26:31.999386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.896 [2024-07-25 11:26:31.999791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.896 [2024-07-25 11:26:31.999812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.896 [2024-07-25 11:26:32.003392] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.896 [2024-07-25 11:26:32.004416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.896 [2024-07-25 11:26:32.005963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.896 [2024-07-25 11:26:32.007265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:24.896 [2024-07-25 11:26:32.009075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.155 [2024-07-25 11:26:32.009835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.155 [2024-07-25 11:26:32.010237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.155 [2024-07-25 11:26:32.010631] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.155 [2024-07-25 11:26:32.011030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.155 [2024-07-25 11:26:32.011050] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.155 [2024-07-25 11:26:32.014556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.155 [2024-07-25 11:26:32.015134] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.155 [2024-07-25 11:26:32.016557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.155 [2024-07-25 11:26:32.018154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.155 [2024-07-25 11:26:32.020065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.155 [2024-07-25 11:26:32.020477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.155 [2024-07-25 11:26:32.020877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.155 [2024-07-25 11:26:32.021281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.155 [2024-07-25 11:26:32.021728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.155 [2024-07-25 11:26:32.021750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.155 [2024-07-25 11:26:32.024683] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.155 [2024-07-25 11:26:32.025927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.155 [2024-07-25 11:26:32.027344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.155 [2024-07-25 11:26:32.028242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.155 [2024-07-25 11:26:32.029357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.155 [2024-07-25 11:26:32.029754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.155 [2024-07-25 11:26:32.030650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.155 [2024-07-25 11:26:32.031590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.155 [2024-07-25 11:26:32.032026] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.155 [2024-07-25 11:26:32.032048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.155 [2024-07-25 11:26:32.034643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.155 [2024-07-25 11:26:32.036228] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.037882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.039559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.156 [2024-07-25 11:26:32.040272] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.040681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.041073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.041476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.041912] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.041934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.044441] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.045680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.047227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.048753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.049497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.049896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.050299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.156 [2024-07-25 11:26:32.050693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.051137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.051164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.053833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.054254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.054657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.055059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.055911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.056316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.056709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.057122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.057498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.057519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.156 [2024-07-25 11:26:32.060265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.060676] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.061073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.061474] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.062281] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.062682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.063085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.063492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.063921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.063943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.066611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.067017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.067416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.156 [2024-07-25 11:26:32.067811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.068648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.069052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.069455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.069853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.070268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.070290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.072952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.073362] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.073758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.074162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.074992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.075400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.156 [2024-07-25 11:26:32.075794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.076192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.076626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.076649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.079429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.079837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.080246] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.080654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.081511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.081909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.082309] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.082710] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.083186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.156 [2024-07-25 11:26:32.083208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.085965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.086383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.086784] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.087184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.088023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.088429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.088829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.089242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.089685] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.089705] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.092412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.092817] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.156 [2024-07-25 11:26:32.093216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.093610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.094429] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.094835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.095253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.095649] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.096087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.096108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.098874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.099285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.156 [2024-07-25 11:26:32.099677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.157 [2024-07-25 11:26:32.100071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.157 [2024-07-25 11:26:32.100910] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.157 [2024-07-25 11:26:32.101327] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.157 [2024-07-25 11:26:32.101724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.157 [2024-07-25 11:26:32.102116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.157 [2024-07-25 11:26:32.102557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.157 [2024-07-25 11:26:32.102579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.157 [2024-07-25 11:26:32.105209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.157 [2024-07-25 11:26:32.105619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.157 [2024-07-25 11:26:32.106014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.157 [2024-07-25 11:26:32.106420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.157 [2024-07-25 11:26:32.107217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.157 [2024-07-25 11:26:32.107615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.157 [2024-07-25 11:26:32.108008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.157 [2024-07-25 11:26:32.108416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.157 [2024-07-25 11:26:32.108799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.160 [identical "Failed to get src_mbufs!" error from accel_dpdk_cryptodev.c:468 repeated continuously through 2024-07-25 11:26:32.210059]
00:44:25.160 [2024-07-25 11:26:32.210116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.160 [2024-07-25 11:26:32.210179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.160 [2024-07-25 11:26:32.210634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.160 [2024-07-25 11:26:32.210684] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.160 [2024-07-25 11:26:32.210730] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.160 [2024-07-25 11:26:32.211187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.160 [2024-07-25 11:26:32.255261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.160 [2024-07-25 11:26:32.255333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.160 [2024-07-25 11:26:32.255396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.160 [2024-07-25 11:26:32.256897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.160 [2024-07-25 11:26:32.257263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.160 [2024-07-25 11:26:32.257333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.160 [2024-07-25 11:26:32.259029] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.160 [2024-07-25 11:26:32.259085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.160 [2024-07-25 11:26:32.259150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.160 [2024-07-25 11:26:32.259515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.160 [2024-07-25 11:26:32.259943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.160 [2024-07-25 11:26:32.259964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.160 [2024-07-25 11:26:32.262148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.160 [2024-07-25 11:26:32.263634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.160 [2024-07-25 11:26:32.265167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.160 [2024-07-25 11:26:32.265920] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.160 [2024-07-25 11:26:32.267582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.160 [2024-07-25 11:26:32.267902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.160 [2024-07-25 11:26:32.267966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.160 [2024-07-25 11:26:32.269551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.160 [2024-07-25 11:26:32.269609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.160 [2024-07-25 11:26:32.271190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.420 [2024-07-25 11:26:32.271569] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.420 [2024-07-25 11:26:32.271589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.420 [2024-07-25 11:26:32.275618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.420 [2024-07-25 11:26:32.277054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.420 [2024-07-25 11:26:32.278564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.420 [2024-07-25 11:26:32.280192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.420 [2024-07-25 11:26:32.280692] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.420 [2024-07-25 11:26:32.282063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.420 [2024-07-25 11:26:32.283662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.420 [2024-07-25 11:26:32.285163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.420 [2024-07-25 11:26:32.286568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.420 [2024-07-25 11:26:32.286967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.420 [2024-07-25 11:26:32.286987] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.420 [2024-07-25 11:26:32.290425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.420 [2024-07-25 11:26:32.291899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.420 [2024-07-25 11:26:32.293384] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.420 [2024-07-25 11:26:32.294342] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.420 [2024-07-25 11:26:32.294663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.420 [2024-07-25 11:26:32.295921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.420 [2024-07-25 11:26:32.297413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.420 [2024-07-25 11:26:32.298904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.420 [2024-07-25 11:26:32.299448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.420 [2024-07-25 11:26:32.299924] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.420 [2024-07-25 11:26:32.299945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.420 [2024-07-25 11:26:32.303781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.420 [2024-07-25 11:26:32.305359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.420 [2024-07-25 11:26:32.306718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.420 [2024-07-25 11:26:32.307899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.420 [2024-07-25 11:26:32.308261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.420 [2024-07-25 11:26:32.309815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.420 [2024-07-25 11:26:32.311321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.420 [2024-07-25 11:26:32.312186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.420 [2024-07-25 11:26:32.312583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.420 [2024-07-25 11:26:32.313022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.420 [2024-07-25 11:26:32.313045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.420 [2024-07-25 11:26:32.316651] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.420 [2024-07-25 11:26:32.318160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.420 [2024-07-25 11:26:32.318767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.420 [2024-07-25 11:26:32.320007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.420 [2024-07-25 11:26:32.320336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.420 [2024-07-25 11:26:32.321954] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.420 [2024-07-25 11:26:32.323418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.323815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.324217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.324606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.324627] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.328130] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.329146] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.330613] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.331549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.421 [2024-07-25 11:26:32.331870] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.333428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.334456] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.334851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.335256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.335682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.335703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.339165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.339818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.341377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.343052] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.343380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.344965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.421 [2024-07-25 11:26:32.345377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.345772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.346171] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.346634] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.346656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.349998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.351100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.352336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.353833] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.354159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.355208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.355603] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.355997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.421 [2024-07-25 11:26:32.356396] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.356846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.356867] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.359214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.360635] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.362224] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.363724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.364047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.364467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.364862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.365261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.365655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.366091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.421 [2024-07-25 11:26:32.366112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.369168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.370418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.371917] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.373420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.373877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.374301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.374703] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.375111] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.375517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.375839] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.375859] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.378897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.421 [2024-07-25 11:26:32.380394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.381905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.382864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.383318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.383729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.384122] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.384519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.385575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.385923] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.385943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.389318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.391013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.392565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.421 [2024-07-25 11:26:32.392962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.393405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.393814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.394215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.394612] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.396217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.396536] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.396556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.399473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.399879] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.400280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.400674] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.401113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.421 [2024-07-25 11:26:32.402137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.403363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.404866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.406374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.406746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.406767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.421 [2024-07-25 11:26:32.408873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.422 [2024-07-25 11:26:32.409292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.422 [2024-07-25 11:26:32.409690] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.422 [2024-07-25 11:26:32.410085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.422 [2024-07-25 11:26:32.410496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.422 [2024-07-25 11:26:32.411935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.422 [2024-07-25 11:26:32.413544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.422 [2024-07-25 11:26:32.415046] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:25.686 [2024-07-25 11:26:32.558413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.558459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.558504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.558555] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.558865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.558885] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.560711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.560778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.560823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.560875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.561192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.561255] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.561307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.686 [2024-07-25 11:26:32.561356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.561400] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.561707] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.561727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.563865] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.563928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.563986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.564035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.564355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.564420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.564466] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.564510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.564553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.686 [2024-07-25 11:26:32.564973] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.564995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.566827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.566883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.566932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.566977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.567403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.567467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.567512] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.567557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.567601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.567990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.568010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.686 [2024-07-25 11:26:32.570037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.570095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.570148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.570194] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.570586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.570648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.570693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.570737] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.570782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.571215] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.571244] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.573005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.573068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.686 [2024-07-25 11:26:32.573116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.573168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.573510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.573571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.573617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.573660] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.573704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.574023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.574049] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.575900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.575957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.576002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.576047] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.686 [2024-07-25 11:26:32.576481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.576540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.576587] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.576632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.576689] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.576997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.577030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.579247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.579304] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.579353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.579398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.579706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.579770] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.686 [2024-07-25 11:26:32.579816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.579860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.579911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.580291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.580312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.582089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.582153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.686 [2024-07-25 11:26:32.582216] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.582262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.582706] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.582769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.582816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.582862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.687 [2024-07-25 11:26:32.582907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.583321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.583343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.585383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.585439] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.585484] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.585529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.585837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.585900] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.585945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.585993] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.586045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.586478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.687 [2024-07-25 11:26:32.586499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.588360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.588420] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.588465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.588510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.588829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.588895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.588940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.588984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.589027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.589465] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.589486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.591571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.687 [2024-07-25 11:26:32.591628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.591682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.591728] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.592038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.592096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.592162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.592207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.592252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.592561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.592580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.594419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.594481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.594530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.687 [2024-07-25 11:26:32.594575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.594886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.594952] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.594998] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.595044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.595089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.595521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.595543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.597759] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.597823] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.597883] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.597933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.598251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.687 [2024-07-25 11:26:32.598315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.598360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.598405] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.598449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.598756] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.598775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.600596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.600656] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.600700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.600745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.601090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.601160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.601207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.687 [2024-07-25 11:26:32.601252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.601301] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.601609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.601629] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.603932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.603990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.604040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.604086] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.604476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.604539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.604585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.604628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.687 [2024-07-25 11:26:32.604673] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.687 [2024-07-25 11:26:32.605014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:25.952 [2024-07-25 11:26:32.833058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.833470] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.833785] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.833806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.836446] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.836851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.837250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.837643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.838043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.838463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.839811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.840323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.840717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.952 [2024-07-25 11:26:32.841067] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.841087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.844778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.845190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.845589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.845989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.846381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.847764] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.848167] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.848560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.850044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.850492] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.850514] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.952 [2024-07-25 11:26:32.853160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.853568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.853962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.854372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.854771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.856113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.856628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.857023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.858283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.858701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.858721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.862113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.862528] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.952 [2024-07-25 11:26:32.862929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.863745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.864072] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.864491] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.864962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.866358] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.866752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.867192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.867213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.869852] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.870268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.870670] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.871070] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.952 [2024-07-25 11:26:32.871398] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.871827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.872229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.873548] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.874090] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.874542] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.874564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.877988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.878409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.879290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.880261] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.880694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.881210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.952 [2024-07-25 11:26:32.882544] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.882942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.883346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.883837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.883857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.886571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.886992] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.887403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.889068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.889545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.889953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.891368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.891808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.952 [2024-07-25 11:26:32.892214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.892586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.892606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.952 [2024-07-25 11:26:32.896010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.896950] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.897860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.898264] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.898654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.899928] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.900330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.900726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.901127] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.901541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.953 [2024-07-25 11:26:32.901563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.904335] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.904745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.906473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.906869] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.907322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.908712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.909174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.909568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.909964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.910336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.910357] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.914282] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.953 [2024-07-25 11:26:32.915209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.915606] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.915657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.916022] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.917113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.917513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.917909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.918319] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.918792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.918813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.921551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.921966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.923531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.953 [2024-07-25 11:26:32.923932] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.924368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.925908] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.926318] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.926713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.927115] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.927523] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.927545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.931841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.931906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.932322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.932372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.932809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.953 [2024-07-25 11:26:32.934170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.934222] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.934618] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.934666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.935120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.935148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.937481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.937543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.937935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.937983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.938419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.938840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.938897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.953 [2024-07-25 11:26:32.939314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.939376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.939772] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.939792] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.945343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.945419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.945895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.945947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.946270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.946679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.946731] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.947226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.947277] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.953 [2024-07-25 11:26:32.947593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.947617] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.950964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.951374] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.951425] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.952285] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.952620] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.953031] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.953675] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.953727] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.954964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.955287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:25.953 [2024-07-25 11:26:32.955307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:25.953 [2024-07-25 11:26:32.960504] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.216 [... identical "Failed to get src_mbufs!" error repeated continuously from 11:26:32.961798 through 11:26:33.077418 ...] 
00:44:26.216 [2024-07-25 11:26:33.077732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.216 [2024-07-25 11:26:33.077751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.216 [2024-07-25 11:26:33.079878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.216 [2024-07-25 11:26:33.079935] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.216 [2024-07-25 11:26:33.079981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.216 [2024-07-25 11:26:33.080027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.216 [2024-07-25 11:26:33.080469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.216 [2024-07-25 11:26:33.080527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.216 [2024-07-25 11:26:33.080577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.216 [2024-07-25 11:26:33.080622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.216 [2024-07-25 11:26:33.080666] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.216 [2024-07-25 11:26:33.081042] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.081062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.217 [2024-07-25 11:26:33.085845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.085904] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.085959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.087413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.087781] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.087844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.087890] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.087933] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.087981] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.088417] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.088437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.090610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.090667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.217 [2024-07-25 11:26:33.090717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.090761] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.091073] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.091137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.091193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.091237] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.091280] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.091590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.091610] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.096406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.097257] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.097310] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.098105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.217 [2024-07-25 11:26:33.098552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.098619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.099513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.099564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.100330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.100757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.100778] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.102601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.103179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.103232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.104481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.104802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.104868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.217 [2024-07-25 11:26:33.106407] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.106458] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.107412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.107739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.107758] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.110765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.112148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.112201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.113842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.114169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.114234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.115038] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.115089] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.217 [2024-07-25 11:26:33.116339] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.116659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.116679] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.120583] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.120647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.121039] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.121087] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.121519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.121582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.121628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.122916] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.122967] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.123291] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.217 [2024-07-25 11:26:33.123312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.127941] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.128005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.128818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.128874] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.129263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.129326] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.129721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.129769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.129816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.217 [2024-07-25 11:26:33.130158] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.130179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.133974] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.218 [2024-07-25 11:26:33.134032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.134793] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.134844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.135220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.136804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.136860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.136905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.138416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.138798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.138818] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.146592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.148220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.149936] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.218 [2024-07-25 11:26:33.151589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.151960] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.152023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.153262] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.153313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.154827] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.155150] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.155170] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.158767] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.160365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.161743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.163260] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.163579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.218 [2024-07-25 11:26:33.164153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.165590] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.167208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.168734] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.169055] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.169076] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.174145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.175380] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.176905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.178426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.178801] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.180243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.181503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.218 [2024-07-25 11:26:33.183021] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.184546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.184959] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.184980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.191511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.193107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.194626] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.196058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.196419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.197680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.199186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.200695] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.201724] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.218 [2024-07-25 11:26:33.202054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.202075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.206517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.208036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.209575] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.210102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.210427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.211965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.213589] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.215354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.216032] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.216360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.216381] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.218 [2024-07-25 11:26:33.220652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.222195] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.223389] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.224751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.225096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.226655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.228184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.228914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.230592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.231033] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.231054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.235270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.218 [2024-07-25 11:26:33.236944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.218 [2024-07-25 11:26:33.237717] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.532 (previous message repeated ~270 more times between 11:26:33.237717 and 11:26:33.486553)
00:44:26.533 [2024-07-25 11:26:33.493369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.493782] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.493834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.494232] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.494551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.494622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.496165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.497795] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.497868] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.498191] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.498212] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.502639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.502697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.533 [2024-07-25 11:26:33.502742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.502786] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.503214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.503739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.503791] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.504927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.504975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.505395] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.505419] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.509713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.509776] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.509821] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.509878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.533 [2024-07-25 11:26:33.510200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.510265] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.510312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.510356] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.510401] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.510712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.510732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.513753] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.513811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.513856] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.513902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.514275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.514338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.533 [2024-07-25 11:26:33.514383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.514427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.514472] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.514812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.514832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.519437] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.519496] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.519541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.519585] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.519984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.520048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.520094] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.520145] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.533 [2024-07-25 11:26:33.520190] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.520539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.520559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.523353] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.523410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.523455] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.523499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.523809] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.523872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.523918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.523970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.524018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.524338] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.533 [2024-07-25 11:26:33.524359] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.528906] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.528965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.529018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.529064] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.529501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.529560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.533 [2024-07-25 11:26:33.529607] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.534 [2024-07-25 11:26:33.529652] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.534 [2024-07-25 11:26:33.529696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.534 [2024-07-25 11:26:33.530016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.534 [2024-07-25 11:26:33.530036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.534 [2024-07-25 11:26:33.533749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.534 [2024-07-25 11:26:33.533814] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.534 [2024-07-25 11:26:33.533862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.534 [2024-07-25 11:26:33.533907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.534 [2024-07-25 11:26:33.534256] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.534 [2024-07-25 11:26:33.534323] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.534 [2024-07-25 11:26:33.534369] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.534 [2024-07-25 11:26:33.534413] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.534 [2024-07-25 11:26:33.534464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.534 [2024-07-25 11:26:33.534775] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.534 [2024-07-25 11:26:33.534794] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.534 [2024-07-25 11:26:33.539503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.534 [2024-07-25 11:26:33.539560] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.534 [2024-07-25 11:26:33.539605] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.534 [2024-07-25 11:26:33.539648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.534 [2024-07-25 11:26:33.540040] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.534 [2024-07-25 11:26:33.540103] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.534 [2024-07-25 11:26:33.540155] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.534 [2024-07-25 11:26:33.540201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.534 [2024-07-25 11:26:33.540247] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.534 [2024-07-25 11:26:33.540661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.534 [2024-07-25 11:26:33.540682] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.534 [2024-07-25 11:26:33.545815] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.534 [2024-07-25 11:26:33.545873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.534 [2024-07-25 11:26:33.545918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.534 [2024-07-25 11:26:33.545962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.534 [2024-07-25 11:26:33.546283] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.534 [2024-07-25 11:26:33.546348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.534 [2024-07-25 11:26:33.546404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.535 [2024-07-25 11:26:33.546454] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.535 [2024-07-25 11:26:33.546498] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.535 [2024-07-25 11:26:33.546808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.535 [2024-07-25 11:26:33.546829] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.535 [2024-07-25 11:26:33.551192] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.535 [2024-07-25 11:26:33.551250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.535 [2024-07-25 11:26:33.551299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.535 [2024-07-25 11:26:33.551346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.535 [2024-07-25 11:26:33.551659] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.535 [2024-07-25 11:26:33.551723] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.535 [2024-07-25 11:26:33.551769] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.535 [2024-07-25 11:26:33.551826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.535 [2024-07-25 11:26:33.551871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.535 [2024-07-25 11:26:33.552225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.535 [2024-07-25 11:26:33.552250] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.535 [2024-07-25 11:26:33.557008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.535 [2024-07-25 11:26:33.557068] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.535 [2024-07-25 11:26:33.557113] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.535 [2024-07-25 11:26:33.557164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.535 [2024-07-25 11:26:33.557481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.535 [2024-07-25 11:26:33.557547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.535 [2024-07-25 11:26:33.557593] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.535 [2024-07-25 11:26:33.557636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.535 [2024-07-25 11:26:33.557691] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.535 [2024-07-25 11:26:33.558164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.535 [2024-07-25 11:26:33.558186] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.536 [2024-07-25 11:26:33.560871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.536 [2024-07-25 11:26:33.560938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.536 [2024-07-25 11:26:33.560982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.536 [2024-07-25 11:26:33.561027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.536 [2024-07-25 11:26:33.561346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.536 [2024-07-25 11:26:33.561411] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.536 [2024-07-25 11:26:33.561457] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.536 [2024-07-25 11:26:33.561500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.536 [2024-07-25 11:26:33.561546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.536 [2024-07-25 11:26:33.561982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.536 [2024-07-25 11:26:33.562001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.536 [2024-07-25 11:26:33.566552] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.536 [2024-07-25 11:26:33.566609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.536 [2024-07-25 11:26:33.566655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.536 [2024-07-25 11:26:33.566701] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.536 [2024-07-25 11:26:33.567157] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.536 [2024-07-25 11:26:33.567218] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.536 [2024-07-25 11:26:33.567276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.536 [2024-07-25 11:26:33.567324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.536 [2024-07-25 11:26:33.567368] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.536 [2024-07-25 11:26:33.567678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.536 [2024-07-25 11:26:33.567712] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.536 [2024-07-25 11:26:33.571530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.537 [2024-07-25 11:26:33.571588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.537 [2024-07-25 11:26:33.571639] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.537 [2024-07-25 11:26:33.571688] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.537 [2024-07-25 11:26:33.572000] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.537 [2024-07-25 11:26:33.572062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.537 [2024-07-25 11:26:33.572117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.537 [2024-07-25 11:26:33.572169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.537 [2024-07-25 11:26:33.572213] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.537 [2024-07-25 11:26:33.572526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.537 [2024-07-25 11:26:33.572546] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.537 [2024-07-25 11:26:33.576501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.537 [2024-07-25 11:26:33.576565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.537 [2024-07-25 11:26:33.576611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.537 [2024-07-25 11:26:33.576655] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.537 [2024-07-25 11:26:33.577091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.802 [... previous *ERROR* line repeated (same file/line, timestamps 11:26:33.577091 through 11:26:33.781680); duplicate entries omitted] 
00:44:26.802 [2024-07-25 11:26:33.782075] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.782476] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.782864] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.783278] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.783677] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.784078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.784490] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.784956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.784977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.788517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.788921] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.789324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.789722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.802 [2024-07-25 11:26:33.790091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.790510] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.790905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.791303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.791696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.792117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.792137] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.795650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.796071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.796478] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.796878] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.797343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.797750] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.802 [2024-07-25 11:26:33.798153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.798551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.798953] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.799382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.799404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.802911] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.803322] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.803722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.804124] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.804531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.804945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.805348] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.805742] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.802 [2024-07-25 11:26:33.806143] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.806538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.806559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.810041] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.810464] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.810860] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.811259] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.811696] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.812107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.812517] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.812914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.813314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.813732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.802 [2024-07-25 11:26:33.813752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.817253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.817663] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.818065] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.818485] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.818962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.819379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.819774] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.820174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.820570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.820946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.820965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.823586] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.802 [2024-07-25 11:26:33.824001] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.824408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.824802] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.825210] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.825619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.826016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.826432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.826842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.827307] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.827329] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.829977] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.830391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.802 [2024-07-25 11:26:33.830787] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.803 [2024-07-25 11:26:33.831189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.831625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.832034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.832445] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.832841] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.833242] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.833643] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.833667] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.836328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.836732] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.837128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.837529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.837956] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.803 [2024-07-25 11:26:33.838376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.838773] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.839175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.839570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.840045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.840066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.842748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.843164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.843568] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.843966] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.844426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.844838] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.845241] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.803 [2024-07-25 11:26:33.845633] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.846027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.846431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.846453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.849201] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.849619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.850015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.850416] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.850798] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.851211] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.851609] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.852014] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.852434] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.803 [2024-07-25 11:26:33.852886] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.852907] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.855600] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.856007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.856408] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.856813] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.857268] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.858263] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.859100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.860233] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.860938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.861383] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.861403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.803 [2024-07-25 11:26:33.864121] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.864535] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.864931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.865330] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.865762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.866179] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.866581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.866980] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.867382] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.867796] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.867816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.870511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.870915] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.803 [2024-07-25 11:26:33.871314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.871708] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.872107] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.872529] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.872931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.873331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.873725] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.874175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.874197] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.876506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.877771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.879290] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.880808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.803 [2024-07-25 11:26:33.881151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.881564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.881958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.882354] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.882754] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.883152] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.883174] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.886721] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.888331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.890013] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.891678] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.892059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:26.803 [2024-07-25 11:26:33.892477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:26.803 [2024-07-25 11:26:33.892873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.068 [... preceding *ERROR* line repeated continuously from 2024-07-25 11:26:33.892873 through 11:26:34.019709 ...] 
00:44:27.068 [2024-07-25 11:26:34.020062] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.068 [2024-07-25 11:26:34.020128] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.068 [2024-07-25 11:26:34.020182] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.068 [2024-07-25 11:26:34.020226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.068 [2024-07-25 11:26:34.020270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.068 [2024-07-25 11:26:34.020577] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.068 [2024-07-25 11:26:34.020597] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.068 [2024-07-25 11:26:34.022862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.068 [2024-07-25 11:26:34.022919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.068 [2024-07-25 11:26:34.022970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.068 [2024-07-25 11:26:34.023015] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.068 [2024-07-25 11:26:34.023461] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.068 [2024-07-25 11:26:34.023519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.068 [2024-07-25 11:26:34.023566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.068 [2024-07-25 11:26:34.023611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.068 [2024-07-25 11:26:34.023657] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.068 [2024-07-25 11:26:34.024004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.068 [2024-07-25 11:26:34.024023] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.068 [2024-07-25 11:26:34.025876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.068 [2024-07-25 11:26:34.025934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.068 [2024-07-25 11:26:34.025983] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.068 [2024-07-25 11:26:34.026028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.068 [2024-07-25 11:26:34.026367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.068 [2024-07-25 11:26:34.026431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.068 [2024-07-25 11:26:34.026477] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.068 [2024-07-25 11:26:34.026522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.068 [2024-07-25 11:26:34.026567] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.068 [2024-07-25 11:26:34.026875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.068 [2024-07-25 11:26:34.026895] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.068 [2024-07-25 11:26:34.029085] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.068 [2024-07-25 11:26:34.029148] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.068 [2024-07-25 11:26:34.029193] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.068 [2024-07-25 11:26:34.029238] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.068 [2024-07-25 11:26:34.029681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.068 [2024-07-25 11:26:34.029743] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.068 [2024-07-25 11:26:34.029789] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.068 [2024-07-25 11:26:34.029835] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.068 [2024-07-25 11:26:34.029892] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.068 [2024-07-25 11:26:34.030209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.068 [2024-07-25 11:26:34.030230] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.068 [2024-07-25 11:26:34.032063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.068 [2024-07-25 11:26:34.032120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.068 [2024-07-25 11:26:34.032172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.068 [2024-07-25 11:26:34.032220] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.032538] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.032601] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.032646] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.032697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.032745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.033054] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.033074] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.035303] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.069 [2024-07-25 11:26:34.035360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.035406] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.035452] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.035905] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.035962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.036010] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.036058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.036102] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.036432] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.036453] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.038274] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.038337] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.038386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.069 [2024-07-25 11:26:34.038436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.038745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.038810] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.038855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.038899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.038947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.039266] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.039287] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.041596] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.041654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.041700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.041745] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.042160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.069 [2024-07-25 11:26:34.042226] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.042271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.042316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.042361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.042693] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.042713] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.044591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.044650] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.044694] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.044739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.045048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.045112] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.045164] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.069 [2024-07-25 11:26:34.045209] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.045253] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.045561] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.045580] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.047975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.048034] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.048079] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.049160] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.049506] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.049570] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.049621] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.049665] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.049709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.069 [2024-07-25 11:26:34.050017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.050036] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.051871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.051930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.051976] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.052020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.052340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.052404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.052449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.052500] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.052547] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.052964] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.052985] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.069 [2024-07-25 11:26:34.055229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.056469] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.056522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.058037] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.058363] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.058433] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.059175] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.069 [2024-07-25 11:26:34.059229] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.070 [2024-07-25 11:26:34.060630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.070 [2024-07-25 11:26:34.060948] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.070 [2024-07-25 11:26:34.060968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.070 [2024-07-25 11:26:34.063044] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.070 [2024-07-25 11:26:34.063459] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.070 [2024-07-25 11:26:34.063511] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.070 [2024-07-25 11:26:34.063902] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.070 [2024-07-25 11:26:34.064252] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.070 [2024-07-25 11:26:34.064315] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.070 [2024-07-25 11:26:34.065540] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.070 [2024-07-25 11:26:34.065592] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.070 [2024-07-25 11:26:34.067105] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.070 [2024-07-25 11:26:34.067427] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.070 [2024-07-25 11:26:34.067447] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.070 [2024-07-25 11:26:34.069297] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.070 [2024-07-25 11:26:34.070234] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.070 [2024-07-25 11:26:34.070286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.070 [2024-07-25 11:26:34.070680] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.070 [2024-07-25 11:26:34.071120] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.070 [2024-07-25 11:26:34.071187] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.070 [2024-07-25 11:26:34.071581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.070 [2024-07-25 11:26:34.071630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.070 [2024-07-25 11:26:34.072020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.070 [2024-07-25 11:26:34.072343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.070 [2024-07-25 11:26:34.072364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.070 [2024-07-25 11:26:34.074214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.070 [2024-07-25 11:26:34.074270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.070 [2024-07-25 11:26:34.075513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.070 [2024-07-25 11:26:34.075565] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.070 [2024-07-25 11:26:34.075877] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.070 [2024-07-25 11:26:34.075942] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.070 [2024-07-25 11:26:34.075988] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.070 [2024-07-25 11:26:34.077520] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.070 [2024-07-25 11:26:34.077571] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.070 [2024-07-25 11:26:34.078078] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.070 [2024-07-25 11:26:34.078098] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.070 [2024-07-25 11:26:34.080463] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.070 [2024-07-25 11:26:34.080527] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.070 [2024-07-25 11:26:34.082045] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.070 [2024-07-25 11:26:34.082096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.070 [2024-07-25 11:26:34.082428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.070 [2024-07-25 11:26:34.082493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.070 [2024-07-25 11:26:34.083481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.070 [2024-07-25 11:26:34.083531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.070 [2024-07-25 11:26:34.083591] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:27.335 [2024-07-25 11:26:34.281949] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.282351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.282749] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.283162] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.283184] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.286479] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.287897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.289426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.289486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.289806] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.290227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.290625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.291018] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.335 [2024-07-25 11:26:34.291418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.291849] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.291871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.294741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.295990] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.297488] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.299009] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.299387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.299804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.300208] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.300604] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.301004] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.301379] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.335 [2024-07-25 11:26:34.301412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.304647] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.304711] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.306225] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.306276] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.306595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.307894] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.307944] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.308340] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.308391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.308836] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.308857] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.312509] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.335 [2024-07-25 11:26:34.312572] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.314240] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.314298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.314729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.315995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.316048] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.317579] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.317630] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.317946] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.317965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.320564] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.320625] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.322275] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.335 [2024-07-25 11:26:34.322333] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.322648] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.324200] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.324254] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.325957] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.326007] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.326393] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.326414] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.328615] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.329020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.329084] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.329483] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.329925] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.335 [2024-07-25 11:26:34.330748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.331975] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.332028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.333530] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.333853] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.333873] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.337172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.337582] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.337636] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.338027] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.338426] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.338837] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.338889] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.335 [2024-07-25 11:26:34.339608] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.340846] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.341168] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.341189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.344486] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.346043] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.346104] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.346505] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.346943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.347008] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.335 [2024-07-25 11:26:34.347410] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.347804] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.347851] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.336 [2024-07-25 11:26:34.348267] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.348288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.350058] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.350117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.350172] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.350217] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.350557] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.352286] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.352341] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.353863] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.353913] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.354258] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.354279] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.336 [2024-07-25 11:26:34.356522] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.356581] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.356628] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.356687] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.356997] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.357063] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.357109] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.357169] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.357214] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.357526] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.357545] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.359328] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.359386] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.336 [2024-07-25 11:26:34.359450] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.359499] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.359808] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.359872] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.359918] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.359962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.360005] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.360376] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.360399] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.362816] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.362897] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.362943] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.362986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.336 [2024-07-25 11:26:34.363336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.363403] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.363449] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.363493] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.363537] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.363845] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.363866] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.365739] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.365799] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.365844] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.365888] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.366207] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.366271] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.336 [2024-07-25 11:26:34.366316] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.366360] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.366404] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.366820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.366840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.369388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.369448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.369497] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.369541] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.369875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.369938] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.369984] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.370028] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.336 [2024-07-25 11:26:34.370071] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.370391] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.370412] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.372243] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.372299] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.372344] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.372388] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.372697] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.372765] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.372811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.372855] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.372899] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.336 [2024-07-25 11:26:34.373409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.336 [2024-07-25 11:26:34.373431] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:27.599 [2024-07-25 11:26:34.466969] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs!
00:44:27.599 [2024-07-25 11:26:34.467367] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.599 [2024-07-25 11:26:34.467805] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.599 [2024-07-25 11:26:34.467826] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.599 [2024-07-25 11:26:34.471559] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.599 [2024-07-25 11:26:34.472965] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.599 [2024-07-25 11:26:34.473736] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.599 [2024-07-25 11:26:34.474970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.599 [2024-07-25 11:26:34.475298] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.599 [2024-07-25 11:26:34.475364] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.599 [2024-07-25 11:26:34.476909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.599 [2024-07-25 11:26:34.476961] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.599 [2024-07-25 11:26:34.477875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.599 [2024-07-25 11:26:34.478292] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.599 [2024-07-25 11:26:34.478313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.599 [2024-07-25 11:26:34.480898] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.599 [2024-07-25 11:26:34.481312] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.599 [2024-07-25 11:26:34.481709] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.599 [2024-07-25 11:26:34.482101] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.599 [2024-07-25 11:26:34.482553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.599 [2024-07-25 11:26:34.482962] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.483372] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.483783] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.484189] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.484632] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.484654] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.487311] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.600 [2024-07-25 11:26:34.487715] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.488116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.488515] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.488929] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.489351] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.489751] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.490153] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.490551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.490970] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.490991] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.493662] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.494066] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.494473] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.600 [2024-07-25 11:26:34.494875] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.495331] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.495746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.496154] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.496551] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.496947] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.497355] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.497377] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.500096] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.500513] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.500927] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.501346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.501800] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.600 [2024-07-25 11:26:34.502219] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.502616] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.503012] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.503421] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.503840] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.503862] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.506619] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.507035] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.507448] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.507848] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.508251] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.508661] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.509059] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.600 [2024-07-25 11:26:34.509467] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.509876] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.510324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.510347] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.513020] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.513436] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.513834] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.514236] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.514681] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.515091] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.515501] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.515896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.516296] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.600 [2024-07-25 11:26:34.516726] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.516746] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.519332] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.600 [2024-07-25 11:26:34.519741] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.520136] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.520539] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.520945] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.521365] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.521762] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.522163] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.522558] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.522995] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.523017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.601 [2024-07-25 11:26:34.525752] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.526165] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.526563] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.526963] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.527394] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.527803] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.528202] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.528595] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.528989] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.529387] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.529409] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.532132] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.532566] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.601 [2024-07-25 11:26:34.532968] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.533378] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.533811] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.534227] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.534622] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.535016] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.535428] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.535812] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.535832] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.538549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.538958] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.539361] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.539755] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.601 [2024-07-25 11:26:34.540199] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.540611] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.541011] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.541418] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.541820] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.542270] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.542293] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.544909] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.545321] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.545716] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.546108] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.546482] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.546896] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.601 [2024-07-25 11:26:34.547305] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.547700] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.548095] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.548549] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.548576] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.551313] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.551718] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.552114] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.552521] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.552903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.601 [2024-07-25 11:26:34.553324] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.553722] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.554116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.602 [2024-07-25 11:26:34.554518] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.554914] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.554934] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.557346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.557757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.558159] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.558556] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.558931] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.559349] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.559748] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.560151] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.560553] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.560982] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.602 [2024-07-25 11:26:34.561002] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.563623] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.564030] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.564442] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.564842] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.565294] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.565704] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.566100] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.566503] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.566903] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.567314] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.567336] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.570057] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.602 [2024-07-25 11:26:34.570481] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.570882] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.571288] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.571729] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.572147] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.572543] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.574180] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.575588] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.575919] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.575940] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.579343] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.580117] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.580534] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:27.602 [2024-07-25 11:26:34.580930] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.581346] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.581757] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.582771] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.584017] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.585531] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.585850] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.585871] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.589302] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.589719] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.590116] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.590519] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 00:44:27.602 [2024-07-25 11:26:34.590986] accel_dpdk_cryptodev.c: 468:accel_dpdk_cryptodev_task_alloc_resources: *ERROR*: Failed to get src_mbufs! 
00:44:28.169
00:44:28.169 Latency(us)
00:44:28.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:44:28.169 Job: crypto_ram (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:44:28.169 Verification LBA range: start 0x0 length 0x100
00:44:28.169 crypto_ram : 5.97 42.88 2.68 0.00 0.00 2900744.60 291923.56 2469606.20
00:44:28.169 Job: crypto_ram (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:44:28.169 Verification LBA range: start 0x100 length 0x100
00:44:28.169 crypto_ram : 6.05 42.32 2.65 0.00 0.00 2943808.31 285212.67 2617245.70
00:44:28.169 Job: crypto_ram1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:44:28.169 Verification LBA range: start 0x0 length 0x100
00:44:28.169 crypto_ram1 : 5.97 42.87 2.68 0.00 0.00 2807011.74 291923.56 2295123.15
00:44:28.169 Job: crypto_ram1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:44:28.169 Verification LBA range: start 0x100 length 0x100
00:44:28.169 crypto_ram1 : 6.05 42.31 2.64 0.00 0.00 2850340.86 285212.67 2429340.88
00:44:28.169 Job: crypto_ram2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:44:28.169 Verification LBA range: start 0x0 length 0x100
00:44:28.169 crypto_ram2 : 5.55 279.04 17.44 0.00 0.00 411351.64 8283.75 617401.55
00:44:28.169 Job: crypto_ram2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:44:28.169 Verification LBA range: start 0x100 length 0x100
00:44:28.169 crypto_ram2 : 5.58 266.72 16.67 0.00 0.00 429942.36 56623.10 634178.76
00:44:28.169 Job: crypto_ram3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:44:28.169 Verification LBA range: start 0x0 length 0x100
00:44:28.169 crypto_ram3 : 5.68 292.59 18.29 0.00 0.00 381544.46 66270.00 489894.71
00:44:28.169 Job: crypto_ram3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:44:28.169 Verification LBA range: start 0x100 length 0x100
00:44:28.169 crypto_ram3 : 5.72 277.81 17.36 0.00 0.00 400128.62 32086.43 359032.42
00:44:28.169 ===================================================================================================================
00:44:28.169 Total : 1286.55 80.41 0.00 0.00 750958.17 8283.75 2617245.70
00:44:31.449
00:44:31.449 real 0m13.413s
00:44:31.449 user 0m24.463s
00:44:31.449 sys 0m0.772s
00:44:31.449 11:26:38 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:44:31.449 11:26:38 blockdev_crypto_qat.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:44:31.449 ************************************
00:44:31.449 END TEST bdev_verify_big_io
00:44:31.449 ************************************
00:44:31.449 11:26:38 blockdev_crypto_qat -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:44:31.449 11:26:38 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:44:31.449 11:26:38 blockdev_crypto_qat --
common/autotest_common.sh@1107 -- # xtrace_disable
00:44:31.449 11:26:38 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:44:31.449 ************************************
00:44:31.449 START TEST bdev_write_zeroes
00:44:31.449 ************************************
00:44:31.449 11:26:38 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:44:31.449 [2024-07-25 11:26:38.225249] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:44:31.449 [2024-07-25 11:26:38.225367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3874592 ]
00:44:31.449 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:31.449 EAL: Requested device 0000:3d:01.0 cannot be used
00:44:31.450 [2024-07-25 11:26:38.450522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:44:31.708 [2024-07-25 11:26:38.724470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:44:31.708 [2024-07-25 11:26:38.746251] accel_dpdk_cryptodev.c: 223:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_qat
00:44:31.708 [2024-07-25 11:26:38.754279] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will
be assigned to module dpdk_cryptodev 00:44:31.708 [2024-07-25 11:26:38.762284] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:44:32.274 [2024-07-25 11:26:39.155283] accel_dpdk_cryptodev.c:1178:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 96 00:44:35.554 [2024-07-25 11:26:42.008918] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc" 00:44:35.555 [2024-07-25 11:26:42.009008] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:44:35.555 [2024-07-25 11:26:42.009032] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:44:35.555 [2024-07-25 11:26:42.016934] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts" 00:44:35.555 [2024-07-25 11:26:42.016974] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:44:35.555 [2024-07-25 11:26:42.016991] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:44:35.555 [2024-07-25 11:26:42.024976] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_cbc2" 00:44:35.555 [2024-07-25 11:26:42.025011] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:44:35.555 [2024-07-25 11:26:42.025027] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:44:35.555 [2024-07-25 11:26:42.032969] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "test_dek_qat_xts2" 00:44:35.555 [2024-07-25 11:26:42.033001] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:44:35.555 [2024-07-25 11:26:42.033016] vbdev_crypto.c: 617:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:44:35.555 Running I/O for 1 seconds... 
00:44:36.489
00:44:36.489 Latency(us)
00:44:36.489 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:44:36.489 Job: crypto_ram (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:44:36.489 crypto_ram : 1.03 1881.36 7.35 0.00 0.00 67355.68 7235.17 82208.36
00:44:36.489 Job: crypto_ram1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:44:36.489 crypto_ram1 : 1.03 1894.52 7.40 0.00 0.00 66521.46 6501.17 75497.47
00:44:36.489 Job: crypto_ram2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:44:36.489 crypto_ram2 : 1.02 14510.16 56.68 0.00 0.00 8665.03 2634.55 11481.91
00:44:36.489 Job: crypto_ram3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:44:36.489 crypto_ram3 : 1.03 14543.75 56.81 0.00 0.00 8614.97 2634.55 9017.75
00:44:36.489 ===================================================================================================================
00:44:36.489 Total : 32829.79 128.24 0.00 0.00 15376.11 2634.55 82208.36
00:44:39.019
00:44:39.019 real 0m7.833s
00:44:39.019 user 0m7.245s
00:44:39.019 sys 0m0.523s
00:44:39.019 11:26:45 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable
00:44:39.019 11:26:45 blockdev_crypto_qat.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:44:39.019 ************************************
00:44:39.019 END TEST bdev_write_zeroes
00:44:39.019 ************************************
00:44:39.019 11:26:45 blockdev_crypto_qat -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:44:39.019 11:26:45 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:44:39.019 11:26:45 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable
00:44:39.019 11:26:45
blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x
00:44:39.019 ************************************
00:44:39.019 START TEST bdev_json_nonenclosed
00:44:39.019 ************************************
00:44:39.019 11:26:46 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:44:39.278 [2024-07-25 11:26:46.138905] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:44:39.278 [2024-07-25 11:26:46.139029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3875825 ]
00:44:39.278 [2024-07-25 11:26:46.366693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:44:39.536 [2024-07-25 11:26:46.634835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:44:39.536 [2024-07-25 11:26:46.634926] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:44:39.536 [2024-07-25 11:26:46.634954] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:44:39.536 [2024-07-25 11:26:46.634970] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:44:40.470 00:44:40.470 real 0m1.195s 00:44:40.470 user 0m0.915s 00:44:40.470 sys 0m0.273s 00:44:40.470 11:26:47 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:40.470 11:26:47 blockdev_crypto_qat.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:44:40.470 ************************************ 00:44:40.470 END TEST bdev_json_nonenclosed 00:44:40.470 ************************************ 00:44:40.470 11:26:47 blockdev_crypto_qat -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:44:40.470 11:26:47 blockdev_crypto_qat -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:44:40.470 11:26:47 blockdev_crypto_qat -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:40.470 11:26:47 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:44:40.470 ************************************ 00:44:40.470 START TEST bdev_json_nonarray 00:44:40.470 ************************************ 00:44:40.470 11:26:47 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:44:40.470 [2024-07-25 11:26:47.415378] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:44:40.470 [2024-07-25 11:26:47.415492] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3875967 ]
00:44:40.728 [2024-07-25 11:26:47.640396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:44:40.986 [2024-07-25 11:26:47.905242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:44:40.986 [2024-07-25 11:26:47.905343] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:44:40.986 [2024-07-25 11:26:47.905369] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:44:40.986 [2024-07-25 11:26:47.905384] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:44:41.551 00:44:41.551 real 0m1.162s 00:44:41.551 user 0m0.901s 00:44:41.551 sys 0m0.254s 00:44:41.552 11:26:48 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:44:41.552 11:26:48 blockdev_crypto_qat.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:44:41.552 ************************************ 00:44:41.552 END TEST bdev_json_nonarray 00:44:41.552 ************************************ 00:44:41.552 11:26:48 blockdev_crypto_qat -- bdev/blockdev.sh@786 -- # [[ crypto_qat == bdev ]] 00:44:41.552 11:26:48 blockdev_crypto_qat -- bdev/blockdev.sh@793 -- # [[ crypto_qat == gpt ]] 00:44:41.552 11:26:48 blockdev_crypto_qat -- bdev/blockdev.sh@797 -- # [[ crypto_qat == crypto_sw ]] 00:44:41.552 11:26:48 blockdev_crypto_qat -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:44:41.552 11:26:48 blockdev_crypto_qat -- bdev/blockdev.sh@810 -- # cleanup 00:44:41.552 11:26:48 blockdev_crypto_qat -- bdev/blockdev.sh@23 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:44:41.552 11:26:48 blockdev_crypto_qat -- bdev/blockdev.sh@24 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:44:41.552 11:26:48 blockdev_crypto_qat -- bdev/blockdev.sh@26 -- # [[ crypto_qat == rbd ]] 00:44:41.552 11:26:48 blockdev_crypto_qat -- bdev/blockdev.sh@30 -- # [[ crypto_qat == daos ]] 00:44:41.552 11:26:48 blockdev_crypto_qat -- bdev/blockdev.sh@34 -- # [[ crypto_qat = \g\p\t ]] 00:44:41.552 11:26:48 blockdev_crypto_qat -- bdev/blockdev.sh@40 -- # [[ crypto_qat == xnvme ]] 00:44:41.552 00:44:41.552 real 1m49.036s 00:44:41.552 user 3m43.663s 00:44:41.552 sys 0m11.103s 00:44:41.552 11:26:48 blockdev_crypto_qat -- common/autotest_common.sh@1126 -- 
# xtrace_disable 00:44:41.552 11:26:48 blockdev_crypto_qat -- common/autotest_common.sh@10 -- # set +x 00:44:41.552 ************************************ 00:44:41.552 END TEST blockdev_crypto_qat 00:44:41.552 ************************************ 00:44:41.552 11:26:48 -- spdk/autotest.sh@364 -- # run_test chaining /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/chaining.sh 00:44:41.552 11:26:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:44:41.552 11:26:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:44:41.552 11:26:48 -- common/autotest_common.sh@10 -- # set +x 00:44:41.552 ************************************ 00:44:41.552 START TEST chaining 00:44:41.552 ************************************ 00:44:41.552 11:26:48 chaining -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/chaining.sh 00:44:41.811 * Looking for test storage... 00:44:41.811 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:44:41.811 11:26:48 chaining -- bdev/chaining.sh@14 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:44:41.811 11:26:48 chaining -- nvmf/common.sh@7 -- # uname -s 00:44:41.811 11:26:48 chaining -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:41.811 11:26:48 chaining -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:41.811 11:26:48 chaining -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:41.811 11:26:48 chaining -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:41.811 11:26:48 chaining -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:41.811 11:26:48 chaining -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:41.811 11:26:48 chaining -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:41.811 11:26:48 chaining -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:41.811 11:26:48 chaining -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:41.811 11:26:48 chaining -- nvmf/common.sh@17 -- # nvme 
gen-hostnqn 00:44:41.811 11:26:48 chaining -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bef996-69be-e711-906e-00163566263e 00:44:41.811 11:26:48 chaining -- nvmf/common.sh@18 -- # NVME_HOSTID=00bef996-69be-e711-906e-00163566263e 00:44:41.811 11:26:48 chaining -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:41.811 11:26:48 chaining -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:41.811 11:26:48 chaining -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:44:41.811 11:26:48 chaining -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:41.811 11:26:48 chaining -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:44:41.811 11:26:48 chaining -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:41.811 11:26:48 chaining -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:41.811 11:26:48 chaining -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:41.811 11:26:48 chaining -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:41.811 11:26:48 chaining -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:41.811 11:26:48 chaining -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:41.811 11:26:48 chaining -- paths/export.sh@5 -- # export PATH 00:44:41.811 11:26:48 chaining -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:41.811 11:26:48 chaining -- nvmf/common.sh@47 -- # : 0 00:44:41.811 11:26:48 chaining -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:44:41.811 11:26:48 chaining -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:44:41.811 11:26:48 chaining -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:41.811 11:26:48 chaining -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:41.811 11:26:48 chaining -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:41.811 11:26:48 chaining -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:44:41.811 11:26:48 chaining -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:44:41.811 11:26:48 chaining -- nvmf/common.sh@51 -- # have_pci_nics=0 00:44:41.811 11:26:48 chaining -- bdev/chaining.sh@16 -- # nqn=nqn.2016-06.io.spdk:cnode0 00:44:41.811 11:26:48 chaining -- bdev/chaining.sh@17 -- # key0=(00112233445566778899001122334455 11223344556677889900112233445500) 00:44:41.811 11:26:48 chaining -- bdev/chaining.sh@18 -- # key1=(22334455667788990011223344550011 
33445566778899001122334455001122) 00:44:41.811 11:26:48 chaining -- bdev/chaining.sh@19 -- # bperfsock=/var/tmp/bperf.sock 00:44:41.811 11:26:48 chaining -- bdev/chaining.sh@20 -- # declare -A stats 00:44:41.811 11:26:48 chaining -- bdev/chaining.sh@66 -- # nvmftestinit 00:44:41.811 11:26:48 chaining -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:44:41.811 11:26:48 chaining -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:41.811 11:26:48 chaining -- nvmf/common.sh@448 -- # prepare_net_devs 00:44:41.811 11:26:48 chaining -- nvmf/common.sh@410 -- # local -g is_hw=no 00:44:41.811 11:26:48 chaining -- nvmf/common.sh@412 -- # remove_spdk_ns 00:44:41.811 11:26:48 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:41.811 11:26:48 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:41.811 11:26:48 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:41.811 11:26:48 chaining -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:44:41.811 11:26:48 chaining -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:44:41.811 11:26:48 chaining -- nvmf/common.sh@285 -- # xtrace_disable 00:44:41.811 11:26:48 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@291 -- # pci_devs=() 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@291 -- # local -a pci_devs 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@292 -- # pci_net_devs=() 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@293 -- # pci_drivers=() 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@293 -- # local -A pci_drivers 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@295 -- # net_devs=() 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@295 -- # local -ga net_devs 00:44:51.786 11:26:57 
chaining -- nvmf/common.sh@296 -- # e810=() 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@296 -- # local -ga e810 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@297 -- # x722=() 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@297 -- # local -ga x722 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@298 -- # mlx=() 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@298 -- # local -ga mlx 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:44:51.786 11:26:57 chaining -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.0 (0x8086 - 0x159b)' 00:44:51.786 Found 0000:20:00.0 (0x8086 - 0x159b) 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.1 (0x8086 - 0x159b)' 00:44:51.786 Found 0000:20:00.1 (0x8086 - 0x159b) 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.0: cvl_0_0' 00:44:51.786 Found net devices under 0000:20:00.0: cvl_0_0 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@390 -- # [[ up == up ]] 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.1: cvl_0_1' 00:44:51.786 Found net devices under 0000:20:00.1: cvl_0_1 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@414 -- # is_hw=yes 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:51.786 11:26:57 chaining -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:44:51.786 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:51.786 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:44:51.786 00:44:51.786 --- 10.0.0.2 ping statistics --- 00:44:51.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:51.786 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:51.786 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:51.786 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:44:51.786 00:44:51.786 --- 10.0.0.1 ping statistics --- 00:44:51.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:51.786 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@422 -- # return 0 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:44:51.786 11:26:57 chaining -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:44:51.786 11:26:57 chaining -- bdev/chaining.sh@67 -- # nvmfappstart -m 0x2 00:44:51.787 11:26:57 chaining -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:44:51.787 11:26:57 chaining -- common/autotest_common.sh@724 -- # xtrace_disable 00:44:51.787 11:26:57 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:51.787 11:26:57 chaining -- nvmf/common.sh@481 -- # nvmfpid=3880304 00:44:51.787 11:26:57 chaining -- nvmf/common.sh@482 -- # waitforlisten 3880304 00:44:51.787 11:26:57 chaining -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:44:51.787 11:26:57 chaining -- common/autotest_common.sh@831 -- # '[' -z 3880304 ']' 00:44:51.787 11:26:57 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:51.787 11:26:57 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:44:51.787 11:26:57 chaining -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:51.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:51.787 11:26:57 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:44:51.787 11:26:57 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:51.787 [2024-07-25 11:26:57.686372] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:44:51.787 [2024-07-25 11:26:57.686492] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:51.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:51.787 EAL: Requested device 0000:3d:01.0 cannot be used 00:44:51.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:51.787 EAL: Requested device 0000:3d:01.1 cannot be used 00:44:51.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:51.787 EAL: Requested device 0000:3d:01.2 cannot be used 00:44:51.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:51.787 EAL: Requested device 0000:3d:01.3 cannot be used 00:44:51.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:51.787 EAL: Requested device 0000:3d:01.4 cannot be used 00:44:51.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:51.787 EAL: Requested device 0000:3d:01.5 cannot be used 00:44:51.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:51.787 EAL: Requested device 0000:3d:01.6 cannot be used 00:44:51.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:51.787 EAL: Requested device 0000:3d:01.7 cannot be used 00:44:51.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 
00:44:51.787 EAL: Requested device 0000:3d:02.0 cannot be used 00:44:51.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:51.787 EAL: Requested device 0000:3d:02.1 cannot be used 00:44:51.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:51.787 EAL: Requested device 0000:3d:02.2 cannot be used 00:44:51.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:51.787 EAL: Requested device 0000:3d:02.3 cannot be used 00:44:51.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:51.787 EAL: Requested device 0000:3d:02.4 cannot be used 00:44:51.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:51.787 EAL: Requested device 0000:3d:02.5 cannot be used 00:44:51.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:51.787 EAL: Requested device 0000:3d:02.6 cannot be used 00:44:51.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:51.787 EAL: Requested device 0000:3d:02.7 cannot be used 00:44:51.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:51.787 EAL: Requested device 0000:3f:01.0 cannot be used 00:44:51.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:51.787 EAL: Requested device 0000:3f:01.1 cannot be used 00:44:51.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:51.787 EAL: Requested device 0000:3f:01.2 cannot be used 00:44:51.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:51.787 EAL: Requested device 0000:3f:01.3 cannot be used 00:44:51.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:51.787 EAL: Requested device 0000:3f:01.4 cannot be used 00:44:51.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:51.787 EAL: Requested device 0000:3f:01.5 cannot be used 00:44:51.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:51.787 EAL: 
Requested device 0000:3f:01.6 cannot be used 00:44:51.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:51.787 EAL: Requested device 0000:3f:01.7 cannot be used 00:44:51.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:51.787 EAL: Requested device 0000:3f:02.0 cannot be used 00:44:51.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:51.787 EAL: Requested device 0000:3f:02.1 cannot be used 00:44:51.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:51.787 EAL: Requested device 0000:3f:02.2 cannot be used 00:44:51.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:51.787 EAL: Requested device 0000:3f:02.3 cannot be used 00:44:51.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:51.787 EAL: Requested device 0000:3f:02.4 cannot be used 00:44:51.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:51.787 EAL: Requested device 0000:3f:02.5 cannot be used 00:44:51.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:51.787 EAL: Requested device 0000:3f:02.6 cannot be used 00:44:51.787 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:51.787 EAL: Requested device 0000:3f:02.7 cannot be used 00:44:51.787 [2024-07-25 11:26:57.911452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:51.787 [2024-07-25 11:26:58.187776] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:51.787 [2024-07-25 11:26:58.187830] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:51.787 [2024-07-25 11:26:58.187850] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:51.787 [2024-07-25 11:26:58.187865] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:44:51.787 [2024-07-25 11:26:58.187881] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:51.787 [2024-07-25 11:26:58.187928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:44:51.787 11:26:58 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:44:51.787 11:26:58 chaining -- common/autotest_common.sh@864 -- # return 0 00:44:51.787 11:26:58 chaining -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:44:51.787 11:26:58 chaining -- common/autotest_common.sh@730 -- # xtrace_disable 00:44:51.787 11:26:58 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:51.787 11:26:58 chaining -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:51.787 11:26:58 chaining -- bdev/chaining.sh@69 -- # mktemp 00:44:51.787 11:26:58 chaining -- bdev/chaining.sh@69 -- # input=/tmp/tmp.Ji1EiegjBg 00:44:51.787 11:26:58 chaining -- bdev/chaining.sh@69 -- # mktemp 00:44:51.787 11:26:58 chaining -- bdev/chaining.sh@69 -- # output=/tmp/tmp.oFMFJrdDK7 00:44:51.787 11:26:58 chaining -- bdev/chaining.sh@70 -- # trap 'tgtcleanup; exit 1' SIGINT SIGTERM EXIT 00:44:51.787 11:26:58 chaining -- bdev/chaining.sh@72 -- # rpc_cmd 00:44:51.787 11:26:58 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:51.787 11:26:58 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:51.787 malloc0 00:44:51.787 true 00:44:51.787 true 00:44:51.787 [2024-07-25 11:26:58.818238] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:44:51.787 crypto0 00:44:51.787 [2024-07-25 11:26:58.826257] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:44:51.787 crypto1 00:44:51.787 [2024-07-25 11:26:58.834423] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:51.787 [2024-07-25 11:26:58.850648] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:44:51.787 11:26:58 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:51.787 11:26:58 chaining -- bdev/chaining.sh@85 -- # update_stats 00:44:51.787 11:26:58 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:44:51.787 11:26:58 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:51.787 11:26:58 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:44:51.787 11:26:58 chaining -- bdev/chaining.sh@39 -- # opcode= 00:44:51.787 11:26:58 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:51.787 11:26:58 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:44:51.787 11:26:58 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:44:51.787 11:26:58 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:44:51.787 11:26:58 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:51.787 11:26:58 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:51.787 11:26:58 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:52.078 11:26:58 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=12 00:44:52.078 11:26:58 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:44:52.078 11:26:58 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:52.078 11:26:58 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:52.078 11:26:58 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:44:52.078 11:26:58 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:52.078 11:26:58 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:44:52.078 11:26:58 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:52.078 11:26:58 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:44:52.078 11:26:58 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:52.078 11:26:58 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:52.078 11:26:58 chaining -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:44:52.078 11:26:58 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]= 00:44:52.078 11:26:58 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:44:52.078 11:26:58 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:52.078 11:26:58 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:52.078 11:26:58 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:44:52.078 11:26:58 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:52.078 11:26:58 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:44:52.078 11:26:58 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:52.078 11:26:58 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:44:52.078 11:26:58 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:52.078 11:26:58 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:52.078 11:26:58 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:52.078 11:26:59 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=12 00:44:52.078 11:26:59 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:44:52.078 11:26:59 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:52.078 11:26:59 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:52.078 11:26:59 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:44:52.078 11:26:59 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:52.078 11:26:59 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:44:52.078 11:26:59 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:44:52.078 11:26:59 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:52.078 11:26:59 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:52.078 11:26:59 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:52.078 11:26:59 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:44:52.078 11:26:59 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:44:52.078 11:26:59 chaining -- bdev/chaining.sh@88 -- # dd if=/dev/urandom of=/tmp/tmp.Ji1EiegjBg bs=1K count=64 00:44:52.078 64+0 records in 00:44:52.078 64+0 records out 00:44:52.078 65536 bytes (66 kB, 64 KiB) copied, 0.00105747 s, 62.0 MB/s 00:44:52.078 11:26:59 chaining -- bdev/chaining.sh@89 -- # spdk_dd --if /tmp/tmp.Ji1EiegjBg --ob Nvme0n1 --bs 65536 --count 1 00:44:52.078 11:26:59 chaining -- bdev/chaining.sh@25 -- # local config 00:44:52.078 11:26:59 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:44:52.078 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:44:52.078 11:26:59 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:44:52.078 11:26:59 chaining -- bdev/chaining.sh@31 -- # config='{ 00:44:52.078 "subsystems": [ 00:44:52.078 { 00:44:52.078 "subsystem": "bdev", 00:44:52.078 "config": [ 00:44:52.078 { 00:44:52.078 "method": "bdev_nvme_attach_controller", 00:44:52.078 "params": { 00:44:52.078 "trtype": "tcp", 00:44:52.078 "adrfam": "IPv4", 00:44:52.078 "name": "Nvme0", 00:44:52.078 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:52.078 "traddr": "10.0.0.2", 00:44:52.078 "trsvcid": "4420" 00:44:52.078 } 00:44:52.078 }, 00:44:52.078 { 00:44:52.078 "method": "bdev_set_options", 00:44:52.078 "params": { 00:44:52.078 "bdev_auto_examine": false 00:44:52.078 } 00:44:52.078 } 00:44:52.078 ] 00:44:52.078 } 00:44:52.078 ] 00:44:52.078 }' 00:44:52.078 11:26:59 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /tmp/tmp.Ji1EiegjBg --ob Nvme0n1 --bs 65536 --count 1 00:44:52.078 11:26:59 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:44:52.078 "subsystems": [ 00:44:52.078 { 00:44:52.078 
"subsystem": "bdev", 00:44:52.078 "config": [ 00:44:52.078 { 00:44:52.078 "method": "bdev_nvme_attach_controller", 00:44:52.079 "params": { 00:44:52.079 "trtype": "tcp", 00:44:52.079 "adrfam": "IPv4", 00:44:52.079 "name": "Nvme0", 00:44:52.079 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:52.079 "traddr": "10.0.0.2", 00:44:52.079 "trsvcid": "4420" 00:44:52.079 } 00:44:52.079 }, 00:44:52.079 { 00:44:52.079 "method": "bdev_set_options", 00:44:52.079 "params": { 00:44:52.079 "bdev_auto_examine": false 00:44:52.079 } 00:44:52.079 } 00:44:52.079 ] 00:44:52.079 } 00:44:52.079 ] 00:44:52.079 }' 00:44:52.079 [2024-07-25 11:26:59.187618] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:44:52.079 [2024-07-25 11:26:59.187738] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3880568 ] 00:44:52.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:52.336 EAL: Requested device 0000:3d:01.0 cannot be used 00:44:52.336 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:52.337 EAL: Requested device 0000:3d:01.1 cannot be used 00:44:52.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:52.337 EAL: Requested device 0000:3d:01.2 cannot be used 00:44:52.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:52.337 EAL: Requested device 0000:3d:01.3 cannot be used 00:44:52.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:52.337 EAL: Requested device 0000:3d:01.4 cannot be used 00:44:52.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:52.337 EAL: Requested device 0000:3d:01.5 cannot be used 00:44:52.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:52.337 EAL: Requested device 0000:3d:01.6 cannot be used 
00:44:52.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:52.337 EAL: Requested device 0000:3d:01.7 cannot be used 00:44:52.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:52.337 EAL: Requested device 0000:3d:02.0 cannot be used 00:44:52.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:52.337 EAL: Requested device 0000:3d:02.1 cannot be used 00:44:52.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:52.337 EAL: Requested device 0000:3d:02.2 cannot be used 00:44:52.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:52.337 EAL: Requested device 0000:3d:02.3 cannot be used 00:44:52.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:52.337 EAL: Requested device 0000:3d:02.4 cannot be used 00:44:52.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:52.337 EAL: Requested device 0000:3d:02.5 cannot be used 00:44:52.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:52.337 EAL: Requested device 0000:3d:02.6 cannot be used 00:44:52.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:52.337 EAL: Requested device 0000:3d:02.7 cannot be used 00:44:52.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:52.337 EAL: Requested device 0000:3f:01.0 cannot be used 00:44:52.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:52.337 EAL: Requested device 0000:3f:01.1 cannot be used 00:44:52.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:52.337 EAL: Requested device 0000:3f:01.2 cannot be used 00:44:52.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:52.337 EAL: Requested device 0000:3f:01.3 cannot be used 00:44:52.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:52.337 EAL: Requested device 0000:3f:01.4 cannot be used 00:44:52.337 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:52.337 EAL: Requested device 0000:3f:01.5 cannot be used 00:44:52.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:52.337 EAL: Requested device 0000:3f:01.6 cannot be used 00:44:52.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:52.337 EAL: Requested device 0000:3f:01.7 cannot be used 00:44:52.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:52.337 EAL: Requested device 0000:3f:02.0 cannot be used 00:44:52.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:52.337 EAL: Requested device 0000:3f:02.1 cannot be used 00:44:52.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:52.337 EAL: Requested device 0000:3f:02.2 cannot be used 00:44:52.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:52.337 EAL: Requested device 0000:3f:02.3 cannot be used 00:44:52.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:52.337 EAL: Requested device 0000:3f:02.4 cannot be used 00:44:52.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:52.337 EAL: Requested device 0000:3f:02.5 cannot be used 00:44:52.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:52.337 EAL: Requested device 0000:3f:02.6 cannot be used 00:44:52.337 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:44:52.337 EAL: Requested device 0000:3f:02.7 cannot be used 00:44:52.337 [2024-07-25 11:26:59.412592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:52.595 [2024-07-25 11:26:59.687437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:44:55.421  Copying: 64/64 [kB] (average 15 MBps) 00:44:55.421 00:44:55.421 11:27:02 chaining -- bdev/chaining.sh@90 -- # get_stat sequence_executed 00:44:55.421 11:27:02 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:55.421 
11:27:02 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:44:55.421 11:27:02 chaining -- bdev/chaining.sh@39 -- # opcode= 00:44:55.421 11:27:02 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:55.421 11:27:02 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:44:55.421 11:27:02 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:44:55.421 11:27:02 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:44:55.421 11:27:02 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:55.421 11:27:02 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:55.421 11:27:02 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:55.421 11:27:02 chaining -- bdev/chaining.sh@90 -- # (( 13 == stats[sequence_executed] + 1 )) 00:44:55.421 11:27:02 chaining -- bdev/chaining.sh@91 -- # get_stat executed encrypt 00:44:55.421 11:27:02 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:55.421 11:27:02 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:55.421 11:27:02 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:44:55.421 11:27:02 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:55.421 11:27:02 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:44:55.421 11:27:02 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:44:55.422 11:27:02 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:55.422 11:27:02 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:55.422 11:27:02 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@91 -- # (( 2 == stats[encrypt_executed] + 2 )) 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@92 -- # get_stat executed decrypt 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:55.422 11:27:02 chaining -- 
bdev/chaining.sh@39 -- # event=executed 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:44:55.422 11:27:02 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:55.422 11:27:02 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:55.422 11:27:02 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@92 -- # (( 12 == stats[decrypt_executed] )) 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@95 -- # get_stat executed copy 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:44:55.422 11:27:02 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:55.422 11:27:02 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:55.422 11:27:02 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@95 -- # (( 4 == stats[copy_executed] )) 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@96 -- # update_stats 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@37 
-- # local event opcode rpc 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@39 -- # opcode= 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:44:55.422 11:27:02 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:55.422 11:27:02 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:55.422 11:27:02 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=13 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:44:55.422 11:27:02 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:55.422 11:27:02 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:55.422 11:27:02 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=2 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:55.422 11:27:02 
chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:44:55.422 11:27:02 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:55.422 11:27:02 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:55.422 11:27:02 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=12 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:55.422 11:27:02 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:55.422 11:27:02 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:55.422 11:27:02 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@99 -- # spdk_dd --of /tmp/tmp.oFMFJrdDK7 --ib Nvme0n1 --bs 65536 --count 1 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@25 -- # local config 00:44:55.422 
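The trace above keeps re-running the same two jq extractions against `accel_get_stats` output: the top-level `.sequence_executed` counter and a per-opcode `.operations[] | select(...).executed` counter. A minimal, self-contained sketch of those filters against a sample payload (the JSON shape is inferred from the jq filters in the log; any field beyond those filters is an assumption):

```shell
# Sample accel_get_stats payload; shape inferred from the jq filters in the
# trace above, so field names beyond those filters are assumptions.
stats_json='{
  "sequence_executed": 13,
  "operations": [
    {"opcode": "encrypt", "executed": 2},
    {"opcode": "decrypt", "executed": 12},
    {"opcode": "copy",    "executed": 4}
  ]
}'

# Top-level counter, as in: rpc_cmd accel_get_stats | jq -r .sequence_executed
seq_executed=$(jq -r '.sequence_executed' <<<"$stats_json")

# Per-opcode counter, as in:
#   jq -r '.operations[] | select(.opcode == "decrypt").executed'
decrypt_executed=$(jq -r \
  '.operations[] | select(.opcode == "decrypt").executed' <<<"$stats_json")

echo "sequence_executed=$seq_executed decrypt_executed=$decrypt_executed"
```

The test snapshots these values with `update_stats` into the `stats` associative array and, after each `spdk_dd` pass, asserts the expected delta, e.g. `(( 14 == stats[decrypt_executed] + 2 ))`.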
11:27:02 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:44:55.422 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:44:55.422 11:27:02 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:44:55.680 11:27:02 chaining -- bdev/chaining.sh@31 -- # config='{ 00:44:55.680 "subsystems": [ 00:44:55.680 { 00:44:55.680 "subsystem": "bdev", 00:44:55.680 "config": [ 00:44:55.680 { 00:44:55.680 "method": "bdev_nvme_attach_controller", 00:44:55.680 "params": { 00:44:55.680 "trtype": "tcp", 00:44:55.680 "adrfam": "IPv4", 00:44:55.680 "name": "Nvme0", 00:44:55.680 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:55.680 "traddr": "10.0.0.2", 00:44:55.680 "trsvcid": "4420" 00:44:55.680 } 00:44:55.680 }, 00:44:55.680 { 00:44:55.680 "method": "bdev_set_options", 00:44:55.680 "params": { 00:44:55.680 "bdev_auto_examine": false 00:44:55.680 } 00:44:55.680 } 00:44:55.680 ] 00:44:55.680 } 00:44:55.680 ] 00:44:55.680 }' 00:44:55.680 11:27:02 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:44:55.680 "subsystems": [ 00:44:55.680 { 00:44:55.680 "subsystem": "bdev", 00:44:55.680 "config": [ 00:44:55.680 { 00:44:55.680 "method": "bdev_nvme_attach_controller", 00:44:55.680 "params": { 00:44:55.680 "trtype": "tcp", 00:44:55.680 "adrfam": "IPv4", 00:44:55.680 "name": "Nvme0", 00:44:55.680 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:55.680 "traddr": "10.0.0.2", 00:44:55.680 "trsvcid": "4420" 00:44:55.680 } 00:44:55.680 }, 00:44:55.680 { 00:44:55.680 "method": "bdev_set_options", 00:44:55.680 "params": { 00:44:55.680 "bdev_auto_examine": false 00:44:55.680 } 00:44:55.680 } 00:44:55.680 ] 00:44:55.680 } 00:44:55.680 ] 00:44:55.680 }' 00:44:55.680 11:27:02 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --of 
/tmp/tmp.oFMFJrdDK7 --ib Nvme0n1 --bs 65536 --count 1
00:44:55.680 [2024-07-25 11:27:02.665959] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:44:55.680 [2024-07-25 11:27:02.666071] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3881309 ]
00:44:55.937 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:55.937 EAL: Requested device 0000:3d:01.0 cannot be used
[... same message pair repeated for each remaining QAT virtual function, 0000:3d:01.1 through 0000:3f:02.7 ...]
00:44:55.938 [2024-07-25 11:27:02.890759] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:44:56.195 [2024-07-25 11:27:03.162888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:44:58.655  Copying: 64/64 [kB] (average 20 MBps)
00:44:58.655 11:27:05 chaining -- bdev/chaining.sh@100 -- # get_stat sequence_executed
00:44:58.655 11:27:05 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc
00:44:58.655 11:27:05 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed
00:44:58.655 11:27:05 chaining -- bdev/chaining.sh@39 -- # opcode=
00:44:58.655 11:27:05 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd
00:44:58.655 11:27:05 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]]
00:44:58.655 11:27:05 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats
00:44:58.655 11:27:05 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed
00:44:58.655 11:27:05 chaining -- common/autotest_common.sh@561 -- # xtrace_disable
11:27:05 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:58.655 11:27:05 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:58.655 11:27:05 chaining -- bdev/chaining.sh@100 -- # (( 14 == stats[sequence_executed] + 1 )) 00:44:58.655 11:27:05 chaining -- bdev/chaining.sh@101 -- # get_stat executed encrypt 00:44:58.655 11:27:05 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:58.655 11:27:05 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:58.655 11:27:05 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:44:58.655 11:27:05 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:58.655 11:27:05 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:44:58.655 11:27:05 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:58.655 11:27:05 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:44:58.655 11:27:05 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:58.655 11:27:05 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:58.655 11:27:05 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:58.655 11:27:05 chaining -- bdev/chaining.sh@101 -- # (( 2 == stats[encrypt_executed] )) 00:44:58.655 11:27:05 chaining -- bdev/chaining.sh@102 -- # get_stat executed decrypt 00:44:58.655 11:27:05 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:58.655 11:27:05 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:58.655 11:27:05 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:44:58.655 11:27:05 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:58.655 11:27:05 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:44:58.655 11:27:05 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:58.655 11:27:05 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:58.655 11:27:05 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:58.655 11:27:05 chaining -- 
bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:44:58.655 11:27:05 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:58.655 11:27:05 chaining -- bdev/chaining.sh@102 -- # (( 14 == stats[decrypt_executed] + 2 )) 00:44:58.655 11:27:05 chaining -- bdev/chaining.sh@103 -- # get_stat executed copy 00:44:58.655 11:27:05 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:44:58.655 11:27:05 chaining -- bdev/chaining.sh@39 -- # event=executed 00:44:58.655 11:27:05 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:44:58.655 11:27:05 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:44:58.655 11:27:05 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:44:58.655 11:27:05 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:44:58.655 11:27:05 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:44:58.655 11:27:05 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:44:58.655 11:27:05 chaining -- common/autotest_common.sh@10 -- # set +x 00:44:58.913 11:27:05 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:44:58.913 11:27:05 chaining -- bdev/chaining.sh@103 -- # (( 4 == stats[copy_executed] )) 00:44:58.913 11:27:05 chaining -- bdev/chaining.sh@104 -- # cmp /tmp/tmp.Ji1EiegjBg /tmp/tmp.oFMFJrdDK7 00:44:58.913 11:27:05 chaining -- bdev/chaining.sh@105 -- # spdk_dd --if /dev/zero --ob Nvme0n1 --bs 65536 --count 1 00:44:58.913 11:27:05 chaining -- bdev/chaining.sh@25 -- # local config 00:44:58.913 11:27:05 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:44:58.913 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:44:58.913 11:27:05 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:44:58.913 11:27:05 
chaining -- bdev/chaining.sh@31 -- # config='{ 00:44:58.913 "subsystems": [ 00:44:58.913 { 00:44:58.913 "subsystem": "bdev", 00:44:58.913 "config": [ 00:44:58.913 { 00:44:58.913 "method": "bdev_nvme_attach_controller", 00:44:58.913 "params": { 00:44:58.913 "trtype": "tcp", 00:44:58.913 "adrfam": "IPv4", 00:44:58.913 "name": "Nvme0", 00:44:58.913 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:58.913 "traddr": "10.0.0.2", 00:44:58.913 "trsvcid": "4420" 00:44:58.913 } 00:44:58.913 }, 00:44:58.913 { 00:44:58.913 "method": "bdev_set_options", 00:44:58.913 "params": { 00:44:58.913 "bdev_auto_examine": false 00:44:58.913 } 00:44:58.913 } 00:44:58.913 ] 00:44:58.913 } 00:44:58.913 ] 00:44:58.913 }' 00:44:58.913 11:27:05 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /dev/zero --ob Nvme0n1 --bs 65536 --count 1 00:44:58.913 11:27:05 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:44:58.913 "subsystems": [ 00:44:58.913 { 00:44:58.913 "subsystem": "bdev", 00:44:58.914 "config": [ 00:44:58.914 { 00:44:58.914 "method": "bdev_nvme_attach_controller", 00:44:58.914 "params": { 00:44:58.914 "trtype": "tcp", 00:44:58.914 "adrfam": "IPv4", 00:44:58.914 "name": "Nvme0", 00:44:58.914 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:58.914 "traddr": "10.0.0.2", 00:44:58.914 "trsvcid": "4420" 00:44:58.914 } 00:44:58.914 }, 00:44:58.914 { 00:44:58.914 "method": "bdev_set_options", 00:44:58.914 "params": { 00:44:58.914 "bdev_auto_examine": false 00:44:58.914 } 00:44:58.914 } 00:44:58.914 ] 00:44:58.914 } 00:44:58.914 ] 00:44:58.914 }' 00:44:58.914 [2024-07-25 11:27:05.959889] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
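Each spdk_dd invocation above builds its config by piping the gen_nvme.sh output through the same jq filter (chaining.sh@32), which appends a `bdev_set_options` entry disabling bdev auto-examine. A sketch of that append idiom on a cut-down config (the attach-controller params are copied from the log; the stand-in string is illustrative, not the real gen_nvme.sh output):

```shell
# Stand-in for the gen_nvme.sh --mode=remote --json-with-subsystems output
# seen in the trace (illustrative; the real script generates this JSON).
config='{"subsystems":[{"subsystem":"bdev","config":[
  {"method":"bdev_nvme_attach_controller",
   "params":{"trtype":"tcp","adrfam":"IPv4","name":"Nvme0",
             "subnqn":"nqn.2016-06.io.spdk:cnode0",
             "traddr":"10.0.0.2","trsvcid":"4420"}}]}]}'

# Same filter as chaining.sh@32: indexing the config array at its own length
# appends one element, here the entry that turns off bdev auto-examine.
config=$(jq '.subsystems[0].config[.subsystems[0].config | length] |=
  {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' <<<"$config")

# chaining.sh@33 then echoes this JSON into spdk_dd via a /dev/fd/62 pipe.
appended=$(jq -r '.subsystems[0].config[1].method' <<<"$config")
echo "$appended"
```

Auto-examine is disabled so that spdk_dd sees the raw crypto bdev rather than whatever logical volumes an examine pass might claim on top of it.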
00:44:58.914 [2024-07-25 11:27:05.960005] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3882247 ]
00:44:59.171 qat_pci_device_allocate(): Reached maximum number of QAT devices
00:44:59.171 EAL: Requested device 0000:3d:01.0 cannot be used
[... same message pair repeated for each remaining QAT virtual function, 0000:3d:01.1 through 0000:3f:02.7 ...]
00:44:59.172 [2024-07-25 11:27:06.186077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:44:59.430 [2024-07-25 11:27:06.477342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:45:02.267  Copying: 64/64 [kB] (average 15 MBps)
00:45:02.267 11:27:08 chaining -- bdev/chaining.sh@106 -- # update_stats
00:45:02.267 11:27:08 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed
00:45:02.267 11:27:08 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc
00:45:02.267 11:27:08 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed
00:45:02.267 11:27:08 chaining -- bdev/chaining.sh@39 -- # opcode=
00:45:02.267 11:27:08 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd
00:45:02.267 11:27:08 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]]
00:45:02.267 11:27:08 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats
00:45:02.267 11:27:08 chaining -- common/autotest_common.sh@561 -- # xtrace_disable
00:45:02.267 11:27:08 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed
00:45:02.267 11:27:08 chaining -- common/autotest_common.sh@10 -- # set +x
00:45:02.267 11:27:08 chaining
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:02.267 11:27:08 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=15 00:45:02.267 11:27:08 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:45:02.267 11:27:08 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:02.268 11:27:08 chaining -- bdev/chaining.sh@39 -- # event=executed 00:45:02.268 11:27:08 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:45:02.268 11:27:08 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:45:02.268 11:27:08 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:45:02.268 11:27:08 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:45:02.268 11:27:08 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:45:02.268 11:27:08 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:02.268 11:27:08 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:02.268 11:27:09 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:02.268 11:27:09 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=4 00:45:02.268 11:27:09 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:45:02.268 11:27:09 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:02.268 11:27:09 chaining -- bdev/chaining.sh@39 -- # event=executed 00:45:02.268 11:27:09 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:45:02.268 11:27:09 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:45:02.268 11:27:09 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:45:02.268 11:27:09 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:45:02.268 11:27:09 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:45:02.268 11:27:09 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:02.268 11:27:09 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:02.268 11:27:09 chaining -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:02.268 11:27:09 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=14 00:45:02.268 11:27:09 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:45:02.268 11:27:09 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:02.268 11:27:09 chaining -- bdev/chaining.sh@39 -- # event=executed 00:45:02.268 11:27:09 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:45:02.268 11:27:09 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:45:02.268 11:27:09 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:45:02.268 11:27:09 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:45:02.268 11:27:09 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:45:02.268 11:27:09 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:02.268 11:27:09 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:02.268 11:27:09 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:02.268 11:27:09 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:45:02.268 11:27:09 chaining -- bdev/chaining.sh@109 -- # spdk_dd --if /tmp/tmp.Ji1EiegjBg --ob Nvme0n1 --bs 4096 --count 16 00:45:02.268 11:27:09 chaining -- bdev/chaining.sh@25 -- # local config 00:45:02.268 11:27:09 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:45:02.268 11:27:09 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:45:02.268 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:45:02.268 11:27:09 chaining -- bdev/chaining.sh@31 -- # config='{ 00:45:02.268 "subsystems": [ 00:45:02.268 { 00:45:02.268 "subsystem": "bdev", 00:45:02.268 "config": [ 00:45:02.268 { 00:45:02.268 "method": "bdev_nvme_attach_controller", 00:45:02.268 "params": 
{ 00:45:02.268 "trtype": "tcp", 00:45:02.268 "adrfam": "IPv4", 00:45:02.268 "name": "Nvme0", 00:45:02.268 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:02.268 "traddr": "10.0.0.2", 00:45:02.268 "trsvcid": "4420" 00:45:02.268 } 00:45:02.268 }, 00:45:02.268 { 00:45:02.268 "method": "bdev_set_options", 00:45:02.268 "params": { 00:45:02.268 "bdev_auto_examine": false 00:45:02.268 } 00:45:02.268 } 00:45:02.268 ] 00:45:02.268 } 00:45:02.268 ] 00:45:02.268 }' 00:45:02.268 11:27:09 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --if /tmp/tmp.Ji1EiegjBg --ob Nvme0n1 --bs 4096 --count 16 00:45:02.268 11:27:09 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:45:02.268 "subsystems": [ 00:45:02.268 { 00:45:02.268 "subsystem": "bdev", 00:45:02.268 "config": [ 00:45:02.268 { 00:45:02.268 "method": "bdev_nvme_attach_controller", 00:45:02.268 "params": { 00:45:02.268 "trtype": "tcp", 00:45:02.268 "adrfam": "IPv4", 00:45:02.268 "name": "Nvme0", 00:45:02.268 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:02.268 "traddr": "10.0.0.2", 00:45:02.268 "trsvcid": "4420" 00:45:02.268 } 00:45:02.268 }, 00:45:02.268 { 00:45:02.268 "method": "bdev_set_options", 00:45:02.268 "params": { 00:45:02.268 "bdev_auto_examine": false 00:45:02.268 } 00:45:02.268 } 00:45:02.268 ] 00:45:02.268 } 00:45:02.268 ] 00:45:02.268 }' 00:45:02.268 [2024-07-25 11:27:09.283516] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
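The @104/@105/@109 steps above form a round-trip check: write data through the Nvme0n1 crypto bdev with spdk_dd, read it back, and `cmp` the result against the original. The shape of that check, sketched with plain `dd` on local temp files so it runs without an SPDK target (the `$dev` file stands in for the bdev; block size and count mirror the 64 KiB pass in the log):

```shell
# Round-trip structure of the chaining test, with a local file standing in
# for the Nvme0n1 crypto bdev (so no SPDK target is needed to run this).
src=$(mktemp); dst=$(mktemp); dev=$(mktemp)

# Test payload, same size as the 64 KiB pass in the log.
dd if=/dev/urandom of="$src" bs=65536 count=1 status=none

# Write phase, as in: spdk_dd --if "$src" --ob Nvme0n1 --bs 65536 --count 1
dd if="$src" of="$dev" bs=65536 count=1 status=none
# Read-back phase, as in: spdk_dd --of "$dst" --ib Nvme0n1 --bs 65536 --count 1
dd if="$dev" of="$dst" bs=65536 count=1 status=none

# chaining.sh@104: the test passes only if the read-back matches the source.
if cmp -s "$src" "$dst"; then result=ok; else result=mismatch; fi
echo "$result"
rm -f "$src" "$dst" "$dev"
```

Against the real crypto bdev, the write phase encrypts and the read phase decrypts, so a clean `cmp` exercises the whole encrypt/decrypt accel chain rather than a plain copy.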
00:45:02.268 [2024-07-25 11:27:09.283629] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3882804 ] 00:45:02.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.526 EAL: Requested device 0000:3d:01.0 cannot be used 00:45:02.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.526 EAL: Requested device 0000:3d:01.1 cannot be used 00:45:02.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.526 EAL: Requested device 0000:3d:01.2 cannot be used 00:45:02.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.526 EAL: Requested device 0000:3d:01.3 cannot be used 00:45:02.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.526 EAL: Requested device 0000:3d:01.4 cannot be used 00:45:02.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.526 EAL: Requested device 0000:3d:01.5 cannot be used 00:45:02.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.526 EAL: Requested device 0000:3d:01.6 cannot be used 00:45:02.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.526 EAL: Requested device 0000:3d:01.7 cannot be used 00:45:02.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.526 EAL: Requested device 0000:3d:02.0 cannot be used 00:45:02.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.526 EAL: Requested device 0000:3d:02.1 cannot be used 00:45:02.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.526 EAL: Requested device 0000:3d:02.2 cannot be used 00:45:02.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.526 EAL: Requested device 0000:3d:02.3 cannot be used 
00:45:02.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.526 EAL: Requested device 0000:3d:02.4 cannot be used 00:45:02.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.526 EAL: Requested device 0000:3d:02.5 cannot be used 00:45:02.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.526 EAL: Requested device 0000:3d:02.6 cannot be used 00:45:02.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.526 EAL: Requested device 0000:3d:02.7 cannot be used 00:45:02.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.526 EAL: Requested device 0000:3f:01.0 cannot be used 00:45:02.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.526 EAL: Requested device 0000:3f:01.1 cannot be used 00:45:02.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.526 EAL: Requested device 0000:3f:01.2 cannot be used 00:45:02.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.526 EAL: Requested device 0000:3f:01.3 cannot be used 00:45:02.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.526 EAL: Requested device 0000:3f:01.4 cannot be used 00:45:02.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.526 EAL: Requested device 0000:3f:01.5 cannot be used 00:45:02.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.526 EAL: Requested device 0000:3f:01.6 cannot be used 00:45:02.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.526 EAL: Requested device 0000:3f:01.7 cannot be used 00:45:02.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.526 EAL: Requested device 0000:3f:02.0 cannot be used 00:45:02.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.526 EAL: Requested device 0000:3f:02.1 cannot be used 00:45:02.526 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.526 EAL: Requested device 0000:3f:02.2 cannot be used 00:45:02.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.526 EAL: Requested device 0000:3f:02.3 cannot be used 00:45:02.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.526 EAL: Requested device 0000:3f:02.4 cannot be used 00:45:02.526 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.527 EAL: Requested device 0000:3f:02.5 cannot be used 00:45:02.527 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.527 EAL: Requested device 0000:3f:02.6 cannot be used 00:45:02.527 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:02.527 EAL: Requested device 0000:3f:02.7 cannot be used 00:45:02.527 [2024-07-25 11:27:09.509365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:02.784 [2024-07-25 11:27:09.798770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:05.263  Copying: 64/64 [kB] (average 15 MBps) 00:45:05.263 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@110 -- # get_stat sequence_executed 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@39 -- # opcode= 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:45:05.263 11:27:12 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:05.263 11:27:12 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:05.263 11:27:12 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:05.263 11:27:12 
chaining -- bdev/chaining.sh@110 -- # (( 31 == stats[sequence_executed] + 16 )) 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@111 -- # get_stat executed encrypt 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@39 -- # event=executed 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:45:05.263 11:27:12 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:05.263 11:27:12 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:05.263 11:27:12 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@111 -- # (( 36 == stats[encrypt_executed] + 32 )) 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@112 -- # get_stat executed decrypt 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@39 -- # event=executed 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:45:05.263 11:27:12 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:05.263 11:27:12 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:05.263 11:27:12 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@112 -- # (( 14 == stats[decrypt_executed] )) 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@113 -- # get_stat executed copy 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@39 -- # event=executed 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:45:05.263 11:27:12 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:05.263 11:27:12 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:45:05.263 11:27:12 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@113 -- # (( 4 == stats[copy_executed] )) 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@114 -- # update_stats 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@51 -- # get_stat sequence_executed 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@39 -- # opcode= 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:45:05.263 11:27:12 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:05.263 11:27:12 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:05.263 11:27:12 chaining -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@51 -- # stats["sequence_executed"]=31 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@52 -- # get_stat executed encrypt 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@39 -- # event=executed 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:45:05.263 11:27:12 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:45:05.263 11:27:12 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:05.263 11:27:12 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:05.263 11:27:12 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:05.522 11:27:12 chaining -- bdev/chaining.sh@52 -- # stats["encrypt_executed"]=36 00:45:05.522 11:27:12 chaining -- bdev/chaining.sh@53 -- # get_stat executed decrypt 00:45:05.522 11:27:12 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:05.522 11:27:12 chaining -- bdev/chaining.sh@39 -- # event=executed 00:45:05.522 11:27:12 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:45:05.523 11:27:12 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:45:05.523 11:27:12 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:45:05.523 11:27:12 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:45:05.523 11:27:12 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:45:05.523 11:27:12 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:05.523 11:27:12 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:05.523 11:27:12 chaining -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:05.523 11:27:12 chaining -- bdev/chaining.sh@53 -- # stats["decrypt_executed"]=14 00:45:05.523 11:27:12 chaining -- bdev/chaining.sh@54 -- # get_stat executed copy 00:45:05.523 11:27:12 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:05.523 11:27:12 chaining -- bdev/chaining.sh@39 -- # event=executed 00:45:05.523 11:27:12 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:45:05.523 11:27:12 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:45:05.523 11:27:12 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:45:05.523 11:27:12 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:45:05.523 11:27:12 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:05.523 11:27:12 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:05.523 11:27:12 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:45:05.523 11:27:12 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:05.523 11:27:12 chaining -- bdev/chaining.sh@54 -- # stats["copy_executed"]=4 00:45:05.523 11:27:12 chaining -- bdev/chaining.sh@117 -- # : 00:45:05.523 11:27:12 chaining -- bdev/chaining.sh@118 -- # spdk_dd --of /tmp/tmp.oFMFJrdDK7 --ib Nvme0n1 --bs 4096 --count 16 00:45:05.523 11:27:12 chaining -- bdev/chaining.sh@25 -- # local config 00:45:05.523 11:27:12 chaining -- bdev/chaining.sh@31 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --mode=remote --json-with-subsystems --trid=tcp:10.0.0.2:4420:nqn.2016-06.io.spdk:cnode0 00:45:05.523 11:27:12 chaining -- bdev/chaining.sh@32 -- # jq '.subsystems[0].config[.subsystems[0].config | length] |= 00:45:05.523 {"method": "bdev_set_options", "params": {"bdev_auto_examine": false}}' 00:45:05.523 11:27:12 chaining -- bdev/chaining.sh@31 -- # config='{ 00:45:05.523 "subsystems": [ 00:45:05.523 { 00:45:05.523 "subsystem": "bdev", 00:45:05.523 "config": [ 00:45:05.523 { 00:45:05.523 
"method": "bdev_nvme_attach_controller", 00:45:05.523 "params": { 00:45:05.523 "trtype": "tcp", 00:45:05.523 "adrfam": "IPv4", 00:45:05.523 "name": "Nvme0", 00:45:05.523 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:05.523 "traddr": "10.0.0.2", 00:45:05.523 "trsvcid": "4420" 00:45:05.523 } 00:45:05.523 }, 00:45:05.523 { 00:45:05.523 "method": "bdev_set_options", 00:45:05.523 "params": { 00:45:05.523 "bdev_auto_examine": false 00:45:05.523 } 00:45:05.523 } 00:45:05.523 ] 00:45:05.523 } 00:45:05.523 ] 00:45:05.523 }' 00:45:05.523 11:27:12 chaining -- bdev/chaining.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_dd -c /dev/fd/62 --of /tmp/tmp.oFMFJrdDK7 --ib Nvme0n1 --bs 4096 --count 16 00:45:05.523 11:27:12 chaining -- bdev/chaining.sh@33 -- # echo '{ 00:45:05.523 "subsystems": [ 00:45:05.523 { 00:45:05.523 "subsystem": "bdev", 00:45:05.523 "config": [ 00:45:05.523 { 00:45:05.523 "method": "bdev_nvme_attach_controller", 00:45:05.523 "params": { 00:45:05.523 "trtype": "tcp", 00:45:05.523 "adrfam": "IPv4", 00:45:05.523 "name": "Nvme0", 00:45:05.523 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:05.523 "traddr": "10.0.0.2", 00:45:05.523 "trsvcid": "4420" 00:45:05.523 } 00:45:05.523 }, 00:45:05.523 { 00:45:05.523 "method": "bdev_set_options", 00:45:05.523 "params": { 00:45:05.523 "bdev_auto_examine": false 00:45:05.523 } 00:45:05.523 } 00:45:05.523 ] 00:45:05.523 } 00:45:05.523 ] 00:45:05.523 }' 00:45:05.523 [2024-07-25 11:27:12.625105] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
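Both spdk_dd invocations above receive this JSON on `-c /dev/fd/62` rather than from a file on disk: the shell opens a descriptor, echoes the config into it, and spdk_dd reads it back through the `/dev/fd` pseudo-filesystem. A sketch of the same trick, assuming Linux `/dev/fd` semantics and using plain `cat` as a stand-in for spdk_dd:

```python
import json
import os
import subprocess

config = json.dumps({"subsystems": []})  # toy config, stands in for the real one

# Create a pipe; the read end is passed to the child, which opens it by
# path as /dev/fd/<N> -- the same mechanism as `spdk_dd -c /dev/fd/62`.
r, w = os.pipe()
proc = subprocess.Popen(["cat", f"/dev/fd/{r}"],
                        stdout=subprocess.PIPE, pass_fds=(r,))
os.close(r)                      # parent no longer needs the read end
os.write(w, config.encode())     # "echo" the config into the descriptor
os.close(w)                      # EOF so the child's read terminates
out, _ = proc.communicate()
print(out.decode())
```

This keeps the config entirely in memory, so no temporary config file has to be created or cleaned up alongside the test's data files.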
00:45:05.523 [2024-07-25 11:27:12.625227] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3883374 ] 00:45:05.782 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:05.782 EAL: Requested device 0000:3d:01.0 cannot be used (identical message pairs repeated for devices 0000:3d:01.1 through 0000:3f:02.7) 00:45:05.782 [2024-07-25 11:27:12.851862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:06.041 [2024-07-25 11:27:13.119764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:08.927  Copying: 64/64 [kB] (average 492 kBps) 00:45:08.927 00:45:08.927 11:27:15 chaining -- bdev/chaining.sh@119 -- # get_stat sequence_executed 00:45:08.927 11:27:15 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:08.927 11:27:15 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:45:08.927 11:27:15 chaining -- bdev/chaining.sh@39 -- # opcode= 00:45:08.927 11:27:15 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:45:08.927 11:27:15 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:45:08.927 11:27:15 chaining -- bdev/chaining.sh@41 -- # rpc_cmd accel_get_stats 00:45:08.927 11:27:15 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:08.927 11:27:15 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:45:08.927 11:27:15 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:08.927 11:27:15 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:08.927 11:27:15
chaining -- bdev/chaining.sh@119 -- # (( 47 == stats[sequence_executed] + 16 )) 00:45:08.927 11:27:15 chaining -- bdev/chaining.sh@120 -- # get_stat executed encrypt 00:45:08.927 11:27:15 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:08.927 11:27:15 chaining -- bdev/chaining.sh@39 -- # event=executed 00:45:08.927 11:27:15 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:45:08.927 11:27:15 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:45:08.927 11:27:15 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:45:08.927 11:27:15 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:45:08.927 11:27:15 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:45:08.927 11:27:15 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:08.927 11:27:15 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:08.927 11:27:15 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:08.927 11:27:15 chaining -- bdev/chaining.sh@120 -- # (( 36 == stats[encrypt_executed] )) 00:45:08.927 11:27:15 chaining -- bdev/chaining.sh@121 -- # get_stat executed decrypt 00:45:08.927 11:27:15 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:08.927 11:27:15 chaining -- bdev/chaining.sh@39 -- # event=executed 00:45:08.927 11:27:15 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:45:08.927 11:27:15 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:45:08.927 11:27:15 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:45:08.928 11:27:15 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:45:08.928 11:27:15 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:08.928 11:27:15 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:08.928 11:27:15 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:45:08.928 11:27:15 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:45:08.928 11:27:15 chaining -- bdev/chaining.sh@121 -- # (( 46 == stats[decrypt_executed] + 32 )) 00:45:08.928 11:27:15 chaining -- bdev/chaining.sh@122 -- # get_stat executed copy 00:45:08.928 11:27:15 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:08.928 11:27:15 chaining -- bdev/chaining.sh@39 -- # event=executed 00:45:08.928 11:27:15 chaining -- bdev/chaining.sh@39 -- # opcode=copy 00:45:08.928 11:27:15 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_cmd 00:45:08.928 11:27:15 chaining -- bdev/chaining.sh@40 -- # [[ -z copy ]] 00:45:08.928 11:27:15 chaining -- bdev/chaining.sh@43 -- # rpc_cmd accel_get_stats 00:45:08.928 11:27:15 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "copy").executed' 00:45:08.928 11:27:15 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:08.928 11:27:15 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:08.928 11:27:15 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:08.928 11:27:15 chaining -- bdev/chaining.sh@122 -- # (( 4 == stats[copy_executed] )) 00:45:08.928 11:27:15 chaining -- bdev/chaining.sh@123 -- # cmp /tmp/tmp.Ji1EiegjBg /tmp/tmp.oFMFJrdDK7 00:45:08.928 11:27:15 chaining -- bdev/chaining.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:45:08.928 11:27:15 chaining -- bdev/chaining.sh@126 -- # tgtcleanup 00:45:08.928 11:27:15 chaining -- bdev/chaining.sh@58 -- # rm -f /tmp/tmp.Ji1EiegjBg /tmp/tmp.oFMFJrdDK7 00:45:08.928 11:27:15 chaining -- bdev/chaining.sh@59 -- # nvmftestfini 00:45:08.928 11:27:15 chaining -- nvmf/common.sh@488 -- # nvmfcleanup 00:45:08.928 11:27:15 chaining -- nvmf/common.sh@117 -- # sync 00:45:08.928 11:27:15 chaining -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:45:08.928 11:27:15 chaining -- nvmf/common.sh@120 -- # set +e 00:45:08.928 11:27:15 chaining -- nvmf/common.sh@121 -- # for i in {1..20} 00:45:08.928 11:27:15 chaining -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:45:08.928 rmmod nvme_tcp 
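The arithmetic checks above assert fixed deltas in the accel statistics: transferring 16 blocks of 4096 B through the chained crypto bdevs adds 16 to `sequence_executed` and 32 to the active opcode's `executed` count (two crypto bdevs in the chain, so two operations per block), while `copy` stays flat. A sketch of that bookkeeping in Python, with before/after snapshots copied from the read-back numbers in this log:

```python
# accel_get_stats snapshots: "before" is after the 16-block write,
# "after" is after reading the same 16 blocks back (values from the log).
before = {"sequence_executed": 31, "operations": [
    {"opcode": "encrypt", "executed": 36},
    {"opcode": "decrypt", "executed": 14},
    {"opcode": "copy", "executed": 4}]}
after = {"sequence_executed": 47, "operations": [
    {"opcode": "encrypt", "executed": 36},
    {"opcode": "decrypt", "executed": 46},
    {"opcode": "copy", "executed": 4}]}

def executed(stats, opcode):
    # jq equivalent: .operations[] | select(.opcode == $opcode).executed
    return next(op["executed"] for op in stats["operations"]
                if op["opcode"] == opcode)

# Reading 16 blocks runs 16 more sequences and 32 more decrypts;
# encrypt and copy counts are untouched by the read path.
assert after["sequence_executed"] == before["sequence_executed"] + 16
assert executed(after, "decrypt") == executed(before, "decrypt") + 32
assert executed(after, "encrypt") == executed(before, "encrypt")
assert executed(after, "copy") == executed(before, "copy")
```

The write path is symmetric: the earlier checks `(( 31 == stats[sequence_executed] + 16 ))` and `(( 36 == stats[encrypt_executed] + 32 ))` apply the same deltas to `encrypt` instead of `decrypt`.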
00:45:08.928 rmmod nvme_fabrics 00:45:08.928 rmmod nvme_keyring 00:45:08.928 11:27:15 chaining -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:45:08.928 11:27:15 chaining -- nvmf/common.sh@124 -- # set -e 00:45:08.928 11:27:15 chaining -- nvmf/common.sh@125 -- # return 0 00:45:08.928 11:27:15 chaining -- nvmf/common.sh@489 -- # '[' -n 3880304 ']' 00:45:08.928 11:27:15 chaining -- nvmf/common.sh@490 -- # killprocess 3880304 00:45:08.928 11:27:15 chaining -- common/autotest_common.sh@950 -- # '[' -z 3880304 ']' 00:45:08.928 11:27:15 chaining -- common/autotest_common.sh@954 -- # kill -0 3880304 00:45:08.928 11:27:15 chaining -- common/autotest_common.sh@955 -- # uname 00:45:08.928 11:27:15 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:08.928 11:27:15 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3880304 00:45:08.928 11:27:15 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:45:08.928 11:27:15 chaining -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:45:08.928 11:27:15 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3880304' 00:45:08.928 killing process with pid 3880304 00:45:08.928 11:27:15 chaining -- common/autotest_common.sh@969 -- # kill 3880304 00:45:08.928 11:27:15 chaining -- common/autotest_common.sh@974 -- # wait 3880304 00:45:10.858 11:27:17 chaining -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:45:10.858 11:27:17 chaining -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:45:10.858 11:27:17 chaining -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:45:10.858 11:27:17 chaining -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:45:10.858 11:27:17 chaining -- nvmf/common.sh@278 -- # remove_spdk_ns 00:45:10.858 11:27:17 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:10.858 11:27:17 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 
00:45:10.858 11:27:17 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:12.766 11:27:19 chaining -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:45:12.766 11:27:19 chaining -- bdev/chaining.sh@129 -- # trap 'bperfcleanup; exit 1' SIGINT SIGTERM EXIT 00:45:12.766 11:27:19 chaining -- bdev/chaining.sh@132 -- # bperfpid=3884694 00:45:12.766 11:27:19 chaining -- bdev/chaining.sh@134 -- # waitforlisten 3884694 00:45:12.766 11:27:19 chaining -- bdev/chaining.sh@131 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:45:12.766 11:27:19 chaining -- common/autotest_common.sh@831 -- # '[' -z 3884694 ']' 00:45:12.766 11:27:19 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:45:12.766 11:27:19 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:12.766 11:27:19 chaining -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:45:12.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:45:12.766 11:27:19 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:12.766 11:27:19 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:12.766 [2024-07-25 11:27:19.880424] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:45:12.766 [2024-07-25 11:27:19.880545] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3884694 ] 00:45:13.026 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:13.026 EAL: Requested device 0000:3d:01.0 cannot be used (identical message pairs repeated for devices 0000:3d:01.1 through 0000:3f:02.7) 00:45:13.026 [2024-07-25 11:27:20.105936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:13.285 [2024-07-25 11:27:20.368253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:13.854 11:27:20 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:13.854 11:27:20 chaining -- common/autotest_common.sh@864 -- # return 0 00:45:13.854 11:27:20 chaining -- bdev/chaining.sh@135 -- # rpc_cmd 00:45:13.854 11:27:20 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:13.854 11:27:20 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:14.424 malloc0 00:45:14.424 true 00:45:14.424 true 00:45:14.424 [2024-07-25 11:27:21.341154] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:45:14.424 crypto0 00:45:14.424 [2024-07-25 11:27:21.349192] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1" 00:45:14.424 crypto1 00:45:14.424 11:27:21 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:14.424 11:27:21 chaining -- bdev/chaining.sh@145 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py
perform_tests
00:45:14.424 Running I/O for 5 seconds...
00:45:19.694
00:45:19.694 Latency(us)
00:45:19.694 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:45:19.694 Job: crypto1 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096)
00:45:19.694 Verification LBA range: start 0x0 length 0x2000
00:45:19.694 crypto1 : 5.01 11394.49 44.51 0.00 0.00 22405.57 6265.24 14470.35
00:45:19.694 ===================================================================================================================
00:45:19.694 Total : 11394.49 44.51 0.00 0.00 22405.57 6265.24 14470.35
00:45:19.694 0
00:45:19.694 11:27:26 chaining -- bdev/chaining.sh@146 -- # killprocess 3884694
00:45:19.694 11:27:26 chaining -- common/autotest_common.sh@950 -- # '[' -z 3884694 ']'
00:45:19.694 11:27:26 chaining -- common/autotest_common.sh@954 -- # kill -0 3884694
00:45:19.694 11:27:26 chaining -- common/autotest_common.sh@955 -- # uname
00:45:19.694 11:27:26 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:45:19.694 11:27:26 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3884694
00:45:19.694 11:27:26 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:45:19.694 11:27:26 chaining -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:45:19.694 11:27:26 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3884694'
killing process with pid 3884694
00:45:19.694 11:27:26 chaining -- common/autotest_common.sh@969 -- # kill 3884694
00:45:19.694 Received shutdown signal, test time was about 5.000000 seconds
00:45:19.694
00:45:19.694 Latency(us)
00:45:19.694 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:45:19.694 ===================================================================================================================
00:45:19.694 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:45:19.694 11:27:26 chaining -- common/autotest_common.sh@974 -- # wait 3884694
00:45:21.598 11:27:28 chaining -- bdev/chaining.sh@152 -- # bperfpid=3886007
00:45:21.598 11:27:28 chaining -- bdev/chaining.sh@151 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z
00:45:21.598 11:27:28 chaining -- bdev/chaining.sh@154 -- # waitforlisten 3886007
00:45:21.598 11:27:28 chaining -- common/autotest_common.sh@831 -- # '[' -z 3886007 ']'
00:45:21.598 11:27:28 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:45:21.598 11:27:28 chaining -- common/autotest_common.sh@836 -- # local max_retries=100
00:45:21.598 11:27:28 chaining -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:45:21.598 11:27:28 chaining -- common/autotest_common.sh@840 -- # xtrace_disable
00:45:21.598 11:27:28 chaining -- common/autotest_common.sh@10 -- # set +x
00:45:21.598 [2024-07-25 11:27:28.441242] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:45:21.598 [2024-07-25 11:27:28.441371] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3886007 ]
00:45:21.598 [2024-07-25 11:27:28.666571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:45:21.857 [2024-07-25 11:27:28.951125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:45:22.425 11:27:29 chaining -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:45:22.425 11:27:29 chaining -- common/autotest_common.sh@864 -- # return 0
00:45:22.425 11:27:29 chaining -- bdev/chaining.sh@155 -- # rpc_cmd
00:45:22.425 11:27:29 chaining -- common/autotest_common.sh@561 -- # xtrace_disable
00:45:22.425 11:27:29 chaining -- common/autotest_common.sh@10 -- # set +x
00:45:22.993 malloc0
00:45:22.993 true
00:45:22.993 true
00:45:22.993 [2024-07-25 11:27:29.861785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:45:22.993 [2024-07-25 11:27:29.861854] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:45:22.993 [2024-07-25 11:27:29.861882] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600003f680
00:45:22.993 [2024-07-25 11:27:29.861903] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:45:22.993 [2024-07-25
11:27:29.863489] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:45:22.993 [2024-07-25 11:27:29.863528] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:45:22.993 pt0
00:45:22.993 [2024-07-25 11:27:29.869832] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0"
00:45:22.993 crypto0
00:45:22.994 [2024-07-25 11:27:29.877833] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key1"
00:45:22.994 crypto1
00:45:22.994 11:27:29 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:45:22.994 11:27:29 chaining -- bdev/chaining.sh@166 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:45:22.994 Running I/O for 5 seconds...
00:45:28.267
00:45:28.267 Latency(us)
00:45:28.267 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:45:28.267 Job: crypto1 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096)
00:45:28.267 Verification LBA range: start 0x0 length 0x2000
00:45:28.267 crypto1 : 5.02 8798.08 34.37 0.00 0.00 29000.04 4141.88 18979.23
00:45:28.267 ===================================================================================================================
00:45:28.267 Total : 8798.08 34.37 0.00 0.00 29000.04 4141.88 18979.23
00:45:28.267 0
00:45:28.267 11:27:35 chaining -- bdev/chaining.sh@167 -- # killprocess 3886007
00:45:28.267 11:27:35 chaining -- common/autotest_common.sh@950 -- # '[' -z 3886007 ']'
00:45:28.267 11:27:35 chaining -- common/autotest_common.sh@954 -- # kill -0 3886007
00:45:28.267 11:27:35 chaining -- common/autotest_common.sh@955 -- # uname
00:45:28.267 11:27:35 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:45:28.267 11:27:35 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3886007
00:45:28.267 11:27:35 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:45:28.267 11:27:35 chaining
-- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:45:28.267 11:27:35 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3886007'
killing process with pid 3886007
11:27:35 chaining -- common/autotest_common.sh@969 -- # kill 3886007
00:45:28.267 Received shutdown signal, test time was about 5.000000 seconds
00:45:28.267
00:45:28.267 Latency(us)
00:45:28.267 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:45:28.267 ===================================================================================================================
00:45:28.267 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:45:28.267 11:27:35 chaining -- common/autotest_common.sh@974 -- # wait 3886007
00:45:30.171 11:27:36 chaining -- bdev/chaining.sh@169 -- # trap - SIGINT SIGTERM EXIT
00:45:30.171 11:27:36 chaining -- bdev/chaining.sh@170 -- # killprocess 3886007
00:45:30.171 11:27:36 chaining -- common/autotest_common.sh@950 -- # '[' -z 3886007 ']'
00:45:30.171 11:27:36 chaining -- common/autotest_common.sh@954 -- # kill -0 3886007
00:45:30.171 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3886007) - No such process
00:45:30.171 11:27:36 chaining -- common/autotest_common.sh@977 -- # echo 'Process with pid 3886007 is not found'
00:45:30.171 Process with pid 3886007 is not found
00:45:30.171 11:27:36 chaining -- bdev/chaining.sh@171 -- # wait 3886007
00:45:30.171 11:27:36 chaining -- bdev/chaining.sh@175 -- # nvmftestinit
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@448 -- # prepare_net_devs
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@410 -- # local -g is_hw=no
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@412 -- # remove_spdk_ns
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:45:30.171 11:27:36 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:45:30.171 11:27:36 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]]
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@285 -- # xtrace_disable
00:45:30.171 11:27:36 chaining -- common/autotest_common.sh@10 -- # set +x
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@291 -- # pci_devs=()
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@291 -- # local -a pci_devs
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@292 -- # pci_net_devs=()
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@293 -- # pci_drivers=()
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@293 -- # local -A pci_drivers
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@295 -- # net_devs=()
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@295 -- # local -ga net_devs
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@296 -- # e810=()
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@296 -- # local -ga e810
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@297 -- # x722=()
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@297 -- # local -ga x722
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@298 -- # mlx=()
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@298 -- # local -ga mlx
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:45:30.171 11:27:36
chaining -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]]
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@329 -- # [[ '' == e810 ]]
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@331 -- # [[ '' == x722 ]]
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.0 (0x8086 - 0x159b)'
00:45:30.171 Found 0000:20:00.0 (0x8086 - 0x159b)
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@341 -- # echo 'Found 0000:20:00.1 (0x8086 - 0x159b)'
00:45:30.171 Found 0000:20:00.1 (0x8086 - 0x159b)
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@372 -- # [[ '' == e810 ]]
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@390 -- # [[ up == up ]]
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.0: cvl_0_0'
00:45:30.171 Found net devices under 0000:20:00.0: cvl_0_0
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@390 -- # [[ up == up ]]
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:20:00.1: cvl_0_1'
00:45:30.171 Found net devices under 0000:20:00.1: cvl_0_1
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@414 -- # is_hw=yes
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:45:30.171 11:27:36 chaining -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:45:30.172 11:27:37 chaining -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:45:30.172 11:27:37 chaining -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:45:30.172 11:27:37 chaining -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:45:30.172 11:27:37 chaining -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:45:30.172 11:27:37 chaining -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:45:30.172 11:27:37 chaining -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:45:30.172 11:27:37 chaining -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:45:30.172 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:45:30.172 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms
00:45:30.172
00:45:30.172 --- 10.0.0.2 ping statistics ---
00:45:30.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:45:30.172 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms
00:45:30.172 11:27:37 chaining -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:45:30.172 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:45:30.172 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms
00:45:30.172
00:45:30.172 --- 10.0.0.1 ping statistics ---
00:45:30.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:45:30.172 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms
00:45:30.172 11:27:37 chaining -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:45:30.172 11:27:37 chaining -- nvmf/common.sh@422 -- # return 0
00:45:30.172 11:27:37 chaining -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:45:30.172 11:27:37 chaining -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:45:30.172 11:27:37 chaining -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:45:30.172 11:27:37 chaining -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:45:30.172 11:27:37 chaining -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:45:30.172 11:27:37 chaining -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:45:30.172 11:27:37 chaining -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:45:30.172 11:27:37 chaining -- bdev/chaining.sh@176 -- # nvmfappstart -m 0x2
00:45:30.172 11:27:37 chaining -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:45:30.172 11:27:37 chaining -- common/autotest_common.sh@724 -- # xtrace_disable
00:45:30.172 11:27:37 chaining -- common/autotest_common.sh@10 -- # set +x
00:45:30.172 11:27:37 chaining -- nvmf/common.sh@481 -- # nvmfpid=3887460
00:45:30.172 11:27:37 chaining -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:45:30.172 11:27:37 chaining -- nvmf/common.sh@482 -- # waitforlisten 3887460
00:45:30.172 11:27:37 chaining -- common/autotest_common.sh@831 -- # '[' -z 3887460 ']'
00:45:30.172 11:27:37 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:45:30.172 11:27:37 chaining -- common/autotest_common.sh@836 -- # local max_retries=100
00:45:30.172 11:27:37
chaining -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
11:27:37 chaining -- common/autotest_common.sh@840 -- # xtrace_disable
00:45:30.172 11:27:37 chaining -- common/autotest_common.sh@10 -- # set +x
00:45:30.431 [2024-07-25 11:27:37.354079] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:45:30.431 [2024-07-25 11:27:37.354203] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:45:30.691 [2024-07-25 11:27:37.580109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:45:30.950 [2024-07-25 11:27:37.856683] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:45:30.950 [2024-07-25 11:27:37.856733] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:45:30.950 [2024-07-25 11:27:37.856753] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:45:30.950 [2024-07-25 11:27:37.856768] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:45:30.950 [2024-07-25 11:27:37.856784] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:45:30.950 [2024-07-25 11:27:37.856825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:45:31.549 11:27:38 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:31.549 11:27:38 chaining -- common/autotest_common.sh@864 -- # return 0 00:45:31.549 11:27:38 chaining -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:45:31.549 11:27:38 chaining -- common/autotest_common.sh@730 -- # xtrace_disable 00:45:31.549 11:27:38 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:31.549 11:27:38 chaining -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:45:31.549 11:27:38 chaining -- bdev/chaining.sh@178 -- # rpc_cmd 00:45:31.549 11:27:38 chaining -- common/autotest_common.sh@561 -- # xtrace_disable 00:45:31.549 11:27:38 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:31.549 malloc0 00:45:31.549 [2024-07-25 11:27:38.483107] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:45:31.549 [2024-07-25 11:27:38.499350] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:31.549 11:27:38 chaining -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:45:31.549 11:27:38 chaining -- bdev/chaining.sh@186 -- # trap 'bperfcleanup || :; nvmftestfini || :; exit 1' SIGINT SIGTERM EXIT 00:45:31.549 11:27:38 chaining -- bdev/chaining.sh@189 -- # bperfpid=3887641 00:45:31.549 11:27:38 chaining -- bdev/chaining.sh@187 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bperf.sock -t 5 -w verify -o 4096 -q 256 --wait-for-rpc -z 00:45:31.549 11:27:38 chaining -- bdev/chaining.sh@191 -- # waitforlisten 3887641 /var/tmp/bperf.sock 00:45:31.549 11:27:38 chaining -- common/autotest_common.sh@831 -- # '[' -z 3887641 ']' 00:45:31.549 11:27:38 chaining -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:45:31.549 11:27:38 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:31.549 11:27:38 chaining -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:45:31.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:45:31.549 11:27:38 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:31.549 11:27:38 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:31.549 [2024-07-25 11:27:38.613550] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:45:31.549 [2024-07-25 11:27:38.613672] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3887641 ] 00:45:31.808 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.808 EAL: Requested device 0000:3d:01.0 cannot be used 00:45:31.808 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.808 EAL: Requested device 0000:3d:01.1 cannot be used 00:45:31.808 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.808 EAL: Requested device 0000:3d:01.2 cannot be used 00:45:31.808 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.808 EAL: Requested device 0000:3d:01.3 cannot be used 00:45:31.808 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.808 EAL: Requested device 0000:3d:01.4 cannot be used 00:45:31.808 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.808 EAL: Requested device 0000:3d:01.5 cannot be used 00:45:31.808 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.808 EAL: Requested device 0000:3d:01.6 cannot be used 
00:45:31.808 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.808 EAL: Requested device 0000:3d:01.7 cannot be used 00:45:31.808 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.808 EAL: Requested device 0000:3d:02.0 cannot be used 00:45:31.808 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.808 EAL: Requested device 0000:3d:02.1 cannot be used 00:45:31.808 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.808 EAL: Requested device 0000:3d:02.2 cannot be used 00:45:31.808 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.808 EAL: Requested device 0000:3d:02.3 cannot be used 00:45:31.808 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.808 EAL: Requested device 0000:3d:02.4 cannot be used 00:45:31.808 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.808 EAL: Requested device 0000:3d:02.5 cannot be used 00:45:31.808 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.808 EAL: Requested device 0000:3d:02.6 cannot be used 00:45:31.808 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.808 EAL: Requested device 0000:3d:02.7 cannot be used 00:45:31.808 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.808 EAL: Requested device 0000:3f:01.0 cannot be used 00:45:31.808 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.808 EAL: Requested device 0000:3f:01.1 cannot be used 00:45:31.808 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.808 EAL: Requested device 0000:3f:01.2 cannot be used 00:45:31.808 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.808 EAL: Requested device 0000:3f:01.3 cannot be used 00:45:31.808 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.808 EAL: Requested device 0000:3f:01.4 cannot be used 00:45:31.808 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.808 EAL: Requested device 0000:3f:01.5 cannot be used 00:45:31.808 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.808 EAL: Requested device 0000:3f:01.6 cannot be used 00:45:31.808 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.808 EAL: Requested device 0000:3f:01.7 cannot be used 00:45:31.808 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.808 EAL: Requested device 0000:3f:02.0 cannot be used 00:45:31.808 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.808 EAL: Requested device 0000:3f:02.1 cannot be used 00:45:31.808 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.808 EAL: Requested device 0000:3f:02.2 cannot be used 00:45:31.808 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.808 EAL: Requested device 0000:3f:02.3 cannot be used 00:45:31.808 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.808 EAL: Requested device 0000:3f:02.4 cannot be used 00:45:31.808 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.808 EAL: Requested device 0000:3f:02.5 cannot be used 00:45:31.808 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.808 EAL: Requested device 0000:3f:02.6 cannot be used 00:45:31.808 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:31.808 EAL: Requested device 0000:3f:02.7 cannot be used 00:45:31.808 [2024-07-25 11:27:38.839622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:32.067 [2024-07-25 11:27:39.121364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:32.634 11:27:39 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:32.634 11:27:39 chaining -- common/autotest_common.sh@864 -- # return 0 00:45:32.634 11:27:39 chaining -- bdev/chaining.sh@192 -- # rpc_bperf 00:45:32.634 11:27:39 
chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 00:45:33.201 [2024-07-25 11:27:40.243626] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:45:33.201 nvme0n1 00:45:33.201 true 00:45:33.201 crypto0 00:45:33.201 11:27:40 chaining -- bdev/chaining.sh@201 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:45:33.459 Running I/O for 5 seconds... 00:45:38.726 00:45:38.726 Latency(us) 00:45:38.726 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:38.726 Job: crypto0 (Core Mask 0x1, workload: verify, depth: 256, IO size: 4096) 00:45:38.726 Verification LBA range: start 0x0 length 0x2000 00:45:38.726 crypto0 : 5.02 8171.61 31.92 0.00 0.00 31221.81 3879.73 26214.40 00:45:38.726 =================================================================================================================== 00:45:38.726 Total : 8171.61 31.92 0.00 0.00 31221.81 3879.73 26214.40 00:45:38.726 0 00:45:38.726 11:27:45 chaining -- bdev/chaining.sh@205 -- # get_stat_bperf sequence_executed 00:45:38.726 11:27:45 chaining -- bdev/chaining.sh@48 -- # get_stat sequence_executed '' rpc_bperf 00:45:38.726 11:27:45 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:38.726 11:27:45 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:45:38.726 11:27:45 chaining -- bdev/chaining.sh@39 -- # opcode= 00:45:38.726 11:27:45 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:45:38.726 11:27:45 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:45:38.726 11:27:45 chaining -- bdev/chaining.sh@41 -- # rpc_bperf accel_get_stats 00:45:38.726 11:27:45 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:45:38.726 11:27:45 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:45:38.726 
11:27:45 chaining -- bdev/chaining.sh@205 -- # sequence=82076 00:45:38.726 11:27:45 chaining -- bdev/chaining.sh@206 -- # get_stat_bperf executed encrypt 00:45:38.726 11:27:45 chaining -- bdev/chaining.sh@48 -- # get_stat executed encrypt rpc_bperf 00:45:38.726 11:27:45 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:38.726 11:27:45 chaining -- bdev/chaining.sh@39 -- # event=executed 00:45:38.726 11:27:45 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:45:38.726 11:27:45 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:45:38.726 11:27:45 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:45:38.726 11:27:45 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:45:38.726 11:27:45 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:45:38.726 11:27:45 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:45:38.984 11:27:45 chaining -- bdev/chaining.sh@206 -- # encrypt=41038 00:45:38.984 11:27:45 chaining -- bdev/chaining.sh@207 -- # get_stat_bperf executed decrypt 00:45:38.984 11:27:45 chaining -- bdev/chaining.sh@48 -- # get_stat executed decrypt rpc_bperf 00:45:38.984 11:27:45 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:38.984 11:27:45 chaining -- bdev/chaining.sh@39 -- # event=executed 00:45:38.984 11:27:45 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:45:38.984 11:27:45 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:45:38.984 11:27:45 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:45:38.984 11:27:45 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:45:38.984 11:27:45 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:45:38.984 11:27:45 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
accel_get_stats 00:45:39.243 11:27:46 chaining -- bdev/chaining.sh@207 -- # decrypt=41038 00:45:39.243 11:27:46 chaining -- bdev/chaining.sh@208 -- # get_stat_bperf executed crc32c 00:45:39.243 11:27:46 chaining -- bdev/chaining.sh@48 -- # get_stat executed crc32c rpc_bperf 00:45:39.243 11:27:46 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:39.243 11:27:46 chaining -- bdev/chaining.sh@39 -- # event=executed 00:45:39.243 11:27:46 chaining -- bdev/chaining.sh@39 -- # opcode=crc32c 00:45:39.243 11:27:46 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:45:39.243 11:27:46 chaining -- bdev/chaining.sh@40 -- # [[ -z crc32c ]] 00:45:39.243 11:27:46 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:45:39.243 11:27:46 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:45:39.243 11:27:46 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "crc32c").executed' 00:45:39.502 11:27:46 chaining -- bdev/chaining.sh@208 -- # crc32c=82076 00:45:39.502 11:27:46 chaining -- bdev/chaining.sh@210 -- # (( sequence > 0 )) 00:45:39.502 11:27:46 chaining -- bdev/chaining.sh@211 -- # (( encrypt + decrypt == sequence )) 00:45:39.502 11:27:46 chaining -- bdev/chaining.sh@212 -- # (( encrypt + decrypt == crc32c )) 00:45:39.502 11:27:46 chaining -- bdev/chaining.sh@214 -- # killprocess 3887641 00:45:39.502 11:27:46 chaining -- common/autotest_common.sh@950 -- # '[' -z 3887641 ']' 00:45:39.502 11:27:46 chaining -- common/autotest_common.sh@954 -- # kill -0 3887641 00:45:39.502 11:27:46 chaining -- common/autotest_common.sh@955 -- # uname 00:45:39.502 11:27:46 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:39.502 11:27:46 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3887641 00:45:39.502 11:27:46 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:45:39.502 11:27:46 
chaining -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:45:39.502 11:27:46 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3887641' 00:45:39.502 killing process with pid 3887641 00:45:39.502 11:27:46 chaining -- common/autotest_common.sh@969 -- # kill 3887641 00:45:39.502 Received shutdown signal, test time was about 5.000000 seconds 00:45:39.502 00:45:39.502 Latency(us) 00:45:39.502 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:39.502 =================================================================================================================== 00:45:39.502 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:39.502 11:27:46 chaining -- common/autotest_common.sh@974 -- # wait 3887641 00:45:41.404 11:27:48 chaining -- bdev/chaining.sh@219 -- # bperfpid=3889231 00:45:41.404 11:27:48 chaining -- bdev/chaining.sh@217 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bperf.sock -t 5 -w verify -o 65536 -q 32 --wait-for-rpc -z 00:45:41.404 11:27:48 chaining -- bdev/chaining.sh@221 -- # waitforlisten 3889231 /var/tmp/bperf.sock 00:45:41.404 11:27:48 chaining -- common/autotest_common.sh@831 -- # '[' -z 3889231 ']' 00:45:41.404 11:27:48 chaining -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:45:41.404 11:27:48 chaining -- common/autotest_common.sh@836 -- # local max_retries=100 00:45:41.404 11:27:48 chaining -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:45:41.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:45:41.404 11:27:48 chaining -- common/autotest_common.sh@840 -- # xtrace_disable 00:45:41.404 11:27:48 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:41.404 [2024-07-25 11:27:48.270059] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:45:41.404 [2024-07-25 11:27:48.270204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3889231 ] 00:45:41.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:41.404 EAL: Requested device 0000:3d:01.0 cannot be used 00:45:41.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:41.404 EAL: Requested device 0000:3d:01.1 cannot be used 00:45:41.404 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:41.404 EAL: Requested device 0000:3d:01.2 cannot be used 00:45:41.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:41.405 EAL: Requested device 0000:3d:01.3 cannot be used 00:45:41.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:41.405 EAL: Requested device 0000:3d:01.4 cannot be used 00:45:41.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:41.405 EAL: Requested device 0000:3d:01.5 cannot be used 00:45:41.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:41.405 EAL: Requested device 0000:3d:01.6 cannot be used 00:45:41.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:41.405 EAL: Requested device 0000:3d:01.7 cannot be used 00:45:41.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:41.405 EAL: Requested device 0000:3d:02.0 cannot be used 00:45:41.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:41.405 EAL: Requested device 0000:3d:02.1 cannot be used 00:45:41.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:41.405 EAL: Requested device 0000:3d:02.2 cannot be used 00:45:41.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:41.405 EAL: Requested device 0000:3d:02.3 cannot be used 
00:45:41.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:41.405 EAL: Requested device 0000:3d:02.4 cannot be used 00:45:41.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:41.405 EAL: Requested device 0000:3d:02.5 cannot be used 00:45:41.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:41.405 EAL: Requested device 0000:3d:02.6 cannot be used 00:45:41.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:41.405 EAL: Requested device 0000:3d:02.7 cannot be used 00:45:41.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:41.405 EAL: Requested device 0000:3f:01.0 cannot be used 00:45:41.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:41.405 EAL: Requested device 0000:3f:01.1 cannot be used 00:45:41.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:41.405 EAL: Requested device 0000:3f:01.2 cannot be used 00:45:41.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:41.405 EAL: Requested device 0000:3f:01.3 cannot be used 00:45:41.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:41.405 EAL: Requested device 0000:3f:01.4 cannot be used 00:45:41.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:41.405 EAL: Requested device 0000:3f:01.5 cannot be used 00:45:41.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:41.405 EAL: Requested device 0000:3f:01.6 cannot be used 00:45:41.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:41.405 EAL: Requested device 0000:3f:01.7 cannot be used 00:45:41.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:41.405 EAL: Requested device 0000:3f:02.0 cannot be used 00:45:41.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:41.405 EAL: Requested device 0000:3f:02.1 cannot be used 00:45:41.405 
qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:41.405 EAL: Requested device 0000:3f:02.2 cannot be used 00:45:41.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:41.405 EAL: Requested device 0000:3f:02.3 cannot be used 00:45:41.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:41.405 EAL: Requested device 0000:3f:02.4 cannot be used 00:45:41.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:41.405 EAL: Requested device 0000:3f:02.5 cannot be used 00:45:41.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:41.405 EAL: Requested device 0000:3f:02.6 cannot be used 00:45:41.405 qat_pci_device_allocate(): Reached maximum number of QAT devices 00:45:41.405 EAL: Requested device 0000:3f:02.7 cannot be used 00:45:41.405 [2024-07-25 11:27:48.499227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:41.664 [2024-07-25 11:27:48.781397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:45:42.231 11:27:49 chaining -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:45:42.231 11:27:49 chaining -- common/autotest_common.sh@864 -- # return 0 00:45:42.231 11:27:49 chaining -- bdev/chaining.sh@222 -- # rpc_bperf 00:45:42.231 11:27:49 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 00:45:42.798 [2024-07-25 11:27:49.889984] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "key0" 00:45:42.798 nvme0n1 00:45:42.798 true 00:45:42.798 crypto0 00:45:43.056 11:27:49 chaining -- bdev/chaining.sh@231 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:45:43.056 Running I/O for 5 seconds... 
00:45:48.323 00:45:48.323 Latency(us) 00:45:48.323 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:48.323 Job: crypto0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:45:48.323 Verification LBA range: start 0x0 length 0x200 00:45:48.323 crypto0 : 5.01 1670.46 104.40 0.00 0.00 18773.38 845.41 20237.52 00:45:48.323 =================================================================================================================== 00:45:48.323 Total : 1670.46 104.40 0.00 0.00 18773.38 845.41 20237.52 00:45:48.323 0 00:45:48.323 11:27:55 chaining -- bdev/chaining.sh@233 -- # get_stat_bperf sequence_executed 00:45:48.323 11:27:55 chaining -- bdev/chaining.sh@48 -- # get_stat sequence_executed '' rpc_bperf 00:45:48.323 11:27:55 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:48.323 11:27:55 chaining -- bdev/chaining.sh@39 -- # event=sequence_executed 00:45:48.323 11:27:55 chaining -- bdev/chaining.sh@39 -- # opcode= 00:45:48.323 11:27:55 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:45:48.323 11:27:55 chaining -- bdev/chaining.sh@40 -- # [[ -z '' ]] 00:45:48.323 11:27:55 chaining -- bdev/chaining.sh@41 -- # rpc_bperf accel_get_stats 00:45:48.323 11:27:55 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:45:48.323 11:27:55 chaining -- bdev/chaining.sh@41 -- # jq -r .sequence_executed 00:45:48.323 11:27:55 chaining -- bdev/chaining.sh@233 -- # sequence=16726 00:45:48.323 11:27:55 chaining -- bdev/chaining.sh@234 -- # get_stat_bperf executed encrypt 00:45:48.323 11:27:55 chaining -- bdev/chaining.sh@48 -- # get_stat executed encrypt rpc_bperf 00:45:48.323 11:27:55 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:48.323 11:27:55 chaining -- bdev/chaining.sh@39 -- # event=executed 00:45:48.323 11:27:55 chaining -- bdev/chaining.sh@39 -- # opcode=encrypt 00:45:48.323 11:27:55 chaining -- 
bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:45:48.323 11:27:55 chaining -- bdev/chaining.sh@40 -- # [[ -z encrypt ]] 00:45:48.323 11:27:55 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:45:48.324 11:27:55 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:45:48.324 11:27:55 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "encrypt").executed' 00:45:48.581 11:27:55 chaining -- bdev/chaining.sh@234 -- # encrypt=8363 00:45:48.581 11:27:55 chaining -- bdev/chaining.sh@235 -- # get_stat_bperf executed decrypt 00:45:48.581 11:27:55 chaining -- bdev/chaining.sh@48 -- # get_stat executed decrypt rpc_bperf 00:45:48.581 11:27:55 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:48.581 11:27:55 chaining -- bdev/chaining.sh@39 -- # event=executed 00:45:48.581 11:27:55 chaining -- bdev/chaining.sh@39 -- # opcode=decrypt 00:45:48.581 11:27:55 chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:45:48.581 11:27:55 chaining -- bdev/chaining.sh@40 -- # [[ -z decrypt ]] 00:45:48.581 11:27:55 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:45:48.581 11:27:55 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "decrypt").executed' 00:45:48.581 11:27:55 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:45:48.839 11:27:55 chaining -- bdev/chaining.sh@235 -- # decrypt=8363 00:45:48.839 11:27:55 chaining -- bdev/chaining.sh@236 -- # get_stat_bperf executed crc32c 00:45:48.839 11:27:55 chaining -- bdev/chaining.sh@48 -- # get_stat executed crc32c rpc_bperf 00:45:48.839 11:27:55 chaining -- bdev/chaining.sh@37 -- # local event opcode rpc 00:45:48.839 11:27:55 chaining -- bdev/chaining.sh@39 -- # event=executed 00:45:48.839 11:27:55 chaining -- bdev/chaining.sh@39 -- # opcode=crc32c 00:45:48.839 11:27:55 
chaining -- bdev/chaining.sh@39 -- # rpc=rpc_bperf 00:45:48.839 11:27:55 chaining -- bdev/chaining.sh@40 -- # [[ -z crc32c ]] 00:45:48.839 11:27:55 chaining -- bdev/chaining.sh@43 -- # rpc_bperf accel_get_stats 00:45:48.839 11:27:55 chaining -- bdev/chaining.sh@44 -- # jq -r '.operations[] | select(.opcode == "crc32c").executed' 00:45:48.839 11:27:55 chaining -- bdev/chaining.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:45:49.097 11:27:55 chaining -- bdev/chaining.sh@236 -- # crc32c=16726 00:45:49.097 11:27:55 chaining -- bdev/chaining.sh@238 -- # (( sequence > 0 )) 00:45:49.097 11:27:55 chaining -- bdev/chaining.sh@239 -- # (( encrypt + decrypt == sequence )) 00:45:49.097 11:27:56 chaining -- bdev/chaining.sh@240 -- # (( encrypt + decrypt == crc32c )) 00:45:49.097 11:27:56 chaining -- bdev/chaining.sh@242 -- # killprocess 3889231 00:45:49.097 11:27:56 chaining -- common/autotest_common.sh@950 -- # '[' -z 3889231 ']' 00:45:49.097 11:27:56 chaining -- common/autotest_common.sh@954 -- # kill -0 3889231 00:45:49.097 11:27:56 chaining -- common/autotest_common.sh@955 -- # uname 00:45:49.097 11:27:56 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:49.097 11:27:56 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3889231 00:45:49.097 11:27:56 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:45:49.097 11:27:56 chaining -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:45:49.097 11:27:56 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3889231' 00:45:49.097 killing process with pid 3889231 00:45:49.097 11:27:56 chaining -- common/autotest_common.sh@969 -- # kill 3889231 00:45:49.097 Received shutdown signal, test time was about 5.000000 seconds 00:45:49.097 00:45:49.097 Latency(us) 00:45:49.097 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:45:49.097 
=================================================================================================================== 00:45:49.097 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:45:49.097 11:27:56 chaining -- common/autotest_common.sh@974 -- # wait 3889231 00:45:50.995 11:27:57 chaining -- bdev/chaining.sh@243 -- # nvmftestfini 00:45:50.995 11:27:57 chaining -- nvmf/common.sh@488 -- # nvmfcleanup 00:45:50.995 11:27:57 chaining -- nvmf/common.sh@117 -- # sync 00:45:50.995 11:27:57 chaining -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:45:50.995 11:27:57 chaining -- nvmf/common.sh@120 -- # set +e 00:45:50.995 11:27:57 chaining -- nvmf/common.sh@121 -- # for i in {1..20} 00:45:50.995 11:27:57 chaining -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:45:50.995 rmmod nvme_tcp 00:45:50.995 rmmod nvme_fabrics 00:45:50.995 rmmod nvme_keyring 00:45:50.995 11:27:57 chaining -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:45:50.995 11:27:57 chaining -- nvmf/common.sh@124 -- # set -e 00:45:50.995 11:27:57 chaining -- nvmf/common.sh@125 -- # return 0 00:45:50.995 11:27:57 chaining -- nvmf/common.sh@489 -- # '[' -n 3887460 ']' 00:45:50.995 11:27:57 chaining -- nvmf/common.sh@490 -- # killprocess 3887460 00:45:50.995 11:27:57 chaining -- common/autotest_common.sh@950 -- # '[' -z 3887460 ']' 00:45:50.995 11:27:57 chaining -- common/autotest_common.sh@954 -- # kill -0 3887460 00:45:50.995 11:27:57 chaining -- common/autotest_common.sh@955 -- # uname 00:45:50.995 11:27:57 chaining -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:45:50.995 11:27:57 chaining -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3887460 00:45:50.995 11:27:57 chaining -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:45:50.995 11:27:57 chaining -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:45:50.995 11:27:57 chaining -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3887460' 00:45:50.995 killing process with pid 
3887460 00:45:50.995 11:27:57 chaining -- common/autotest_common.sh@969 -- # kill 3887460 00:45:50.995 11:27:57 chaining -- common/autotest_common.sh@974 -- # wait 3887460 00:45:52.896 11:27:59 chaining -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:45:52.896 11:27:59 chaining -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:45:52.896 11:27:59 chaining -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:45:52.896 11:27:59 chaining -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:45:52.896 11:27:59 chaining -- nvmf/common.sh@278 -- # remove_spdk_ns 00:45:52.896 11:27:59 chaining -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:52.896 11:27:59 chaining -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:52.896 11:27:59 chaining -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:54.835 11:28:01 chaining -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:45:54.835 11:28:01 chaining -- bdev/chaining.sh@245 -- # trap - SIGINT SIGTERM EXIT 00:45:54.835 00:45:54.835 real 1m13.187s 00:45:54.835 user 1m36.583s 00:45:54.835 sys 0m14.886s 00:45:54.835 11:28:01 chaining -- common/autotest_common.sh@1126 -- # xtrace_disable 00:45:54.835 11:28:01 chaining -- common/autotest_common.sh@10 -- # set +x 00:45:54.835 ************************************ 00:45:54.835 END TEST chaining 00:45:54.835 ************************************ 00:45:54.835 11:28:01 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:45:54.835 11:28:01 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:45:54.835 11:28:01 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:45:54.835 11:28:01 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:45:54.835 11:28:01 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:45:54.835 11:28:01 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:45:54.835 11:28:01 -- common/autotest_common.sh@724 -- # xtrace_disable 00:45:54.835 11:28:01 -- common/autotest_common.sh@10 -- # set +x 00:45:54.835 11:28:01 
-- spdk/autotest.sh@387 -- # autotest_cleanup 00:45:54.835 11:28:01 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:45:54.835 11:28:01 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:45:54.835 11:28:01 -- common/autotest_common.sh@10 -- # set +x 00:46:01.398 INFO: APP EXITING 00:46:01.398 INFO: killing all VMs 00:46:01.398 INFO: killing vhost app 00:46:01.398 INFO: EXIT DONE 00:46:04.685 Waiting for block devices as requested 00:46:04.685 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:46:04.685 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:46:04.685 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:46:04.685 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:46:04.685 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:46:04.685 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:46:04.943 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:46:04.943 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:46:04.943 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:46:04.943 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:46:05.202 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:46:05.202 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:46:05.202 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:46:05.461 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:46:05.461 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:46:05.461 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:46:05.719 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:46:10.987 Cleaning 00:46:10.987 Removing: /var/run/dpdk/spdk0/config 00:46:10.987 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:46:10.987 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:46:10.987 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:46:10.987 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:46:10.987 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:46:10.987 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:46:10.987 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:46:10.987 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:46:10.987 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:46:10.987 Removing: /var/run/dpdk/spdk0/hugepage_info 00:46:10.987 Removing: /dev/shm/nvmf_trace.0 00:46:10.987 Removing: /dev/shm/spdk_tgt_trace.pid3466128 00:46:10.987 Removing: /var/run/dpdk/spdk0 00:46:10.987 Removing: /var/run/dpdk/spdk_pid3457979 00:46:10.987 Removing: /var/run/dpdk/spdk_pid3462818 00:46:10.987 Removing: /var/run/dpdk/spdk_pid3466128 00:46:10.987 Removing: /var/run/dpdk/spdk_pid3467366 00:46:10.987 Removing: /var/run/dpdk/spdk_pid3468833 00:46:10.987 Removing: /var/run/dpdk/spdk_pid3469775 00:46:10.987 Removing: /var/run/dpdk/spdk_pid3471402 00:46:10.987 Removing: /var/run/dpdk/spdk_pid3471681 00:46:10.987 Removing: /var/run/dpdk/spdk_pid3472583 00:46:10.987 Removing: /var/run/dpdk/spdk_pid3476964 00:46:10.987 Removing: /var/run/dpdk/spdk_pid3479731 00:46:10.987 Removing: /var/run/dpdk/spdk_pid3480417 00:46:10.987 Removing: /var/run/dpdk/spdk_pid3481421 00:46:10.987 Removing: /var/run/dpdk/spdk_pid3482529 00:46:10.987 Removing: /var/run/dpdk/spdk_pid3483393 00:46:10.987 Removing: /var/run/dpdk/spdk_pid3483929 00:46:10.987 Removing: /var/run/dpdk/spdk_pid3484219 00:46:10.987 Removing: /var/run/dpdk/spdk_pid3484688 00:46:10.987 Removing: /var/run/dpdk/spdk_pid3485836 00:46:10.987 Removing: /var/run/dpdk/spdk_pid3489575 00:46:10.987 Removing: /var/run/dpdk/spdk_pid3490052 00:46:10.987 Removing: /var/run/dpdk/spdk_pid3490442 00:46:10.987 Removing: /var/run/dpdk/spdk_pid3491605 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3492071 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3492943 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3493510 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3494056 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3494606 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3495210 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3495860 00:46:10.988 Removing: 
/var/run/dpdk/spdk_pid3496496 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3497044 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3497590 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3498148 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3498696 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3499243 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3499835 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3500446 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3501098 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3501681 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3502229 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3502782 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3503337 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3503885 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3504435 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3505499 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3506218 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3506854 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3507464 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3508211 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3508859 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3509573 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3510150 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3510722 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3511845 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3513011 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3514346 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3515161 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3521478 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3524452 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3527575 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3529303 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3531518 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3532596 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3532779 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3533057 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3538312 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3539248 
00:46:10.988 Removing: /var/run/dpdk/spdk_pid3540872 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3541650 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3554421 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3556746 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3558721 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3564630 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3566883 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3568305 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3574308 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3577356 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3578771 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3591966 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3595010 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3596571 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3609506 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3612424 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3614082 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3627545 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3631954 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3633415 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3647449 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3650742 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3652429 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3667118 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3670589 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3672270 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3686342 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3691502 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3693607 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3695285 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3699242 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3705996 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3709403 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3715362 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3719771 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3727199 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3730880 00:46:10.988 Removing: 
/var/run/dpdk/spdk_pid3739250 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3742209 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3750278 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3753240 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3761872 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3764921 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3770323 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3771133 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3772115 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3772967 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3773986 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3775229 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3776434 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3777152 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3779787 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3782385 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3785040 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3787245 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3796536 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3802294 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3804760 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3807162 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3809567 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3811437 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3820391 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3826101 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3827607 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3828763 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3832443 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3835464 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3838769 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3840630 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3842741 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3844038 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3844152 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3844426 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3845486 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3846027 
00:46:10.988 Removing: /var/run/dpdk/spdk_pid3848111 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3850834 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3853081 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3854411 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3855748 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3856482 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3856594 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3856854 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3858237 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3859830 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3860902 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3864592 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3867585 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3870461 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3872478 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3874592 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3875825 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3875967 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3880568 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3881309 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3882247 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3882804 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3883374 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3884694 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3886007 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3887641 00:46:10.988 Removing: /var/run/dpdk/spdk_pid3889231 00:46:10.988 Clean 00:46:11.247 11:28:18 -- common/autotest_common.sh@1451 -- # return 0 00:46:11.247 11:28:18 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:46:11.247 11:28:18 -- common/autotest_common.sh@730 -- # xtrace_disable 00:46:11.247 11:28:18 -- common/autotest_common.sh@10 -- # set +x 00:46:11.247 11:28:18 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:46:11.247 11:28:18 -- common/autotest_common.sh@730 -- # xtrace_disable 00:46:11.247 11:28:18 -- common/autotest_common.sh@10 -- # set +x 00:46:11.247 11:28:18 -- spdk/autotest.sh@391 -- 
# chmod a+r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/timing.txt 00:46:11.247 11:28:18 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/udev.log ]] 00:46:11.247 11:28:18 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/udev.log 00:46:11.247 11:28:18 -- spdk/autotest.sh@395 -- # hash lcov 00:46:11.247 11:28:18 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:46:11.247 11:28:18 -- spdk/autotest.sh@397 -- # hostname 00:46:11.247 11:28:18 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/crypto-phy-autotest/spdk -t spdk-wfp-19 -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_test.info 00:46:11.505 geninfo: WARNING: invalid characters removed from testname! 00:46:38.052 11:28:44 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:46:41.400 11:28:47 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:46:43.307 11:28:50 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc 
genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:46:45.843 11:28:52 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:46:48.384 11:28:55 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:46:50.920 11:28:57 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_total.info 00:46:53.453 11:29:00 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:46:53.453 11:29:00 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:46:53.453 11:29:00 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:46:53.453 11:29:00 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:53.453 11:29:00 -- scripts/common.sh@517 -- $ 
source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:53.453 11:29:00 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:53.453 11:29:00 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:53.453 11:29:00 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:53.453 11:29:00 -- paths/export.sh@5 -- $ export PATH 00:46:53.453 11:29:00 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:53.453 11:29:00 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:46:53.453 11:29:00 -- common/autobuild_common.sh@447 -- $ date +%s 00:46:53.453 
11:29:00 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721899740.XXXXXX 00:46:53.453 11:29:00 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721899740.xZCAiI 00:46:53.453 11:29:00 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:46:53.453 11:29:00 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:46:53.453 11:29:00 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/' 00:46:53.453 11:29:00 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp' 00:46:53.453 11:29:00 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:46:53.453 11:29:00 -- common/autobuild_common.sh@463 -- $ get_config_params 00:46:53.453 11:29:00 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:46:53.453 11:29:00 -- common/autotest_common.sh@10 -- $ set +x 00:46:53.453 11:29:00 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:46:53.453 11:29:00 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:46:53.453 11:29:00 -- pm/common@17 -- $ local monitor 00:46:53.453 11:29:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:46:53.453 11:29:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:46:53.453 11:29:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:46:53.453 11:29:00 -- pm/common@21 -- $ date +%s 00:46:53.453 11:29:00 -- pm/common@19 -- $ for monitor 
in "${MONITOR_RESOURCES[@]}" 00:46:53.453 11:29:00 -- pm/common@21 -- $ date +%s 00:46:53.453 11:29:00 -- pm/common@25 -- $ sleep 1 00:46:53.453 11:29:00 -- pm/common@21 -- $ date +%s 00:46:53.453 11:29:00 -- pm/common@21 -- $ date +%s 00:46:53.453 11:29:00 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721899740 00:46:53.453 11:29:00 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721899740 00:46:53.453 11:29:00 -- pm/common@21 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721899740 00:46:53.453 11:29:00 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721899740 00:46:53.453 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721899740_collect-vmstat.pm.log 00:46:53.453 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721899740_collect-cpu-load.pm.log 00:46:53.453 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721899740_collect-cpu-temp.pm.log 00:46:53.453 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721899740_collect-bmc-pm.bmc.pm.log 00:46:54.391 11:29:01 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:46:54.391 11:29:01 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:46:54.391 11:29:01 -- spdk/autopackage.sh@11 -- $ cd 
/var/jenkins/workspace/crypto-phy-autotest/spdk 00:46:54.391 11:29:01 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:46:54.391 11:29:01 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:46:54.391 11:29:01 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:46:54.391 11:29:01 -- spdk/autopackage.sh@19 -- $ timing_finish 00:46:54.391 11:29:01 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:46:54.391 11:29:01 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:46:54.391 11:29:01 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/timing.txt 00:46:54.391 11:29:01 -- spdk/autopackage.sh@20 -- $ exit 0 00:46:54.391 11:29:01 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:46:54.391 11:29:01 -- pm/common@29 -- $ signal_monitor_resources TERM 00:46:54.391 11:29:01 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:46:54.391 11:29:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:46:54.391 11:29:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:46:54.391 11:29:01 -- pm/common@44 -- $ pid=3903877 00:46:54.391 11:29:01 -- pm/common@50 -- $ kill -TERM 3903877 00:46:54.391 11:29:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:46:54.391 11:29:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:46:54.391 11:29:01 -- pm/common@44 -- $ pid=3903879 00:46:54.391 11:29:01 -- pm/common@50 -- $ kill -TERM 3903879 00:46:54.391 11:29:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:46:54.392 11:29:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:46:54.392 11:29:01 -- pm/common@44 -- $ pid=3903881 
00:46:54.392 11:29:01 -- pm/common@50 -- $ kill -TERM 3903881 00:46:54.392 11:29:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:46:54.392 11:29:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:46:54.392 11:29:01 -- pm/common@44 -- $ pid=3903897 00:46:54.392 11:29:01 -- pm/common@50 -- $ sudo -E kill -TERM 3903897 00:46:54.392 + [[ -n 3325930 ]] 00:46:54.392 + sudo kill 3325930 00:46:54.401 [Pipeline] } 00:46:54.422 [Pipeline] // stage 00:46:54.429 [Pipeline] } 00:46:54.448 [Pipeline] // timeout 00:46:54.455 [Pipeline] } 00:46:54.472 [Pipeline] // catchError 00:46:54.479 [Pipeline] } 00:46:54.498 [Pipeline] // wrap 00:46:54.505 [Pipeline] } 00:46:54.523 [Pipeline] // catchError 00:46:54.534 [Pipeline] stage 00:46:54.537 [Pipeline] { (Epilogue) 00:46:54.554 [Pipeline] catchError 00:46:54.557 [Pipeline] { 00:46:54.574 [Pipeline] echo 00:46:54.576 Cleanup processes 00:46:54.584 [Pipeline] sh 00:46:54.869 + sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:46:54.869 3903982 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/sdr.cache 00:46:54.869 3904324 sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:46:54.885 [Pipeline] sh 00:46:55.169 ++ sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk 00:46:55.170 ++ grep -v 'sudo pgrep' 00:46:55.170 ++ awk '{print $1}' 00:46:55.170 + sudo kill -9 3903982 00:46:55.182 [Pipeline] sh 00:46:55.468 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:46:55.468 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:47:05.482 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:47:10.754 [Pipeline] sh 00:47:11.036 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:47:11.036 Artifacts sizes are good 00:47:11.050 [Pipeline] archiveArtifacts 
00:47:11.056 Archiving artifacts 00:47:11.240 [Pipeline] sh 00:47:11.524 + sudo chown -R sys_sgci /var/jenkins/workspace/crypto-phy-autotest 00:47:11.538 [Pipeline] cleanWs 00:47:11.548 [WS-CLEANUP] Deleting project workspace... 00:47:11.548 [WS-CLEANUP] Deferred wipeout is used... 00:47:11.555 [WS-CLEANUP] done 00:47:11.556 [Pipeline] } 00:47:11.577 [Pipeline] // catchError 00:47:11.590 [Pipeline] sh 00:47:11.871 + logger -p user.info -t JENKINS-CI 00:47:11.880 [Pipeline] } 00:47:11.897 [Pipeline] // stage 00:47:11.903 [Pipeline] } 00:47:11.922 [Pipeline] // node 00:47:11.928 [Pipeline] End of Pipeline 00:47:11.956 Finished: SUCCESS